Commit 30dff41cc9 by gaotue, 2026-03-23 16:30:33 +01:00 (committed via GitHub)
23 changed files with 14270 additions and 14121 deletions
Contributing
============
.. contents::
    :depth: 3
Thank you!
----------
First off, thank you for considering contributing to beets! It's people like you
who make beets continue to succeed.
These guidelines describe how you can help most effectively. By following them,
you make life easier for the development team and show that you respect the
maintainers' time; in return, the maintainers will help address your issue,
review changes, and finalize pull requests.
Types of Contributions
----------------------
We love to get contributions from our community—you! There are many ways to
contribute, whether you're a programmer or not.
The first thing to do, regardless of how you'd like to contribute to the
project, is to check out our :doc:`Code of Conduct <code_of_conduct>` and to
keep that in mind while interacting with other contributors and users.
Non-Programming
~~~~~~~~~~~~~~~
- Promote beets! Help get the word out by telling your friends, writing a blog
post, or discussing it on a forum you frequent.
- Improve the documentation_. It's incredibly easy to contribute here: just find
  a page you want to modify and hit the “Edit on GitHub” button in the
  upper-right. You can automatically send us a pull request for your changes.
- GUI design. For the time being, beets is a command-line-only affair. But
  that's mostly because we don't have any great ideas for what a good GUI should
  look like. If you have those great ideas, please get in touch.
- Benchmarks. We'd like to have a consistent way of measuring speed improvements
  in beets' tagger and other functionality, as well as a way of comparing beets'
  performance to other tools. You can help by compiling a library of
  freely-licensed music files (preferably with incorrect metadata) for testing
  and measurement.
- Think you have a nice config or cool use-case for beets? We'd love to hear
  about it! Submit a post to our `discussion board
  <https://github.com/beetbox/beets/discussions/categories/show-and-tell>`__
  under the “Show and Tell” category for a chance to get featured in `the docs
  <https://beets.readthedocs.io/en/stable/guides/advanced.html>`__.
- Consider helping out fellow users by `responding to support requests
  <https://github.com/beetbox/beets/discussions/categories/q-a>`__.
Programming
~~~~~~~~~~~
- As a programmer (even if you're just a beginner!), you have a ton of
  opportunities to get your feet wet with beets.
- For developing plugins, or hacking away at beets, there's some good
  information in the `“For Developers” section of the docs
  <https://beets.readthedocs.io/en/stable/dev/>`__.
.. _development-tools:
Development Tools
+++++++++++++++++
In order to develop beets, you will need a few tools installed:
- poetry_ for packaging, virtual environment and dependency management
- poethepoet_ to run tasks, such as linting, formatting, testing
The Python community recommends using pipx_ to install stand-alone command-line
applications such as these. pipx_ installs each application in an isolated
virtual environment, where its dependencies will not interfere with your system
and other CLI tools.
If you do not have pipx_ installed on your system, follow the `pipx installation
instructions <https://pipx.pypa.io/stable/how-to/install-pipx/>`__ or install it
with pip:

.. code-block:: sh

    $ python3 -m pip install --user pipx
Install poetry_ and poethepoet_ using pipx_:

::

    $ pipx install poetry poethepoet
.. admonition:: Check the ``tool.pipx-install`` section in ``pyproject.toml`` to see supported versions

    .. code-block:: toml

        [tool.pipx-install]
        poethepoet = ">=0.26"
        poetry = "<2"
.. _getting-the-source:
Getting the Source
++++++++++++++++++
The easiest way to get started with the latest beets source is to clone the
repository and install ``beets`` in a local virtual environment using poetry_.
This can be done with:
.. code-block:: bash

    $ git clone https://github.com/beetbox/beets.git
    $ cd beets
    $ poetry install
This will install ``beets`` and all development dependencies into its own
virtual environment in your ``$POETRY_CACHE_DIR``. See ``poetry install --help``
for installation options, including installing ``extra`` dependencies for
plugins.
To run something within this virtual environment, prefix the command with
``poetry run``, for example ``poetry run pytest``.
Typing ``poetry run`` before every command can get tedious, though. Instead, you
can activate the virtual environment in your shell with:

::

    $ poetry shell
You should see a ``(beets-py3.10)`` prefix in your shell prompt. Now you can run
commands directly, for example:

::

    (beets-py3.10) $ pytest
Additionally, the poethepoet_ task runner assists us with the most common
operations. Formatting, linting, and testing are defined as ``poe`` tasks in
pyproject.toml_. Run:

::

    $ poe

to see all available tasks. They can be used like this, for example:

.. code-block:: sh

    $ poe lint                  # check code style
    $ poe format                # fix formatting issues
    $ poe test                  # run tests
    # ... fix failing tests
    $ poe test --lf             # re-run failing tests (note the additional pytest option)
    $ poe check-types --pretty  # check types with an extra option for mypy
Code Contribution Ideas
+++++++++++++++++++++++
- We maintain a set of `issues marked as “good first issue”
  <https://github.com/beetbox/beets/labels/good%20first%20issue>`__. These are
  issues that would serve as a good introduction to the codebase. Claim one and
  start exploring!
- Like testing? Our `test coverage
  <https://app.codecov.io/github/beetbox/beets>`__ is somewhat low. You can help
  out by finding low-coverage modules or checking out other `testing-related
  issues <https://github.com/beetbox/beets/labels/testing>`__.
- There are several ways to improve the tests in general (see :ref:`testing`)
  and some places to think about performance optimization (see `Optimization
  <https://github.com/beetbox/beets/wiki/Optimization>`__).
- Not all of our code is up to our coding conventions. In particular, the
  `library API documentation
  <https://beets.readthedocs.io/en/stable/dev/library.html>`__ is currently
  quite sparse. You can help by adding to the docstrings in the code and to the
  documentation pages themselves. beets follows `PEP 257
  <https://peps.python.org/pep-0257/>`__ for docstrings, and in some places we
  also use `ReST autodoc syntax for Sphinx
  <https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html>`__ to,
  for example, refer to a class name.
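For instance, a docstring in this style might look like the following sketch (the function and its parameters are made up for illustration and are not part of beets):

```python
def resolve_duplicates(items, strategy="skip"):
    """Return the items to keep after duplicate resolution.

    Hypothetical example of a PEP 257 docstring using Sphinx autodoc
    cross-reference syntax, e.g. to refer to a class name.

    :param items: candidate :class:`Item` objects to examine.
    :param strategy: one of ``"skip"``, ``"keep"``, or ``"remove"``.
    :return: the filtered list of items.
    """
    if strategy == "skip":
        return list(items)
    return []
```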
Your First Contribution
-----------------------
If this is your first time contributing to an open source project, welcome! If
you are confused at all about how to contribute or what to contribute, take a
look at `this great tutorial <https://makeapullrequest.com/>`__, or stop by our
`discussion board`_ if you have any questions.
We maintain a list of issues reserved for those new to open source, labeled
`first timers only`_. Since the goal of these issues is to get users comfortable
with contributing to an open source project, please do not hesitate to ask any
questions.
.. _first timers only: https://github.com/beetbox/beets/issues?q=is%3Aopen+is%3Aissue+label%3A%22first+timers+only%22
How to Submit Your Work
-----------------------
Do you have a great bug fix, new feature, or documentation expansion you'd like
to contribute? Follow these steps to create a GitHub pull request and your code
will ship in no time.
1. Fork the beets repository and clone it (see above) to create a workspace.
2. Install pre-commit, following the instructions `here
   <https://pre-commit.com/>`_.
3. Make your changes.
4. Add tests. If you've fixed a bug, write a test to ensure that you've actually
   fixed it. If there's a new feature or plugin, please contribute tests that
   show that your code does what it says.
5. Add documentation. If you've added a new command flag, for example, find the
   appropriate page under ``docs/`` where it needs to be listed.
6. Add a changelog entry to ``docs/changelog.rst`` near the top of the document.
7. Run the tests and style checker; see :ref:`testing`.
8. Push to your fork and open a pull request! We'll be in touch shortly.
9. If you add commits to a pull request, please add a comment or re-request a
   review after you push them, since GitHub doesn't automatically notify us when
   commits are added.
Remember, code contributions have four parts: the code, the tests, the
documentation, and the changelog entry. Thank you for contributing!
.. admonition:: Ownership

    If you are the owner of a plugin, please consider reviewing pull requests
    that affect your plugin. If you are not the owner of a plugin, please
    consider becoming one! You can do so by adding an entry to
    ``.github/CODEOWNERS``. This way, you will automatically receive a review
    request for pull requests that adjust the code that you own. If you have any
    questions, please ask on our `discussion board`_.
The Code
--------
The documentation has a section on the `library API
<https://beets.readthedocs.io/en/stable/dev/library.html>`__ that serves as an
introduction to beets' design.
Coding Conventions
------------------
General
~~~~~~~
There are a few coding conventions we use in beets:
- Whenever you access the library database, do so through the provided Library
  methods or via a Transaction object. Never call ``lib.conn.*`` directly. For
  example, do this:

  .. code-block:: python

      with g.lib.transaction() as tx:
          rows = tx.query("SELECT DISTINCT {field} FROM {model._table} ORDER BY {sort_field}")

  To fetch Item objects from the database, use ``lib.items(...)`` and supply a
  query as an argument. Resist the urge to write raw SQL for your query. If you
  must use lower-level queries into the database, do this, for example:

  .. code-block:: python

      with lib.transaction() as tx:
          rows = tx.query("SELECT path FROM items WHERE album_id = ?", (album_id,))

  Transaction objects help control concurrent access to the database and assist
  in debugging conflicting accesses.
- f-strings should be used instead of the ``%`` operator and ``str.format()``
calls.
- Never ``print`` informational messages; use the `logging
  <https://docs.python.org/3/library/logging.html>`__ module instead. In
  particular, we have our own logging shim, so you'll see
  ``from beets import logging`` in most files.
  - The loggers use `str.format
    <https://docs.python.org/3/library/stdtypes.html>`__-style logging instead
    of ``%``-style, so you can type ``log.debug("{}", obj)`` to do your
    formatting.
- Exception handlers must use ``except A as B:`` instead of ``except A, B:``.
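As a rough sketch of how such a str.format-style shim can work (a simplified stand-in for illustration, not beets' actual implementation):

```python
import logging

class FormatMessage:
    """Defer str.format interpolation until the record is actually emitted."""

    def __init__(self, fmt, args):
        self.fmt = fmt
        self.args = args

    def __str__(self):
        return self.fmt.format(*self.args)

class StrFormatLogger(logging.Logger):
    """A logger whose messages use ``{}`` placeholders instead of ``%s``."""

    def _log(self, level, msg, args, **kwargs):
        # Wrap the message so formatting only happens when the record is handled.
        super()._log(level, FormatMessage(msg, args), (), **kwargs)

logging.setLoggerClass(StrFormatLogger)
log = logging.getLogger("beets-demo")
log.debug("importing {} items from {}", 42, "inbox")
```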
Style
~~~~~
We use `ruff <https://docs.astral.sh/ruff/>`__ to format and lint the codebase.
Run ``poe check-format`` and ``poe lint`` to check your code for style and
linting errors. Running ``poe format`` will automatically format your code
according to the specifications required by the project.
Similarly, run ``poe format-docs`` and ``poe lint-docs`` to ensure consistent
documentation formatting and check for any issues.
Editor Settings
~~~~~~~~~~~~~~~
Personally, I work on beets with vim_. Here are some ``.vimrc`` lines that might
help with PEP 8-compliant Python coding:
::

    filetype indent on
    autocmd FileType python setlocal shiftwidth=4 tabstop=4 softtabstop=4 expandtab shiftround autoindent
Consider installing `this alternative Python indentation plugin
<https://github.com/mitsuhiko/vim-python-combined>`__. I also like `neomake
<https://github.com/neomake/neomake>`__ with its flake8 checker.
.. _testing:
Testing
-------
Running the Tests
~~~~~~~~~~~~~~~~~
Use ``poe`` to run tests:

::

    $ poe test [pytest options]
You can disable a hand-selected set of "slow" tests by setting the environment
variable ``SKIP_SLOW_TESTS``, for example:
::

    $ SKIP_SLOW_TESTS=1 poe test
Coverage
++++++++
The ``test`` command does not include coverage, as it slows down testing. In
order to measure it, use the ``test-with-coverage`` task:

::

    $ poe test-with-coverage [pytest options]
You are welcome to explore coverage by opening the HTML report in
``.reports/html/index.html``.
Note that for each covered line the report shows **which tests cover it**
(expand the list on the right-hand side of the affected line).
You can find project coverage status on Codecov_.
Red Flags
+++++++++
The pytest-random_ plugin makes it easy to randomize the order of tests.
``poe test --random`` will occasionally turn up failing tests that reveal
ordering dependencies—which are bad news!
Test Dependencies
+++++++++++++++++
The tests have a few more dependencies than beets itself. (The additional
dependencies consist of testing utilities and dependencies of non-default
plugins exercised by the test suite.) The dependencies are listed under the
``tool.poetry.group.test.dependencies`` section in pyproject.toml_.
Writing Tests
~~~~~~~~~~~~~
Tests are written by adding or modifying files in the test_ folder. Take a look
at test-query_ to get a basic view of how tests are written. Since we are
currently migrating the tests from unittest_ to pytest_, new tests should be
written using pytest_. Contributions migrating existing tests are welcome!
External API requests under test should be mocked with requests-mock_. However,
we still want to know whether external APIs are up and that they return expected
responses; therefore, we test them weekly with our `integration test`_ suite.
In order to add such a test, mark your test with the ``integration_test`` marker:

.. code-block:: python

    @pytest.mark.integration_test
    def test_external_api_call(): ...
This way, the test will be run only in the integration test suite.
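To illustrate the general mocking idea with only the standard library (using ``unittest.mock`` rather than requests-mock; all names here are hypothetical, not beets code):

```python
import json
from unittest import mock

def fetch_release_title(client, release_id):
    """Hypothetical helper that asks an external API for release metadata."""
    raw = client.get(f"https://api.example.org/releases/{release_id}")
    return json.loads(raw)["title"]

def test_fetch_release_title_uses_mocked_response():
    # The mocked client returns a canned payload, so no network is touched.
    client = mock.Mock()
    client.get.return_value = json.dumps({"title": "Pet Sounds"})
    assert fetch_release_title(client, 42) == "Pet Sounds"
    client.get.assert_called_once_with("https://api.example.org/releases/42")
```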
beets also defines custom pytest markers in ``test/conftest.py``:

- ``integration_test``: runs only when ``INTEGRATION_TEST=true`` is set.
- ``on_lyrics_update``: runs only when ``LYRICS_UPDATED=true`` is set.
- ``requires_import("module", force_ci=True)``: runs the test only when the
  module is importable. With the default ``force_ci=True``, this import check is
  bypassed on GitHub Actions for ``beetbox/beets`` so CI still runs the test.
  Set ``force_ci=False`` to allow CI to skip when the module is missing.
.. code-block:: python

    @pytest.mark.integration_test
    def test_external_api_call(): ...

    @pytest.mark.on_lyrics_update
    def test_real_lyrics_backend(): ...

    @pytest.mark.requires_import("langdetect")
    def test_language_detection(): ...

    @pytest.mark.requires_import("librosa", force_ci=False)
    def test_autobpm_command(): ...
.. _codecov: https://app.codecov.io/github/beetbox/beets
.. _discussion board: https://github.com/beetbox/beets/discussions
.. _documentation: https://beets.readthedocs.io/en/stable/
.. _integration test: https://github.com/beetbox/beets/actions?query=workflow%3A%22integration+tests%22
.. _pipx: https://pipx.pypa.io/stable
.. _poethepoet: https://poethepoet.natn.io/index.html
.. _poetry: https://python-poetry.org/docs/
.. _pyproject.toml: https://github.com/beetbox/beets/blob/master/pyproject.toml
.. _pytest: https://docs.pytest.org/en/stable/
.. _pytest-random: https://github.com/klrmn/pytest-random
.. _requests-mock: https://requests-mock.readthedocs.io/en/latest/response.html
.. _test: https://github.com/beetbox/beets/tree/master/test
.. _test-query: https://github.com/beetbox/beets/blob/master/test/test_query.py
.. _unittest: https://docs.python.org/3/library/unittest.html
.. _vim: https://www.vim.org/
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Facilities for automatically determining files' correct metadata."""
from __future__ import annotations
from importlib import import_module
# Parts of external interface.
from beets.util.deprecation import deprecate_for_maintainers, deprecate_imports
from .hooks import AlbumInfo, AlbumMatch, TrackInfo, TrackMatch
from .match import Proposal, Recommendation, tag_album, tag_item
def __getattr__(name: str):
    if name == "current_metadata":
        deprecate_for_maintainers(
            f"'beets.autotag.{name}'", "'beets.util.get_most_common_tags'"
        )
        return import_module("beets.util").get_most_common_tags

    return deprecate_imports(
        __name__, {"Distance": "beets.autotag.distance"}, name
    )


__all__ = [
    "AlbumInfo",
    "AlbumMatch",
    "Proposal",
    "Recommendation",
    "TrackInfo",
    "TrackMatch",
    "tag_album",
    "tag_item",
]
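
The shim above relies on PEP 562's module-level `__getattr__` to warn about and lazily redirect deprecated names. A minimal stdlib-only sketch of the same pattern follows; the moved-name table and target (`math.dist`) are illustrative, not beets APIs:

```python
# Sketch of the PEP 562 module-__getattr__ deprecation pattern used
# above. The moved-name table and its target are illustrative only.
import importlib
import types
import warnings


def make_legacy_module(name: str) -> types.ModuleType:
    """Build a module whose relocated attributes warn, then redirect."""
    mod = types.ModuleType(name)
    moved = {"Distance": ("math", "dist")}  # old attr -> (module, new attr)

    def __getattr__(attr: str):
        if attr in moved:
            target_mod, target_attr = moved[attr]
            warnings.warn(
                f"'{name}.{attr}' moved to '{target_mod}.{target_attr}'",
                DeprecationWarning,
                stacklevel=2,
            )
            return getattr(importlib.import_module(target_mod), target_attr)
        raise AttributeError(f"module {name!r} has no attribute {attr!r}")

    # Since Python 3.7, attribute lookup on a module falls back to a
    # module-level __getattr__ if one is defined (PEP 562).
    mod.__getattr__ = __getattr__
    return mod
```

Accessing `make_legacy_module("legacy").Distance` emits a `DeprecationWarning` and returns the relocated callable, which is exactly the behavior the `deprecate_imports` call above provides for `beets.autotag.Distance`.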

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,386 +1,386 @@
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Matches existing metadata with canonical information to identify
releases and tracks.
"""
from __future__ import annotations
from enum import IntEnum
from typing import TYPE_CHECKING, NamedTuple, TypeVar
import lap
import numpy as np
from beets import config, logging, metadata_plugins
from beets.autotag import AlbumMatch, TrackMatch, hooks
from beets.util import get_most_common_tags
from .distance import VA_ARTISTS, distance, track_distance
from .hooks import Info
if TYPE_CHECKING:
from collections.abc import Iterable, Sequence
from beets.autotag import AlbumInfo, TrackInfo
from beets.library import Item
AnyMatch = TypeVar("AnyMatch", TrackMatch, AlbumMatch)
Candidates = dict[Info.Identifier, AnyMatch]
# Global logger.
log = logging.getLogger("beets")
# Recommendation enumeration.
class Recommendation(IntEnum):
"""Indicates a qualitative suggestion to the user about what should
be done with a given match.
"""
none = 0
low = 1
medium = 2
strong = 3
# A structure for holding a set of possible matches to choose between. This
# consists of a list of possible candidates (i.e., AlbumInfo or TrackInfo
# objects) and a recommendation value.
class Proposal(NamedTuple):
candidates: Sequence[AlbumMatch | TrackMatch]
recommendation: Recommendation
# Primary matching functionality.
def assign_items(
items: Sequence[Item],
tracks: Sequence[TrackInfo],
) -> tuple[list[tuple[Item, TrackInfo]], list[Item], list[TrackInfo]]:
"""Given a list of Items and a list of TrackInfo objects, find the
best mapping between them. Returns a mapping from Items to TrackInfo
objects, a set of extra Items, and a set of extra TrackInfo
objects. These "extra" objects occur when there is an unequal number
of objects of the two types.
"""
log.debug("Computing track assignment...")
# Construct the cost matrix.
costs = [[float(track_distance(i, t)) for t in tracks] for i in items]
# Assign items to tracks
_, _, assigned_item_idxs = lap.lapjv(np.array(costs), extend_cost=True)
log.debug("...done.")
# Each entry in `assigned_item_idxs` corresponds to a track in the
# `tracks` list. Each value is either the index of the assigned item in
# the `items` list, or -1 if that track has no match.
mapping = {
items[iidx]: t
for iidx, t in zip(assigned_item_idxs, tracks)
if iidx != -1
}
extra_items = list(set(items) - mapping.keys())
extra_items.sort(key=lambda i: (i.disc, i.track, i.title))
extra_tracks = list(set(tracks) - set(mapping.values()))
extra_tracks.sort(key=lambda t: (t.index, t.title))
return list(mapping.items()), extra_items, extra_tracks
def match_by_id(album_id: str | None, consensus: bool) -> Iterable[AlbumInfo]:
"""Return album candidates for the given album id.
Make sure that the ID is present and that there is consensus on it among
the items being tagged.
"""
if not album_id:
log.debug("No album ID found.")
elif not consensus:
log.debug("No album ID consensus.")
else:
log.debug("Searching for discovered album ID: {}", album_id)
return metadata_plugins.albums_for_ids([album_id])
return ()
def _recommendation(
results: Sequence[AlbumMatch | TrackMatch],
) -> Recommendation:
"""Given a sorted list of AlbumMatch or TrackMatch objects, return a
recommendation based on the results' distances.
If the recommendation is higher than the configured maximum for
an applied penalty, the recommendation will be downgraded to the
configured maximum for that penalty.
"""
if not results:
# No candidates: no recommendation.
return Recommendation.none
# Basic distance thresholding.
min_dist = results[0].distance
if min_dist < config["match"]["strong_rec_thresh"].as_number():
# Strong recommendation level.
rec = Recommendation.strong
elif min_dist <= config["match"]["medium_rec_thresh"].as_number():
# Medium recommendation level.
rec = Recommendation.medium
elif len(results) == 1:
# Only a single candidate.
rec = Recommendation.low
elif (
results[1].distance - min_dist
>= config["match"]["rec_gap_thresh"].as_number()
):
# Gap between first two candidates is large.
rec = Recommendation.low
else:
# No conclusion. Return immediately. Can't be downgraded any further.
return Recommendation.none
# Downgrade to the max rec if it is lower than the current rec for an
# applied penalty.
keys = set(min_dist.keys())
if isinstance(results[0], hooks.AlbumMatch):
for track_dist in min_dist.tracks.values():
keys.update(list(track_dist.keys()))
max_rec_view = config["match"]["max_rec"]
for key in keys:
if key in list(max_rec_view.keys()):
max_rec = max_rec_view[key].as_choice(
{
"strong": Recommendation.strong,
"medium": Recommendation.medium,
"low": Recommendation.low,
"none": Recommendation.none,
}
)
rec = min(rec, max_rec)
return rec
def _sort_candidates(candidates: Iterable[AnyMatch]) -> Sequence[AnyMatch]:
"""Sort candidates by distance."""
return sorted(candidates, key=lambda match: match.distance)
def _add_candidate(
items: Sequence[Item],
results: Candidates[AlbumMatch],
info: AlbumInfo,
):
"""Given a candidate AlbumInfo object, attempt to add the candidate
to the output dictionary of AlbumMatch objects. This involves
checking the track count, ordering the items, checking for
duplicates, and calculating the distance.
"""
log.debug(
"Candidate: {0.artist} - {0.album} ({0.album_id}) from {0.data_source}",
info,
)
# Discard albums with zero tracks.
if not info.tracks:
log.debug("No tracks.")
return
# Prevent duplicates.
if info.album_id and info.identifier in results:
log.debug("Duplicate.")
return
# Discard matches without required tags.
required_tags: Sequence[str] = config["match"]["required"].as_str_seq()
for req_tag in required_tags:
if getattr(info, req_tag) is None:
log.debug("Ignored. Missing required tag: {}", req_tag)
return
# Find mapping between the items and the track info.
item_info_pairs, extra_items, extra_tracks = assign_items(
items, info.tracks
)
# Get the change distance.
dist = distance(items, info, item_info_pairs)
# Skip matches with ignored penalties.
penalties = [key for key, _ in dist]
ignored_tags: Sequence[str] = config["match"]["ignored"].as_str_seq()
for penalty in ignored_tags:
if penalty in penalties:
log.debug("Ignored. Penalty: {}", penalty)
return
log.debug("Success. Distance: {}", dist)
results[info.identifier] = hooks.AlbumMatch(
dist, info, dict(item_info_pairs), extra_items, extra_tracks
)
def tag_album(
items,
search_artist: str | None = None,
search_name: str | None = None,
search_ids: list[str] = [],
) -> tuple[str, str, Proposal]:
"""Return a tuple of the current artist name, the current album
name, and a `Proposal` containing `AlbumMatch` candidates.
The artist and album are the most common values of these fields
among `items`.
The `AlbumMatch` objects are generated by searching the metadata
backends. By default, the metadata of the items is used for the
search. This can be customized by setting the parameters.
`search_ids` is a list of metadata backend IDs: if specified,
it will restrict the candidates to those IDs, ignoring
`search_artist` and `search_name`. The `mapping` field of the
album has the matched `items` as keys.
The recommendation is calculated from the match quality of the
candidates.
"""
# Get current metadata.
likelies, consensus = get_most_common_tags(items)
cur_artist: str = likelies["artist"]
cur_album: str = likelies["album"]
log.debug("Tagging {} - {}", cur_artist, cur_album)
# The output result, keys are (data_source, album_id) pairs, values are
# AlbumMatch objects.
candidates: Candidates[AlbumMatch] = {}
# Search by explicit ID.
if search_ids:
log.debug("Searching for album IDs: {}", ", ".join(search_ids))
for _info in metadata_plugins.albums_for_ids(search_ids):
_add_candidate(items, candidates, _info)
# Use existing metadata or text search.
else:
# Try search based on current ID.
for info in match_by_id(
likelies["mb_albumid"], consensus["mb_albumid"]
):
_add_candidate(items, candidates, info)
rec = _recommendation(list(candidates.values()))
log.debug("Album ID match recommendation is {}", rec)
if candidates and not config["import"]["timid"]:
# If we have a very good MBID match, return immediately.
# Otherwise, this match will compete against metadata-based
# matches.
if rec == Recommendation.strong:
log.debug("ID match.")
return (
cur_artist,
cur_album,
Proposal(list(candidates.values()), rec),
)
# Search terms.
if not (search_artist and search_name):
# No explicit search terms -- use current metadata.
search_artist, search_name = cur_artist, cur_album
log.debug("Search terms: {} - {}", search_artist, search_name)
# Is this album likely to be a "various artist" release?
va_likely = (
(not consensus["artist"])
or (search_artist.lower() in VA_ARTISTS)
or any(item.comp for item in items)
)
log.debug("Album might be VA: {}", va_likely)
# Get the results from the data sources.
for matched_candidate in metadata_plugins.candidates(
items, search_artist, search_name, va_likely
):
_add_candidate(items, candidates, matched_candidate)
log.debug("Evaluating {} candidates.", len(candidates))
# Sort and get the recommendation.
candidates_sorted = _sort_candidates(candidates.values())
rec = _recommendation(candidates_sorted)
return cur_artist, cur_album, Proposal(candidates_sorted, rec)
def tag_item(
item,
search_artist: str | None = None,
search_name: str | None = None,
search_ids: list[str] | None = None,
) -> Proposal:
"""Find metadata for a single track. Return a `Proposal` consisting
of `TrackMatch` objects.
`search_artist` and `search_name` may be used to override the item
metadata in the search query. `search_ids` may be used for restricting the
search to a list of metadata backend IDs.
"""
# Holds candidates found so far: keys are (data_source, track_id) pairs,
# values are TrackMatch objects.
candidates: Candidates[TrackMatch] = {}
rec: Recommendation | None = None
# First, try matching by the external source ID.
trackids = search_ids or [t for t in [item.mb_trackid] if t]
if trackids:
log.debug("Searching for track IDs: {}", ", ".join(trackids))
for info in metadata_plugins.tracks_for_ids(trackids):
dist = track_distance(item, info, incl_artist=True)
candidates[info.identifier] = hooks.TrackMatch(dist, info, item)
# If this is a good match, then don't keep searching.
rec = _recommendation(_sort_candidates(candidates.values()))
if rec == Recommendation.strong and not config["import"]["timid"]:
log.debug("Track ID match.")
return Proposal(_sort_candidates(candidates.values()), rec)
# If we're searching by ID, don't proceed.
if search_ids:
if candidates:
assert rec is not None
return Proposal(_sort_candidates(candidates.values()), rec)
else:
return Proposal([], Recommendation.none)
# Search terms.
search_artist = search_artist or item.artist
search_name = search_name or item.title
log.debug("Item search terms: {} - {}", search_artist, search_name)
# Get and evaluate candidate metadata.
for track_info in metadata_plugins.item_candidates(
item, search_artist, search_name
):
dist = track_distance(item, track_info, incl_artist=True)
candidates[track_info.identifier] = hooks.TrackMatch(
dist, track_info, item
)
# Sort by distance and return with recommendation.
log.debug("Found {} candidates.", len(candidates))
candidates_sorted = _sort_candidates(candidates.values())
rec = _recommendation(candidates_sorted)
return Proposal(candidates_sorted, rec)
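
`assign_items` above solves a linear assignment problem: pair each item with a track so that the summed `track_distance` is minimal, delegating to `lap.lapjv`. The same idea can be sketched with a brute-force, stdlib-only solver for tiny inputs (the data and function name here are illustrative; real code should keep the LAPJV solver, which is polynomial rather than factorial):

```python
from itertools import permutations


def assign_min_cost(
    costs: list[list[float]],
) -> tuple[list[tuple[int, int]], list[int], float]:
    """Brute-force the minimum-cost item-to-track assignment.

    costs[i][j] is the distance between item i and track j; this sketch
    assumes at least as many tracks as items. Returns the (item, track)
    index pairs, the unmatched track indices (the "extra" tracks), and
    the total cost. O(n!) -- fine for a demo, not for real tagging.
    """
    n_items, n_tracks = len(costs), len(costs[0])
    best_total, best_order = float("inf"), ()
    for track_order in permutations(range(n_tracks), n_items):
        total = sum(costs[i][j] for i, j in enumerate(track_order))
        if total < best_total:
            best_total, best_order = total, track_order
    pairs = list(enumerate(best_order))
    extra_tracks = sorted(set(range(n_tracks)) - set(best_order))
    return pairs, extra_tracks, best_total
```

For a 2-item, 3-track cost matrix the solver picks the cheapest pairing and reports the leftover track, mirroring how `assign_items` produces a mapping plus `extra_tracks`.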


@@ -18,10 +18,12 @@ import logging
import os
import re
import shutil
import subprocess
import time
from collections import defaultdict
from collections.abc import Callable
from enum import Enum
from pathlib import Path
from tempfile import mkdtemp
from typing import TYPE_CHECKING, Any
@@ -1072,6 +1074,12 @@ class ImportTaskFactory:
If an item cannot be read, return `None` instead and log an
error.
"""
# If the file has no extension, try to detect its format and add one.
if os.path.isfile(path):
path = self.check_extension(path)
try:
return library.Item.from_path(path)
except library.ReadError as exc:
@@ -1085,6 +1093,103 @@
"error reading {}: {}", util.displayable_path(path), exc
)
def check_extension(self, path_bytes: util.PathBytes):
path = Path(os.fsdecode(path_bytes))
# If the path already has an extension, keep it as-is.
if path.suffix != "":
return path_bytes
# No extension detected; ask ffprobe for the container format.
formats = []
output = subprocess.run(
[
"ffprobe",
"-hide_banner",
"-loglevel",
"fatal",
"-show_format",
"--",
str(path),
],
capture_output=True,
)
out = output.stdout.decode("utf-8")
err = output.stderr.decode("utf-8")
if err != "":
log.error("ffprobe error: {}", err)
for line in out.split("\n"):
if line.startswith("format_name="):
formats = line.split("=")[1].split(",")
# Audio formats taken from https://en.wikipedia.org/wiki/Audio_file_format
wiki_formats = {
"3gp",
"aa",
"aac",
"aax",
"act",
"aiff",
"alac",
"amr",
"ape",
"au",
"awb",
"dss",
"dvf",
"flac",
"gsm",
"iklax",
"ivs",
"m4a",
"m4b",
"m4p",
"mmf",
"movpkg",
"mp1",
"mp2",
"mp3",
"mpc",
"msv",
"nmf",
"ogg",
"oga",
"mogg",
"opus",
"ra",
"rm",
"raw",
"rf64",
"sln",
"tta",
"voc",
"vox",
"wav",
"wma",
"wv",
"webm",
"8svx",
"cda",
}
detected_format = ""
# Take the first format reported by ffprobe that appears in this list.
for f in formats:
if f in wiki_formats:
detected_format = f
break
# If ffprobe can't identify a format, the file is probably not audio.
if detected_format == "":
return path_bytes
# Move the file to a name carrying the detected extension. If that name
# already exists, assume it is the same audio (e.g. 'asdf' vs 'asdf.mp3')
# and import the existing file instead.
new_path = path.with_suffix("." + detected_format)
if not new_path.exists():
util.move(bytes(path), bytes(new_path))
else:
log.info("Importing existing file {} instead of extensionless duplicate", new_path)
return new_path
MULTIDISC_MARKERS = (rb"dis[ck]", rb"cd")
MULTIDISC_PAT_FMT = rb"^(.*%s[\W_]*)\d"
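
The new `check_extension` above probes extensionless files with ffprobe and moves them to a name carrying the detected format. The path arithmetic can be sketched without ffprobe; here the detector is stubbed out, and `repair_extension` is an illustrative name, not a beets API:

```python
from pathlib import Path
from typing import Optional


def repair_extension(path: Path, detected_format: Optional[str]) -> Path:
    """Return the destination path for a possibly extensionless file.

    `detected_format` stands in for the ffprobe probe in the code above:
    None means no audio container was identified, in which case the path
    is returned unchanged, as is any path that already has a suffix.
    """
    if path.suffix != "" or not detected_format:
        return path
    return path.with_suffix(f".{detected_format}")
```

So `repair_extension(Path("music/asdf"), "mp3")` yields `music/asdf.mp3`, while files that already carry a suffix, or whose probe failed, pass through untouched.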


@@ -1,397 +1,397 @@
from __future__ import annotations
import os
import textwrap
from dataclasses import dataclass
from functools import cached_property
from typing import TYPE_CHECKING
from beets import config, ui
from beets.autotag import hooks
from beets.util import displayable_path
from beets.util.color import colorize
from beets.util.diff import colordiff
from beets.util.layout import Side, get_layout_lines, indent
from beets.util.units import human_seconds_short
if TYPE_CHECKING:
import confuse
from beets import autotag
from beets.library.models import Item
from beets.util.color import ColorName
VARIOUS_ARTISTS = "Various Artists"
@dataclass
class ChangeRepresentation:
"""Keeps track of all information needed to generate a (colored) text
representation of the changes that will be made if an album or singleton's
tags are changed according to `match`, which must be an AlbumMatch or
TrackMatch object, accordingly.
"""
cur_artist: str
cur_name: str
match: autotag.hooks.Match
@cached_property
def changed_prefix(self) -> str:
return colorize("changed", "\u2260")
@cached_property
def _indentation_config(self) -> confuse.Subview:
return config["ui"]["import"]["indentation"]
@cached_property
def indent(self) -> int:
return self._indentation_config["match_header"].get(int)
@cached_property
def indent_header(self) -> str:
return indent(self.indent)
@cached_property
def indent_detail(self) -> str:
return indent(self._indentation_config["match_details"].get(int))
@cached_property
def indent_tracklist(self) -> str:
return indent(self._indentation_config["match_tracklist"].get(int))
def print_layout(self, indent: str, left: Side, right: Side) -> None:
for line in get_layout_lines(indent, left, right, ui.term_width()):
ui.print_(line)
def show_match_header(self) -> None:
"""Print out a 'header' identifying the suggested match (album name,
artist name,...) and summarizing the changes that would be made should
the user accept the match.
"""
# Print newline at beginning of change block.
parts = [""]
# 'Match' line and similarity.
from __future__ import annotations
import os
import textwrap
from dataclasses import dataclass
from functools import cached_property
from typing import TYPE_CHECKING
from beets import config, ui
from beets.autotag import hooks
from beets.util import displayable_path
from beets.util.color import colorize
from beets.util.diff import colordiff
from beets.util.layout import Side, get_layout_lines, indent
from beets.util.units import human_seconds_short
if TYPE_CHECKING:
import confuse
from beets import autotag
from beets.library.models import Item
from beets.util.color import ColorName
VARIOUS_ARTISTS = "Various Artists"
@dataclass
class ChangeRepresentation:
"""Keeps track of all information needed to generate a (colored) text
representation of the changes that will be made if an album or singleton's
tags are changed according to `match`, which must be an AlbumMatch or
TrackMatch object, accordingly.
"""
cur_artist: str
cur_name: str
match: autotag.hooks.Match
@cached_property
def changed_prefix(self) -> str:
return colorize("changed", "\u2260")
@cached_property
def _indentation_config(self) -> confuse.Subview:
return config["ui"]["import"]["indentation"]
@cached_property
def indent(self) -> int:
return self._indentation_config["match_header"].get(int)
@cached_property
def indent_header(self) -> str:
return indent(self.indent)
@cached_property
def indent_detail(self) -> str:
return indent(self._indentation_config["match_details"].get(int))
@cached_property
def indent_tracklist(self) -> str:
return indent(self._indentation_config["match_tracklist"].get(int))
def print_layout(self, indent: str, left: Side, right: Side) -> None:
for line in get_layout_lines(indent, left, right, ui.term_width()):
ui.print_(line)
def show_match_header(self) -> None:
"""Print out a 'header' identifying the suggested match (album name,
artist name,...) and summarizing the changes that would be made should
the user accept the match.
"""
# Print newline at beginning of change block.
parts = [""]
# 'Match' line and similarity.
parts.append(f"Match ({self.match.distance.string}):")
parts.append(
ui.colorize(
self.match.distance.color,
f"{self.match.info.artist} - {self.match.info.name}",
)
)
if penalty_keys := self.match.distance.generic_penalty_keys:
parts.append(
ui.colorize("changed", f"\u2260 {', '.join(penalty_keys)}")
)
if disambig := self.match.disambig_string:
parts.append(disambig)
if data_url := self.match.info.data_url:
parts.append(ui.colorize("text_faint", f"{data_url}"))
ui.print_(textwrap.indent("\n".join(parts), self.indent_header))
def show_match_details(self) -> None:
"""Print out the details of the match, including changes in album name
and artist name.
"""
# Artist.
artist_l, artist_r = self.cur_artist or "", self.match.info.artist or ""
if artist_r == VARIOUS_ARTISTS:
# Hide artists for VA releases.
artist_l, artist_r = "", ""
if artist_l != artist_r:
artist_l, artist_r = colordiff(artist_l, artist_r)
left = Side(f"{self.changed_prefix} Artist: ", artist_l, "")
right = Side("", artist_r, "")
self.print_layout(self.indent_detail, left, right)
else:
ui.print_(f"{self.indent_detail}*", "Artist:", artist_r)
if self.cur_name:
type_ = self.match.type
name_l, name_r = self.cur_name or "", self.match.info.name
if self.cur_name != self.match.info.name != VARIOUS_ARTISTS:
name_l, name_r = colordiff(name_l, name_r)
left = Side(f"{self.changed_prefix} {type_}: ", name_l, "")
right = Side("", name_r, "")
self.print_layout(self.indent_detail, left, right)
else:
ui.print_(f"{self.indent_detail}*", f"{type_}:", name_r)
def make_medium_info_line(self, track_info: hooks.TrackInfo) -> str:
"""Construct a line with the current medium's info."""
track_media = track_info.get("media", "Media")
# Build output string.
if self.match.info.mediums > 1 and track_info.disctitle:
return (
f"* {track_media} {track_info.medium}: {track_info.disctitle}"
)
elif self.match.info.mediums > 1:
return f"* {track_media} {track_info.medium}"
elif track_info.disctitle:
return f"* {track_media}: {track_info.disctitle}"
else:
return ""
def format_index(self, track_info: hooks.TrackInfo | Item) -> str:
"""Return a string representing the track index of the given
TrackInfo or Item object.
"""
if isinstance(track_info, hooks.TrackInfo):
index = track_info.index
medium_index = track_info.medium_index
medium = track_info.medium
mediums = self.match.info.mediums
else:
index = medium_index = track_info.track
medium = track_info.disc
mediums = track_info.disctotal
if config["per_disc_numbering"]:
if mediums and mediums > 1:
return f"{medium}-{medium_index}"
else:
return str(medium_index if medium_index is not None else index)
else:
return str(index)
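The per-disc numbering logic in `format_index` can be restated on its own. This is a simplified standalone mirror of the method (not the library API): with `per_disc_numbering` enabled and more than one medium, the index is rendered as "disc-track"; otherwise it falls back to the per-disc or absolute index.

```python
def format_index(medium, medium_index, index, mediums, per_disc_numbering):
    # Standalone mirror of ChangeRepresentation.format_index.
    if per_disc_numbering:
        if mediums and mediums > 1:
            # Multiple discs: show "disc-track".
            return f"{medium}-{medium_index}"
        # Single disc: prefer the per-disc index, fall back to absolute.
        return str(medium_index if medium_index is not None else index)
    # Absolute numbering across the whole release.
    return str(index)

assert format_index(2, 3, 13, 2, True) == "2-3"
assert format_index(1, 3, 3, 1, True) == "3"
assert format_index(2, 3, 13, 2, False) == "13"
```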
def make_track_numbers(
self, item: Item, track_info: hooks.TrackInfo
) -> tuple[str, str, bool]:
"""Format colored track indices."""
cur_track = self.format_index(item)
new_track = self.format_index(track_info)
changed = False
# Choose color based on change.
highlight_color: ColorName
if cur_track != new_track:
changed = True
if item.track in (track_info.index, track_info.medium_index):
highlight_color = "text_highlight_minor"
else:
highlight_color = "text_highlight"
else:
highlight_color = "text_faint"
lhs_track = colorize(highlight_color, f"(#{cur_track})")
rhs_track = colorize(highlight_color, f"(#{new_track})")
return lhs_track, rhs_track, changed
@staticmethod
def make_track_titles(
item: Item, track_info: hooks.TrackInfo
) -> tuple[str, str, bool]:
"""Format colored track titles."""
new_title = track_info.name
if not item.title.strip():
# If there's no title, we use the filename. Don't colordiff.
cur_title = displayable_path(os.path.basename(item.path))
return cur_title, new_title, True
else:
# If there is a title, highlight differences.
cur_title = item.title.strip()
cur_col, new_col = colordiff(cur_title, new_title)
return cur_col, new_col, cur_title != new_title
@staticmethod
def make_track_lengths(
item: Item, track_info: hooks.TrackInfo
) -> tuple[str, str, bool]:
"""Format colored track lengths."""
changed = False
highlight_color: ColorName
if (
item.length
and track_info.length
and abs(item.length - track_info.length)
>= config["ui"]["length_diff_thresh"].as_number()
):
highlight_color = "text_highlight"
changed = True
else:
highlight_color = "text_highlight_minor"
# Treat missing (None) lengths as 0.
cur_length0 = item.length if item.length else 0
new_length0 = track_info.length if track_info.length else 0
# Format the lengths into display strings.
cur_length = f"({human_seconds_short(cur_length0)})"
new_length = f"({human_seconds_short(new_length0)})"
# colorize
lhs_length = colorize(highlight_color, cur_length)
rhs_length = colorize(highlight_color, new_length)
return lhs_length, rhs_length, changed
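The length comparison above only flags a change when both durations are known and differ by at least the configured `ui.length_diff_thresh`. A minimal standalone sketch of that decision (the 10-second threshold here is an assumed default, not read from beets' config):

```python
def length_changed(cur, new, thresh=10.0):
    # Mirror of the check in make_track_lengths: only flag a change when
    # both lengths are present and differ by at least the threshold.
    return bool(cur and new and abs(cur - new) >= thresh)

assert length_changed(180.0, 195.0)      # 15 s apart: highlighted
assert not length_changed(180.0, 185.0)  # within the threshold
assert not length_changed(None, 195.0)   # unknown length: never flagged
```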
def make_line(
self, item: Item, track_info: hooks.TrackInfo
) -> tuple[Side, Side]:
"""Extract changes from item -> new TrackInfo object, and colorize
appropriately. Returns (lhs, rhs) for column printing.
"""
# Track titles.
lhs_title, rhs_title, diff_title = self.make_track_titles(
item, track_info
)
# Track number change.
lhs_track, rhs_track, diff_track = self.make_track_numbers(
item, track_info
)
# Length change.
lhs_length, rhs_length, diff_length = self.make_track_lengths(
item, track_info
)
changed = diff_title or diff_track or diff_length
# Construct lhs and rhs dicts.
# We previously printed the penalties here; since that is no longer
# done, the 'info' dictionary is not needed.
# penalties = penalty_string(self.match.distance.tracks[track_info])
lhs = Side(
f"{self.changed_prefix if changed else '*'} {lhs_track} ",
lhs_title,
f" {lhs_length}",
)
if not changed:
# Only return the left side, as nothing changed.
return (lhs, Side("", "", ""))
return (lhs, Side(f"{rhs_track} ", rhs_title, f" {rhs_length}"))
def print_tracklist(self, lines: list[tuple[Side, Side]]) -> None:
"""Calculate column widths for tracks stored as (left, right) line
tuples, then print each line of the tracklist.
"""
if len(lines) == 0:
# If no lines provided, e.g. details not required, do nothing.
return
# Check how to fit content into terminal window
indent_width = len(self.indent_tracklist)
terminal_width = ui.term_width()
joiner_width = len("* -> ")
col_width = (terminal_width - indent_width - joiner_width) // 2
max_width_l = max(left.rendered_width for left, _ in lines)
max_width_r = max(right.rendered_width for _, right in lines)
if ((max_width_l <= col_width) and (max_width_r <= col_width)) or (
((max_width_l > col_width) or (max_width_r > col_width))
and ((max_width_l + max_width_r) <= col_width * 2)
):
# All content fits. Either both maximum widths are below column
# widths, or one of the columns is larger than allowed but the
# other is smaller than allowed.
# In this case we can afford to shrink the columns to fit their
# largest string
col_width_l = max_width_l
col_width_r = max_width_r
else:
# Not all content fits - stick with original half/half split
col_width_l = col_width
col_width_r = col_width
# Print out each line, using the calculated width from above.
for left, right in lines:
left = left._replace(width=col_width_l)
right = right._replace(width=col_width_r)
self.print_layout(self.indent_tracklist, left, right)
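The two-branch width condition in `print_tracklist` reduces to a single sum check: shrink both columns to their longest strings whenever the two maxima together fit in the available space, otherwise fall back to an even split. A condensed sketch of that decision (a simplification, not the method itself):

```python
def column_widths(max_l, max_r, col_width):
    # Equivalent to the condition above: if both sides fit, or one side
    # overflows but the combined width still fits into both columns,
    # shrink each column to its longest string.
    if max_l + max_r <= 2 * col_width:
        return max_l, max_r
    # Not everything fits: keep the even half/half split.
    return col_width, col_width

assert column_widths(20, 30, 40) == (20, 30)  # everything fits, shrink
assert column_widths(50, 20, 40) == (50, 20)  # one side overflows, sum fits
assert column_widths(50, 40, 40) == (40, 40)  # too wide, even split
```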
class AlbumChange(ChangeRepresentation):
match: autotag.hooks.AlbumMatch
def show_match_tracks(self) -> None:
"""Print out the tracks of the match, summarizing changes the match
suggests for them.
"""
pairs = sorted(
self.match.item_info_pairs, key=lambda pair: pair[1].index or 0
)
# Build up LHS and RHS for track difference display. The `lines` list
# contains `(left, right)` tuples.
lines: list[tuple[Side, Side]] = []
medium = disctitle = None
for item, track_info in pairs:
# If the track is the first on a new medium, show medium
# number and title.
if medium != track_info.medium or disctitle != track_info.disctitle:
# Create header for new medium
header = self.make_medium_info_line(track_info)
if header != "":
# Print tracks from previous medium
self.print_tracklist(lines)
lines = []
ui.print_(f"{self.indent_detail}{header}")
# Save new medium details for future comparison.
medium, disctitle = track_info.medium, track_info.disctitle
# Construct the line tuple for the track.
left, right = self.make_line(item, track_info)
if right.contents != "":
lines.append((left, right))
else:
if config["import"]["detail"]:
lines.append((left, right))
self.print_tracklist(lines)
# Missing and unmatched tracks.
if self.match.extra_tracks:
ui.print_(
"Missing tracks"
f" ({len(self.match.extra_tracks)}/{len(self.match.info.tracks)} -"
f" {len(self.match.extra_tracks) / len(self.match.info.tracks):.1%}):"
)
for track_info in self.match.extra_tracks:
line = f" ! {track_info.title} (#{self.format_index(track_info)})"
if track_info.length:
line += f" ({human_seconds_short(track_info.length)})"
ui.print_(colorize("text_warning", line))
if self.match.extra_items:
ui.print_(f"Unmatched tracks ({len(self.match.extra_items)}):")
for item in self.match.extra_items:
line = f" ! {item.title} (#{self.format_index(item)})"
if item.length:
line += f" ({human_seconds_short(item.length)})"
ui.print_(colorize("text_warning", line))
class TrackChange(ChangeRepresentation):
"""Track change representation, comparing item with match."""
match: autotag.hooks.TrackMatch
def show_change(
cur_artist: str, cur_album: str, match: hooks.AlbumMatch
) -> None:
"""Print out a representation of the changes that will be made if an
album's tags are changed according to `match`, which must be an AlbumMatch
object.
"""
change = AlbumChange(cur_artist, cur_album, match)
# Print the match header.
change.show_match_header()
# Print the match details.
change.show_match_details()
# Print the match tracks.
change.show_match_tracks()
def show_item_change(item: Item, match: hooks.TrackMatch) -> None:
"""Print out the change that would occur by tagging `item` with the
metadata from `match`, a TrackMatch object.
"""
change = TrackChange(item.artist, item.title, match)
# Print the match header.
change.show_match_header()
# Print the match details.
change.show_match_details()

from __future__ import annotations
import os
import re
from functools import cache
from typing import Literal
import confuse
from beets import config
# ANSI terminal colorization code heavily inspired by pygments:
# https://bitbucket.org/birkenfeld/pygments-main/src/default/pygments/console.py
# (pygments is by Tim Hatch, Armin Ronacher, et al.)
COLOR_ESCAPE = "\x1b"
LEGACY_COLORS = {
"black": ["black"],
"darkred": ["red"],
"darkgreen": ["green"],
"brown": ["yellow"],
"darkyellow": ["yellow"],
"darkblue": ["blue"],
"purple": ["magenta"],
"darkmagenta": ["magenta"],
"teal": ["cyan"],
"darkcyan": ["cyan"],
"lightgray": ["white"],
"darkgray": ["bold", "black"],
"red": ["bold", "red"],
"green": ["bold", "green"],
"yellow": ["bold", "yellow"],
"blue": ["bold", "blue"],
"fuchsia": ["bold", "magenta"],
"magenta": ["bold", "magenta"],
"turquoise": ["bold", "cyan"],
"cyan": ["bold", "cyan"],
"white": ["bold", "white"],
}
# All ANSI Colors.
CODE_BY_COLOR = {
# Styles.
"normal": 0,
"bold": 1,
"faint": 2,
"italic": 3,
"underline": 4,
"blink_slow": 5,
"blink_rapid": 6,
"inverse": 7,
"conceal": 8,
"crossed_out": 9,
# Text colors.
"black": 30,
"red": 31,
"green": 32,
"yellow": 33,
"blue": 34,
"magenta": 35,
"cyan": 36,
"white": 37,
"bright_black": 90,
"bright_red": 91,
"bright_green": 92,
"bright_yellow": 93,
"bright_blue": 94,
"bright_magenta": 95,
"bright_cyan": 96,
"bright_white": 97,
# Background colors.
"bg_black": 40,
"bg_red": 41,
"bg_green": 42,
"bg_yellow": 43,
"bg_blue": 44,
"bg_magenta": 45,
"bg_cyan": 46,
"bg_white": 47,
"bg_bright_black": 100,
"bg_bright_red": 101,
"bg_bright_green": 102,
"bg_bright_yellow": 103,
"bg_bright_blue": 104,
"bg_bright_magenta": 105,
"bg_bright_cyan": 106,
"bg_bright_white": 107,
}
RESET_COLOR = f"{COLOR_ESCAPE}[39;49;00m"
# Precompile common ANSI-escape regex patterns
ANSI_CODE_REGEX = re.compile(rf"({COLOR_ESCAPE}\[[;0-9]*m)")
ESC_TEXT_REGEX = re.compile(
rf"""(?P<pretext>[^{COLOR_ESCAPE}]*)
(?P<esc>(?:{ANSI_CODE_REGEX.pattern})+)
(?P<text>[^{COLOR_ESCAPE}]+)(?P<reset>{re.escape(RESET_COLOR)})
(?P<posttext>[^{COLOR_ESCAPE}]*)""",
re.VERBOSE,
)
ColorName = Literal[
"text_success",
"text_warning",
"text_error",
"text_highlight",
"text_highlight_minor",
"action_default",
"action",
# New Colors
"text_faint",
"import_path",
"import_path_items",
"action_description",
"changed",
"text_diff_added",
"text_diff_removed",
]
@cache
def get_color_config() -> dict[ColorName, str]:
"""Parse and validate color configuration, converting names to ANSI codes.
Processes the UI color configuration, handling both new list format and
legacy single-color format. Validates all color names against known codes
and raises an error for any invalid entries.
"""
template_dict: dict[ColorName, confuse.OneOf[str | list[str]]] = {
n: confuse.OneOf(
[
confuse.Choice(sorted(LEGACY_COLORS)),
confuse.Sequence(confuse.Choice(sorted(CODE_BY_COLOR))),
]
)
for n in ColorName.__args__ # type: ignore[attr-defined]
}
template = confuse.MappingTemplate(template_dict)
colors_by_color_name = {
k: (v if isinstance(v, list) else LEGACY_COLORS.get(v, [v]))
for k, v in config["ui"]["colors"].get(template).items()
}
return {
n: ";".join(str(CODE_BY_COLOR[c]) for c in colors)
for n, colors in colors_by_color_name.items()
}
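To illustrate the mapping above: the legacy name "turquoise" expands to `["bold", "cyan"]`, which `get_color_config` joins into a single SGR parameter string. A small sketch using an assumed two-entry subset of `CODE_BY_COLOR` (the real table lives in this module):

```python
# Assumed subset of CODE_BY_COLOR for illustration.
CODE_BY_COLOR = {"bold": 1, "cyan": 36}

# Join the numeric codes the same way get_color_config does.
codes = ";".join(str(CODE_BY_COLOR[c]) for c in ["bold", "cyan"])
assert codes == "1;36"

# The escape sequence _colorize() would then wrap around some text:
assert f"\x1b[{codes}mtitle\x1b[39;49;00m" == "\x1b[1;36mtitle\x1b[39;49;00m"
```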
def _colorize(color_name: ColorName, text: str) -> str:
"""Apply ANSI color formatting to text based on configuration settings."""
color_code = get_color_config()[color_name]
return f"{COLOR_ESCAPE}[{color_code}m{text}{RESET_COLOR}"
def colorize(color_name: ColorName, text: str) -> str:
"""Colorize text when color output is enabled."""
if config["ui"]["color"] and "NO_COLOR" not in os.environ:
return _colorize(color_name, text)
return text
def uncolorize(colored_text: str) -> str:
"""Remove colors from a string."""
# Define a regular expression to match ANSI codes.
# See: http://stackoverflow.com/a/2187024/1382707
# Explanation of regular expression:
# \x1b - matches ESC character
# \[ - matches opening square bracket
# [;\d]* - matches a sequence of zero or more digits or
# semicolons
# [A-Za-z] - matches a letter
return ANSI_CODE_REGEX.sub("", colored_text)
def color_split(colored_text: str, index: int) -> tuple[str, str]:
length = 0
pre_split = ""
post_split = ""
found_color_code = None
found_split = False
for part in ANSI_CODE_REGEX.split(colored_text):
# Count how many real letters we have passed
length += color_len(part)
if found_split:
post_split += part
else:
if ANSI_CODE_REGEX.match(part):
# This is a color code
if part == RESET_COLOR:
found_color_code = None
else:
found_color_code = part
pre_split += part
else:
if index < length:
# Found part with our split in.
split_index = index - (length - color_len(part))
found_split = True
if found_color_code:
pre_split += f"{part[:split_index]}{RESET_COLOR}"
post_split += f"{found_color_code}{part[split_index:]}"
else:
pre_split += part[:split_index]
post_split += part[split_index:]
else:
# Not found, add this part to the pre split
pre_split += part
return pre_split, post_split
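The split point in `color_split` counts visible characters only, and any color that was active at the split is re-opened on the right-hand side. A self-contained example (the function and its regex are restated here so the sketch runs standalone):

```python
import re

COLOR_ESCAPE = "\x1b"
RESET_COLOR = f"{COLOR_ESCAPE}[39;49;00m"
ANSI_CODE_REGEX = re.compile(rf"({COLOR_ESCAPE}\[[;0-9]*m)")

def uncolorize(text):
    return ANSI_CODE_REGEX.sub("", text)

def color_split(colored_text, index):
    # Restated from above so this example is self-contained.
    length = 0
    pre_split = post_split = ""
    found_color_code = None
    found_split = False
    for part in ANSI_CODE_REGEX.split(colored_text):
        # Count how many visible letters we have passed.
        length += len(uncolorize(part))
        if found_split:
            post_split += part
        elif ANSI_CODE_REGEX.match(part):
            # A color code: track the currently active color.
            found_color_code = None if part == RESET_COLOR else part
            pre_split += part
        elif index < length:
            # This part contains the split point.
            split_index = index - (length - len(uncolorize(part)))
            found_split = True
            if found_color_code:
                # Close the color on the left, re-open it on the right.
                pre_split += f"{part[:split_index]}{RESET_COLOR}"
                post_split += f"{found_color_code}{part[split_index:]}"
            else:
                pre_split += part[:split_index]
                post_split += part[split_index:]
        else:
            pre_split += part
    return pre_split, post_split

pre, post = color_split(f"{COLOR_ESCAPE}[31mabcdef{RESET_COLOR}", 3)
assert uncolorize(pre) == "abc"
assert uncolorize(post) == "def"
# The red color is re-opened at the start of the right half.
assert post.startswith(f"{COLOR_ESCAPE}[31m")
```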
def color_len(colored_text: str) -> int:
"""Measure the length of a string while excluding ANSI codes from the
measurement. The standard `len(my_string)` counts ANSI codes toward the
string length, which is counterproductive when laying out a terminal
interface.
"""
# Return the length of the uncolored string.
return len(uncolorize(colored_text))
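The difference between raw and visible length is easy to see on a colored "Artist - Title" string: the SGR sequences inflate `len()` but are invisible on screen. A self-contained sketch of `color_len` and its invariant:

```python
import re

ANSI_CODE_REGEX = re.compile(r"(\x1b\[[;0-9]*m)")

def color_len(colored_text):
    # Visible length only: strip every ANSI SGR sequence before measuring.
    return len(ANSI_CODE_REGEX.sub("", colored_text))

colored = "\x1b[1;31mArtist\x1b[39;49;00m - \x1b[36mTitle\x1b[39;49;00m"
assert len(colored) == 48        # raw length includes the escape codes
assert color_len(colored) == 14  # visible text is "Artist - Title"
```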

# This file is part of beets.
# Copyright 2019, Rahul Ahuja.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Update library's tags using Beatport."""
from beets import library, ui, util
from beets.autotag.distance import Distance
from beets.autotag.hooks import AlbumMatch, TrackMatch
from beets.plugins import BeetsPlugin, apply_item_changes
from beets.util.deprecation import deprecate_for_user
from .beatport import BeatportPlugin
class BPSyncPlugin(BeetsPlugin):
def __init__(self):
super().__init__()
deprecate_for_user(self._log, "The 'bpsync' plugin")
self.beatport_plugin = BeatportPlugin()
self.beatport_plugin.setup()
def commands(self):
cmd = ui.Subcommand("bpsync", help="update metadata from Beatport")
cmd.parser.add_option(
"-p",
"--pretend",
action="store_true",
help="show all changes but do nothing",
)
cmd.parser.add_option(
"-m",
"--move",
action="store_true",
dest="move",
help="move files in the library directory",
)
cmd.parser.add_option(
"-M",
"--nomove",
action="store_false",
dest="move",
help="don't move files in library",
)
cmd.parser.add_option(
"-W",
"--nowrite",
action="store_false",
default=None,
dest="write",
help="don't write updated metadata to files",
)
cmd.parser.add_format_option()
cmd.func = self.func
return [cmd]
def func(self, lib, opts, args):
"""Command handler for the bpsync function."""
move = ui.should_move(opts.move)
pretend = opts.pretend
write = ui.should_write(opts.write)
self.singletons(lib, args, move, pretend, write)
self.albums(lib, args, move, pretend, write)
def singletons(self, lib, query, move, pretend, write):
"""Retrieve and apply info from the autotagger for items matched by
query.
"""
for item in lib.items([*query, "singleton:true"]):
if not item.mb_trackid:
self._log.info(
"Skipping singleton with no mb_trackid: {}", item
)
continue
if not self.is_beatport_track(item):
self._log.info(
"Skipping non-{.beatport_plugin.data_source} singleton: {}",
self,
item,
)
continue
# Apply.
trackinfo = self.beatport_plugin.track_for_id(item.mb_trackid)
with lib.transaction():
TrackMatch(Distance(), trackinfo, item).apply_metadata()
apply_item_changes(lib, item, move, pretend, write)
@staticmethod
def is_beatport_track(item):
return (
item.get("data_source") == BeatportPlugin.data_source
and item.mb_trackid.isnumeric()
)
def get_album_tracks(self, album):
if not album.mb_albumid:
self._log.info("Skipping album with no mb_albumid: {}", album)
return False
if not album.mb_albumid.isnumeric():
self._log.info(
"Skipping album with invalid {.beatport_plugin.data_source} ID: {}",
self,
album,
)
return False
items = list(album.items())
if album.get("data_source") == self.beatport_plugin.data_source:
return items
if not all(self.is_beatport_track(item) for item in items):
self._log.info(
"Skipping non-{.beatport_plugin.data_source} release: {}",
self,
album,
)
return False
return items
def albums(self, lib, query, move, pretend, write):
"""Retrieve and apply info from the autotagger for albums matched by
query and their items.
"""
# Process matching albums.
for album in lib.albums(query):
# Do we have a valid Beatport album?
items = self.get_album_tracks(album)
if not items:
continue
# Get the Beatport album information.
albuminfo = self.beatport_plugin.album_for_id(album.mb_albumid)
if not albuminfo:
self._log.info(
"Release ID {0.mb_albumid} not found for album {0}", album
)
continue
beatport_trackid_to_trackinfo = {
track.track_id: track for track in albuminfo.tracks
}
library_trackid_to_item = {
int(item.mb_trackid): item for item in items
}
item_info_pairs = [
(item, beatport_trackid_to_trackinfo[track_id])
for track_id, item in library_trackid_to_item.items()
]
self._log.info("applying changes to {}", album)
with lib.transaction():
AlbumMatch(
Distance(), albuminfo, dict(item_info_pairs)
).apply_metadata()
changed = False
# Find any changed item to apply Beatport changes to album.
any_changed_item = items[0]
for item in items:
item_changed = ui.show_model_changes(item)
changed |= item_changed
if item_changed:
any_changed_item = item
apply_item_changes(lib, item, move, pretend, write)
if pretend or not changed:
continue
# Update album structure to reflect an item in it.
for key in library.Album.item_keys:
album[key] = any_changed_item[key]
album.store()
# Move album art (and any inconsistent items).
if move and lib.directory in util.ancestry(items[0].path):
self._log.debug("moving album {}", album)
album.move()

# This file is part of beets.
# Copyright 2016, Jakob Schnitzer.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Synchronise library metadata with metadata source backends."""
from collections import defaultdict
from beets import library, metadata_plugins, ui, util
from beets.autotag.distance import Distance
from beets.autotag.hooks import AlbumMatch, TrackMatch
from beets.plugins import BeetsPlugin, apply_item_changes
class MBSyncPlugin(BeetsPlugin):
def __init__(self):
super().__init__()
def commands(self):
cmd = ui.Subcommand("mbsync", help="update metadata from musicbrainz")
cmd.parser.add_option(
"-p",
"--pretend",
action="store_true",
help="show all changes but do nothing",
)
cmd.parser.add_option(
"-m",
"--move",
action="store_true",
dest="move",
help="move files in the library directory",
)
cmd.parser.add_option(
"-M",
"--nomove",
action="store_false",
dest="move",
help="don't move files in library",
)
cmd.parser.add_option(
"-W",
"--nowrite",
action="store_false",
default=None,
dest="write",
help="don't write updated metadata to files",
)
cmd.parser.add_format_option()
cmd.func = self.func
return [cmd]
def func(self, lib, opts, args):
"""Command handler for the mbsync function."""
move = ui.should_move(opts.move)
pretend = opts.pretend
write = ui.should_write(opts.write)
self.singletons(lib, args, move, pretend, write)
self.albums(lib, args, move, pretend, write)
def singletons(self, lib, query, move, pretend, write):
"""Retrieve and apply info from the autotagger for items matched by
query.
"""
for item in lib.items([*query, "singleton:true"]):
if not (track_id := item.mb_trackid):
self._log.info(
"Skipping singleton with no mb_trackid: {}", item
)
continue
if not (
track_info := metadata_plugins.track_for_id(
track_id, item.get("data_source", "MusicBrainz")
)
):
self._log.info(
"Recording ID not found: {} for track {}", track_id, item
)
continue
# Apply.
with lib.transaction():
TrackMatch(Distance(), track_info, item).apply_metadata()
apply_item_changes(lib, item, move, pretend, write)
def albums(self, lib, query, move, pretend, write):
"""Retrieve and apply info from the autotagger for albums matched by
query and their items.
"""
# Process matching albums.
for album in lib.albums(query):
if not (album_id := album.mb_albumid):
self._log.info("Skipping album with no mb_albumid: {}", album)
continue
data_source = album.get("data_source") or album.items()[0].get(
"data_source", "MusicBrainz"
)
if not (
album_info := metadata_plugins.album_for_id(
album_id, data_source
)
):
self._log.info(
"Release ID {} not found for album {}", album_id, album
)
continue
# Map release track and recording MBIDs to their information.
# Recordings can appear multiple times on a release, so each MBID
# maps to a list of TrackInfo objects.
releasetrack_index = {}
track_index = defaultdict(list)
for track_info in album_info.tracks:
releasetrack_index[track_info.release_track_id] = track_info
track_index[track_info.track_id].append(track_info)
# Construct a track mapping according to MBIDs (release track MBIDs
# first, if available, and recording MBIDs otherwise). This should
# work for albums that have missing or extra tracks.
item_info_pairs = []
items = list(album.items())
for item in items:
if (
item.mb_releasetrackid
and item.mb_releasetrackid in releasetrack_index
):
item_info_pairs.append(
(item, releasetrack_index[item.mb_releasetrackid])
)
else:
candidates = track_index[item.mb_trackid]
if len(candidates) == 1:
item_info_pairs.append((item, candidates[0]))
else:
# If there are multiple copies of a recording, they are
# disambiguated using their disc and track number.
for c in candidates:
if (
c.medium_index == item.track
and c.medium == item.disc
):
item_info_pairs.append((item, c))
break
# Apply.
self._log.debug("applying changes to {}", album)
with lib.transaction():
AlbumMatch(
Distance(), album_info, dict(item_info_pairs)
).apply_metadata()
changed = False
# Find any changed item to apply changes to album.
any_changed_item = items[0]
for item in items:
item_changed = ui.show_model_changes(item)
changed |= item_changed
if item_changed:
any_changed_item = item
apply_item_changes(lib, item, move, pretend, write)
if not changed:
# No change to any item.
continue
if not pretend:
# Update album structure to reflect an item in it.
for key in library.Album.item_keys:
album[key] = any_changed_item[key]
album.store()
# Move album art (and any inconsistent items).
if move and lib.directory in util.ancestry(items[0].path):
self._log.debug("moving album {}", album)
album.move()

Installation
============
Beets requires `Python 3.10 or later`_. You can install it using pipx_ or pip_.
.. _python 3.10 or later: https://www.python.org/downloads/
Using ``pipx`` or ``pip``
-------------------------
We recommend installing with pipx_ as it isolates beets and its dependencies
from your system Python and other Python packages. This helps avoid dependency
conflicts and keeps your system clean.
.. <!-- start-quick-install -->
.. tab-set::
.. tab-item:: pipx
.. code-block:: console
pipx install beets
.. tab-item:: pip
.. code-block:: console
pip install beets
.. tab-item:: pip (user install)
.. code-block:: console
pip install --user beets
.. <!-- end-quick-install -->
If you don't have pipx_ installed, you can follow the instructions on the `pipx
installation page`_ to get it set up.
.. _pip: https://pip.pypa.io/en/stable/
.. _pipx: https://pipx.pypa.io/stable
.. _pipx installation page: https://pipx.pypa.io/stable/how-to/install-pipx/
Managing Plugins with ``pipx``
------------------------------
When using pipx_, you can install beets with built-in plugin dependencies using
extras, inject third-party packages, and upgrade everything cleanly.
Install beets with extras for built-in plugins:
.. code-block:: console
pipx install "beets[lyrics,lastgenre]"
If you already have beets installed, reinstall with a new set of extras:
.. code-block:: console
pipx install --force "beets[lyrics,lastgenre]"
Inject additional packages into the beets environment (useful for third-party
plugins):
.. code-block:: console
pipx inject beets <package-name>
To upgrade beets and all injected packages:
.. code-block:: console
pipx upgrade beets
Installation FAQ
----------------
Windows Installation
~~~~~~~~~~~~~~~~~~~~
**Q: What's the process for installing on Windows?**
Installing beets on Windows can be tricky. Following these steps might help you
get it right:
1. `Install Python`_ (if you check "Add Python to PATH" during setup, skip to
   step 3)
2. Ensure Python is in your ``PATH`` (add if needed):
- Settings → System → About → Advanced system settings → Environment
Variables
- Edit "PATH" and add: `;C:\Python39;C:\Python39\Scripts`
- *Guide: [Adding Python to
PATH](https://realpython.com/add-python-to-path/)*
3. Now install beets by running: ``pip install beets``
4. You're all set! Type ``beet version`` in a new command prompt to verify the
installation.
**Bonus: Windows Context Menu Integration**
Windows users may also want to install a context menu item for importing files
into beets. Download the beets.reg_ file and open it in a text file to make sure
the paths to Python match your system. Then double-click the file add the
necessary keys to your registry. You can then right-click a directory and choose
"Import with beets".
.. _beets.reg: https://github.com/beetbox/beets/blob/master/extra/beets.reg
.. _install pip: https://pip.pypa.io/en/stable/installing/
.. _install python: https://www.python.org/downloads/
ARM Installation
~~~~~~~~~~~~~~~~
**Q: Can I run beets on a Raspberry Pi or other ARM device?**
Yes, but with some considerations: Beets on ARM devices is not recommended for
Linux novices. If you are comfortable with troubleshooting tools like ``pip``,
``make``, and binary dependencies (e.g. ``ffmpeg`` and ``ImageMagick``), you
will be fine. We have `notes for ARM`_ and an `older ARM reference`_. Beets is
generally developed on x86-64 based devices, and most plugins target that
platform as well.
.. _notes for arm: https://github.com/beetbox/beets/discussions/4910
.. _older arm reference: https://discourse.beets.io/t/diary-of-beets-on-arm-odroid-hc4-armbian/1993
Package Manager Installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Q: Can I install beets using my operating system's built-in package manager?**
We generally don't recommend this route. OS package managers tend to ship
outdated versions of beets, and installing third-party plugins into a
system-managed environment ranges from awkward to impossible. You'll have a much
better time with pipx_ or pip_ as described above.
That said, if you know what you're doing and prefer your system package manager,
here are the options available:
- **Debian/Ubuntu** (`Debian <debian details_>`_, `Ubuntu <ubuntu details_>`_):
``apt-get install beets``
- **Arch Linux** (`extra <arch btw_>`_, `AUR dev <aur_>`_): ``pacman -S beets``
- **Alpine Linux** (`package <alpine package_>`_): ``apk add beets``
- **Void Linux** (`package <void package_>`_): ``xbps-install -S beets``
- **Gentoo Linux**: ``emerge beets`` (USE flags available for optional plugin
deps)
- **FreeBSD** (`port <freebsd_>`_): ``audio/beets``
- **OpenBSD** (`port <openbsd_>`_): ``pkg_add beets``
- **Fedora** (`package <dnf package_>`_): ``dnf install beets beets-plugins
beets-doc``
- **Solus**: ``eopkg install beets``
- **NixOS** (`package <nixos_>`_): ``nix-env -i beets``
- **MacPorts**: ``port install beets`` or ``port install beets-full`` (includes
third-party plugins)
.. _alpine package: https://pkgs.alpinelinux.org/package/edge/community/x86_64/beets
.. _arch btw: https://archlinux.org/packages/extra/any/beets/
.. _aur: https://aur.archlinux.org/packages/beets-git/
.. _debian details: https://tracker.debian.org/pkg/beets
.. _dnf package: https://packages.fedoraproject.org/pkgs/beets/
.. _freebsd: https://www.freshports.org/audio/beets/
.. _nixos: https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/python-modules/beets
.. _openbsd: https://openports.pl/path/audio/beets
.. _ubuntu details: https://launchpad.net/ubuntu/+source/beets
.. _void package: https://github.com/void-linux/void-packages/tree/master/srcpkgs/beets

Lyrics Plugin
=============
The ``lyrics`` plugin fetches and stores song lyrics from databases on the Web.
Namely, the current version of the plugin uses Genius.com_, Tekstowo.pl_,
LRCLIB_ and, optionally, the Google Custom Search API.
.. _genius.com: https://genius.com/
.. _lrclib: https://lrclib.net/
.. _tekstowo.pl: https://www.tekstowo.pl/
Install
-------
First, enable the ``lyrics`` plugin in your configuration (see
:ref:`using-plugins`). Then install ``beets`` with the ``lyrics`` extra:
.. code-block:: bash
pip install "beets[lyrics]"
Fetch Lyrics During Import
--------------------------
When importing new files, beets will now fetch lyrics for files that don't
already have them. The lyrics will be stored in the beets database. The plugin
also sets a few useful flexible attributes:
- ``lyrics_backend``: name of the backend that provided the lyrics
- ``lyrics_url``: URL of the page where the lyrics were found
- ``lyrics_language``: original language of the lyrics
- ``lyrics_translation_language``: language of the lyrics translation (if
translation is enabled)
If the ``import.write`` config option is on, then the lyrics will also be
written to the files' tags.
Configuration
-------------
To configure the plugin, make a ``lyrics:`` section in your configuration file.
Default configuration:
.. code-block:: yaml
lyrics:
auto: yes
auto_ignore: null
translate:
api_key:
from_languages: []
to_language:
dist_thresh: 0.11
fallback: null
force: no
google_API_key: null
google_engine_ID: 009217259823014548361:lndtuqkycfu
print: no
sources: [lrclib, google, genius]
synced: no
The available options are:
- **auto**: Fetch lyrics automatically during import.
- **auto_ignore**: A beets query string of items to skip when fetching lyrics
during auto import. For example, to skip tracks from Bandcamp or with a Techno
genre:
.. code-block:: yaml
lyrics:
auto_ignore: |
data_source:bandcamp
,
genres:techno
Default: ``null`` (nothing is ignored). See :doc:`/reference/query` for the
query syntax.
- **translate**:
- **api_key**: API key to access your Azure Translator resource (see
  :ref:`lyrics-translation`).
- **from_languages**: By default, all lyrics in a language other than
  ``to_language`` are translated. Use a list of language codes to restrict
  them.
- **to_language**: Language code to translate lyrics to.
- **dist_thresh**: The maximum distance between the artist and title combination
of the music file and lyrics candidate to consider them a match. Lower values
will make the plugin more strict, higher values will make it more lenient.
This does not apply to the ``lrclib`` backend as it matches durations.
- **fallback**: By default, the file will be left unchanged when no lyrics are
found. Use the empty string ``''`` to reset the lyrics in such a case.
- **force**: By default, beets won't fetch lyrics if the files already have
ones. To instead always fetch lyrics, set the ``force`` option to ``yes``.
- **google_API_key**: Your Google API key (to enable the Google Custom Search
backend).
- **google_engine_ID**: The custom search engine to use. Default: The `beets
custom search engine`_, which gathers an updated list of sources known to be
scrapeable.
- **print**: Print lyrics to the console.
- **sources**: List of sources to search for lyrics. An asterisk ``*`` expands
to all available sources. The ``google`` source will be automatically
  deactivated if no ``google_API_key`` is set up. By default, ``musixmatch`` and
  ``tekstowo`` are excluded because they block the beets User-Agent.
- **synced**: Prefer synced lyrics over plain lyrics if a source offers them.
Currently ``lrclib`` is the only source that provides them. Using this option,
existing synced lyrics are not replaced by newly fetched plain lyrics (even
when ``force`` is enabled). To allow that replacement, disable ``synced``.
.. _beets custom search engine: https://cse.google.com/cse?cx=009217259823014548361:lndtuqkycfu
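To make the ``dist_thresh`` idea concrete, here is a minimal, illustrative
sketch of threshold-based fuzzy matching using Python's standard ``difflib``.
This is an assumption-laden simplification for explanatory purposes, not the
plugin's actual metric; the real implementation differs.

```python
from difflib import SequenceMatcher


def distance(a: str, b: str) -> float:
    """Normalized string distance in [0, 1]; 0.0 means identical."""
    return 1 - SequenceMatcher(None, a.lower(), b.lower()).ratio()


def is_match(
    item_artist: str,
    item_title: str,
    cand_artist: str,
    cand_title: str,
    dist_thresh: float = 0.11,
) -> bool:
    """Accept a lyrics candidate when its artist/title combination is
    within ``dist_thresh`` of the music file's artist/title combination."""
    query = f"{item_artist} {item_title}"
    candidate = f"{cand_artist} {cand_title}"
    return distance(query, candidate) <= dist_thresh
```

Lowering ``dist_thresh`` rejects more near-miss candidates; raising it accepts
looser matches.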
Fetching Lyrics Manually
------------------------
The ``lyrics`` command provided by this plugin fetches lyrics for items that
match a query (see :doc:`/reference/query`). For example, ``beet lyrics magnetic
fields absolutely cuckoo`` will get the lyrics for the appropriate Magnetic
Fields song, ``beet lyrics magnetic fields`` will get lyrics for all my tracks
by that band, and ``beet lyrics`` will get lyrics for my entire library. The
lyrics will be added to the beets database and, if ``import.write`` is on,
embedded into files' metadata.
The ``-p, --print`` option to the ``lyrics`` command makes it print lyrics out
to the console so you can view the fetched (or previously-stored) lyrics.
The ``-f, --force`` option forces the command to fetch lyrics, even for tracks
that already have lyrics.
Conversely, the ``-l, --local`` option restricts the operation to lyrics that
are already stored locally, which shows lyrics faster without using the network
at all.
Rendering Lyrics into Other Formats
-----------------------------------
The ``-r directory, --write-rest directory`` option renders all lyrics as
reStructuredText_ (ReST) documents in ``directory``. That directory, in turn,
can be parsed by tools like Sphinx_ to generate HTML, ePUB, or PDF documents.
Minimal ``conf.py`` and ``index.rst`` files are created the first time the
command is run. They are not overwritten on subsequent runs, so you can safely
modify these files to customize the output.
Sphinx supports various builders_; here are a few suggestions:
.. admonition:: Build an HTML version
::
sphinx-build -b html <dir> <dir>/html
.. admonition:: Build an ePUB3 formatted file, usable on ebook readers
::
sphinx-build -b epub3 <dir> <dir>/epub
.. admonition:: Build a PDF file, which incidentally also builds a LaTeX file
::
sphinx-build -b latex <dir> <dir>/latex && make -C <dir>/latex all-pdf
.. _builders: https://www.sphinx-doc.org/en/master/usage/builders/index.html
.. _restructuredtext: https://sourceforge.net/projects/docutils/
.. _sphinx: https://www.sphinx-doc.org/en/master/
Activate Google Custom Search
-----------------------------
You need to `register for a Google API key
<https://console.developers.google.com/>`__. Set the ``google_API_key``
configuration option to your key.
Then add ``google`` to the list of sources in your configuration (or use the
default list, which includes it as long as you have an API key). If you use the
default ``google_engine_ID``, we recommend limiting the sources to ``google``,
since the other sources are already included in the Google results.
Optionally, you can `define a custom search engine`_. Get your search engine's
token and use it for your ``google_engine_ID`` configuration option. By default,
beets uses a list of sources known to be scrapeable.
Note that the Google custom search API is limited to 100 queries per day. After
that, the lyrics plugin will fall back on other declared data sources.
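Putting the recommendation above together, a minimal configuration for the
Google-only setup might look like this (``YOUR_API_KEY`` is a placeholder for
your own key):

.. code-block:: yaml

    lyrics:
        google_API_key: YOUR_API_KEY
        sources: [google]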
.. _define a custom search engine: https://programmablesearchengine.google.com/about/
.. _lyrics-translation:
Activate On-the-Fly Translation
-------------------------------
We use Azure to optionally translate your lyrics. To set up the integration,
follow these steps:
1. `Create a Translator resource`_ on Azure.
   Make sure the region of the Translator resource is set to Global;
   otherwise you will get 401 Unauthorized errors. The region of the
   resource group does not matter.
2. `Obtain its API key`_.
3. Add the API key to your configuration as ``translate.api_key``.
4. Configure your target language using the ``translate.to_language`` option.
For example, with the following configuration:
.. code-block:: yaml
lyrics:
translate:
api_key: YOUR_TRANSLATOR_API_KEY
to_language: de
You should expect lyrics like this:
::
Original verse / Ursprünglicher Vers
Some other verse / Ein anderer Vers
.. _create a translator resource: https://learn.microsoft.com/en-us/azure/ai-services/translator/create-translator-resource
.. _obtain its api key: https://learn.microsoft.com/en-us/python/api/overview/azure/ai-translation-text-readme?view=azure-python&preserve-view=true#get-an-api-key
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Tests for autotagging functionality."""
import operator
import pytest
from beets.autotag.distance import Distance
from beets.autotag.hooks import (
AlbumInfo,
AlbumMatch,
Info,
TrackInfo,
TrackMatch,
correct_list_fields,
)
from beets.library import Item
from beets.test.helper import BeetsTestCase
@pytest.mark.parametrize(
"genre, expected_genres",
[
("Rock", ("Rock",)),
("Rock; Alternative", ("Rock", "Alternative")),
],
)
def test_genre_deprecation(genre, expected_genres):
with pytest.warns(
DeprecationWarning, match="The 'genre' parameter is deprecated"
):
assert tuple(Info(genre=genre).genres) == expected_genres
class ApplyTest(BeetsTestCase):
def _apply(
self,
per_disc_numbering=False,
artist_credit=False,
original_date=False,
from_scratch=False,
):
info = self.info
mapping = dict(zip(self.items, info.tracks))
self.config["per_disc_numbering"] = per_disc_numbering
self.config["artist_credit"] = artist_credit
self.config["original_date"] = original_date
self.config["import"]["from_scratch"] = from_scratch
amatch = AlbumMatch(Distance(), self.info, mapping)
amatch.apply_metadata()
def setUp(self):
super().setUp()
self.items = [Item(), Item()]
self.info = AlbumInfo(
tracks=[
TrackInfo(
title="title",
track_id="dfa939ec-118c-4d0f-84a0-60f3d1e6522c",
medium=1,
medium_index=1,
medium_total=1,
index=1,
artist="trackArtist",
artist_credit="trackArtistCredit",
artists_credit=["trackArtistCredit"],
artist_sort="trackArtistSort",
artists_sort=["trackArtistSort"],
),
TrackInfo(
title="title2",
track_id="40130ed1-a27c-42fd-a328-1ebefb6caef4",
medium=2,
medium_index=1,
index=2,
medium_total=1,
),
],
artist="albumArtist",
artists=["albumArtist", "albumArtist2"],
album="album",
album_id="7edb51cb-77d6-4416-a23c-3a8c2994a2c7",
artist_id="a6623d39-2d8e-4f70-8242-0a9553b91e50",
artists_ids=None,
artist_credit="albumArtistCredit",
artists_credit=["albumArtistCredit1", "albumArtistCredit2"],
artist_sort=None,
artists_sort=["albumArtistSort", "albumArtistSort2"],
albumtype="album",
va=True,
mediums=2,
data_source="MusicBrainz",
year=2013,
month=12,
day=18,
genres=["Rock", "Pop"],
)
common_expected = {
"album": "album",
"albumartist_credit": "albumArtistCredit",
"albumartist": "albumArtist",
"albumartists": ["albumArtist", "albumArtist2"],
"albumartists_credit": [
"albumArtistCredit",
"albumArtistCredit1",
"albumArtistCredit2",
],
"albumartist_sort": "albumArtistSort",
"albumartists_sort": ["albumArtistSort", "albumArtistSort2"],
"albumtype": "album",
"albumtypes": ["album"],
"comp": True,
"disctotal": 2,
"mb_albumartistid": "a6623d39-2d8e-4f70-8242-0a9553b91e50",
"mb_albumartistids": ["a6623d39-2d8e-4f70-8242-0a9553b91e50"],
"mb_albumid": "7edb51cb-77d6-4416-a23c-3a8c2994a2c7",
"mb_artistid": "a6623d39-2d8e-4f70-8242-0a9553b91e50",
"mb_artistids": ["a6623d39-2d8e-4f70-8242-0a9553b91e50"],
"tracktotal": 2,
"year": 2013,
"month": 12,
"day": 18,
"genres": ["Rock", "Pop"],
}
self.expected_tracks = [
{
**common_expected,
"artist": "trackArtist",
"artists": ["trackArtist"],
"artist_credit": "trackArtistCredit",
"artist_sort": "trackArtistSort",
"artists_credit": ["trackArtistCredit"],
"artists_sort": ["trackArtistSort"],
"disc": 1,
"mb_trackid": "dfa939ec-118c-4d0f-84a0-60f3d1e6522c",
"title": "title",
"track": 1,
},
{
**common_expected,
"artist": "albumArtist",
"artists": ["albumArtist", "albumArtist2"],
"artist_credit": "albumArtistCredit",
"artist_sort": "albumArtistSort",
"artists_credit": [
"albumArtistCredit",
"albumArtistCredit1",
"albumArtistCredit2",
],
"artists_sort": ["albumArtistSort", "albumArtistSort2"],
"disc": 2,
"mb_trackid": "40130ed1-a27c-42fd-a328-1ebefb6caef4",
"title": "title2",
"track": 2,
},
]
def test_autotag_items(self):
self._apply()
keys = self.expected_tracks[0].keys()
get_values = operator.itemgetter(*keys)
applied_data = [
dict(zip(keys, get_values(dict(i)))) for i in self.items
]
assert applied_data == self.expected_tracks
def test_per_disc_numbering(self):
self._apply(per_disc_numbering=True)
assert self.items[0].track == 1
assert self.items[1].track == 1
assert self.items[0].tracktotal == 1
assert self.items[1].tracktotal == 1
def test_artist_credit_prefers_artist_over_albumartist_credit(self):
self.info.tracks[0].update(artist="oldArtist", artist_credit=None)
self._apply(artist_credit=True)
assert self.items[0].artist == "oldArtist"
def test_artist_credit_falls_back_to_albumartist(self):
self.info.artist_credit = None
self._apply(artist_credit=True)
assert self.items[1].artist == "albumArtist"
def test_date_only_zeroes_month_and_day(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(year=2013, month=None, day=None)
self._apply()
assert self.items[0].year == 2013
assert self.items[0].month == 0
assert self.items[0].day == 0
def test_missing_date_applies_nothing(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(year=None, month=None, day=None)
self._apply()
assert self.items[0].year == 1
assert self.items[0].month == 2
assert self.items[0].day == 3
def test_original_date_overrides_release_date(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(
year=2013,
month=12,
day=18,
original_year=1999,
original_month=4,
original_day=7,
)
self._apply(original_date=True)
assert self.items[0].year == 1999
assert self.items[0].month == 4
assert self.items[0].day == 7
class TestFromScratch:
@pytest.fixture(autouse=True)
def config(self, config):
config["import"]["from_scratch"] = True
@pytest.fixture
def album_info(self):
return AlbumInfo(
tracks=[TrackInfo(title="title", artist="track artist", index=1)]
)
@pytest.fixture
def item(self):
return Item(artist="old artist", comments="stale comment")
def test_album_match_clears_stale_metadata(self, album_info, item):
match = AlbumMatch(Distance(), album_info, {item: album_info.tracks[0]})
match.apply_metadata()
assert item.artist == "track artist"
assert item.comments == ""
def test_singleton_match_clears_stale_metadata(self, item):
match = TrackMatch(Distance(), TrackInfo(artist="track artist"), item)
match.apply_metadata()
assert item.artist == "track artist"
assert item.comments == ""
@pytest.mark.parametrize(
"overwrite_fields,expected_item_artist",
[
pytest.param(["artist"], "", id="overwrite artist"),
pytest.param([], "artist", id="do not overwrite artist"),
],
)
class TestOverwriteNull:
@pytest.fixture(autouse=True)
def config(self, config, overwrite_fields):
config["overwrite_null"]["album"] = overwrite_fields
config["overwrite_null"]["track"] = overwrite_fields
config["import"]["from_scratch"] = False
@pytest.fixture
def item(self):
return Item(artist="artist")
@pytest.fixture
def track_info(self):
return TrackInfo(artist=None)
def test_album(self, item, track_info, expected_item_artist):
match = AlbumMatch(
Distance(), AlbumInfo([track_info]), {item: track_info}
)
match.apply_metadata()
assert item.artist == expected_item_artist
def test_singleton(self, item, track_info, expected_item_artist):
match = TrackMatch(Distance(), track_info, item)
match.apply_metadata()
assert item.artist == expected_item_artist
@pytest.mark.parametrize(
"single_field,list_field",
[
("albumtype", "albumtypes"),
("artist", "artists"),
("artist_credit", "artists_credit"),
("artist_id", "artists_ids"),
("artist_sort", "artists_sort"),
],
)
@pytest.mark.parametrize(
"single_value,list_value,expected_values",
[
(None, [], (None, [])),
(None, ["1"], ("1", ["1"])),
(None, ["1", "2"], ("1", ["1", "2"])),
("1", [], ("1", ["1"])),
("1", ["1"], ("1", ["1"])),
("1", ["1", "2"], ("1", ["1", "2"])),
("1", ["2", "1"], ("1", ["1", "2"])),
("1", ["2"], ("1", ["1", "2"])),
("1 ft 2", ["1", "1 ft 2"], ("1 ft 2", ["1 ft 2", "1"])),
("1 FT 2", ["1", "1 ft 2"], ("1 FT 2", ["1", "1 ft 2"])),
("a", ["b", "A"], ("a", ["b", "A"])),
("1 ft 2", ["2", "1"], ("1 ft 2", ["2", "1"])),
],
)
def test_correct_list_fields(
single_field, list_field, single_value, list_value, expected_values
):
"""Verify that singular and plural field variants are kept consistent.
Checks that when both a single-value field and its list counterpart are
present, the function reconciles them: ensuring the single value appears
in the list and the list drives the canonical single value when needed.
"""
input_data = {single_field: single_value, list_field: list_value}
data = correct_list_fields(input_data)
assert (data[single_field], data[list_field]) == expected_values
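The reconciliation behavior these tests exercise can be approximated with a
simplified sketch. This is an illustration only, not the actual
``correct_list_fields`` implementation: beets' real logic also handles
featured-artist strings like ``"1 ft 2"``, which this sketch does not.

```python
def reconcile(single, values):
    """Keep a single-value field and its list counterpart consistent.

    - If the single value is missing, the list drives the canonical value.
    - If the single value matches a list entry exactly, move it to the front.
    - If it is absent (even case-insensitively), prepend it to the list.
    """
    values = list(values)
    if single is None:
        single = values[0] if values else None
    elif single in values:
        # Exact match: reorder so the canonical value leads the list.
        values.remove(single)
        values.insert(0, single)
    elif single.lower() not in (v.lower() for v in values):
        # Missing entirely: ensure the single value appears in the list.
        values.insert(0, single)
    return single, values
```

For example, ``reconcile("1", ["2", "1"])`` yields ``("1", ["1", "2"])``,
matching the corresponding parametrized case above.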
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Tests for autotagging functionality."""
import operator
import pytest
from beets.autotag.distance import Distance
from beets.autotag.hooks import (
AlbumInfo,
AlbumMatch,
Info,
TrackInfo,
TrackMatch,
correct_list_fields,
)
from beets.library import Item
from beets.test.helper import BeetsTestCase
@pytest.mark.parametrize(
"genre, expected_genres",
[
("Rock", ("Rock",)),
("Rock; Alternative", ("Rock", "Alternative")),
],
)
def test_genre_deprecation(genre, expected_genres):
with pytest.warns(
DeprecationWarning, match="The 'genre' parameter is deprecated"
):
assert tuple(Info(genre=genre).genres) == expected_genres
class ApplyTest(BeetsTestCase):
def _apply(
self,
per_disc_numbering=False,
artist_credit=False,
original_date=False,
from_scratch=False,
):
info = self.info
mapping = dict(zip(self.items, info.tracks))
self.config["per_disc_numbering"] = per_disc_numbering
self.config["artist_credit"] = artist_credit
self.config["original_date"] = original_date
self.config["import"]["from_scratch"] = from_scratch
amatch = AlbumMatch(Distance(), self.info, mapping)
amatch.apply_metadata()
def setUp(self):
super().setUp()
self.items = [Item(), Item()]
self.info = AlbumInfo(
tracks=[
TrackInfo(
title="title",
track_id="dfa939ec-118c-4d0f-84a0-60f3d1e6522c",
medium=1,
medium_index=1,
medium_total=1,
index=1,
artist="trackArtist",
artist_credit="trackArtistCredit",
artists_credit=["trackArtistCredit"],
artist_sort="trackArtistSort",
artists_sort=["trackArtistSort"],
),
TrackInfo(
title="title2",
track_id="40130ed1-a27c-42fd-a328-1ebefb6caef4",
medium=2,
medium_index=1,
index=2,
medium_total=1,
),
],
artist="albumArtist",
artists=["albumArtist", "albumArtist2"],
album="album",
album_id="7edb51cb-77d6-4416-a23c-3a8c2994a2c7",
artist_id="a6623d39-2d8e-4f70-8242-0a9553b91e50",
artists_ids=None,
artist_credit="albumArtistCredit",
artists_credit=["albumArtistCredit1", "albumArtistCredit2"],
artist_sort=None,
artists_sort=["albumArtistSort", "albumArtistSort2"],
albumtype="album",
va=True,
mediums=2,
data_source="MusicBrainz",
year=2013,
month=12,
day=18,
genres=["Rock", "Pop"],
)
common_expected = {
"album": "album",
"albumartist_credit": "albumArtistCredit",
"albumartist": "albumArtist",
"albumartists": ["albumArtist", "albumArtist2"],
"albumartists_credit": [
"albumArtistCredit",
"albumArtistCredit1",
"albumArtistCredit2",
],
"albumartist_sort": "albumArtistSort",
"albumartists_sort": ["albumArtistSort", "albumArtistSort2"],
"albumtype": "album",
"albumtypes": ["album"],
"comp": True,
"disctotal": 2,
"mb_albumartistid": "a6623d39-2d8e-4f70-8242-0a9553b91e50",
"mb_albumartistids": ["a6623d39-2d8e-4f70-8242-0a9553b91e50"],
"mb_albumid": "7edb51cb-77d6-4416-a23c-3a8c2994a2c7",
"mb_artistid": "a6623d39-2d8e-4f70-8242-0a9553b91e50",
"mb_artistids": ["a6623d39-2d8e-4f70-8242-0a9553b91e50"],
"tracktotal": 2,
"year": 2013,
"month": 12,
"day": 18,
"genres": ["Rock", "Pop"],
}
self.expected_tracks = [
{
**common_expected,
"artist": "trackArtist",
"artists": ["trackArtist"],
"artist_credit": "trackArtistCredit",
"artist_sort": "trackArtistSort",
"artists_credit": ["trackArtistCredit"],
"artists_sort": ["trackArtistSort"],
"disc": 1,
"mb_trackid": "dfa939ec-118c-4d0f-84a0-60f3d1e6522c",
"title": "title",
"track": 1,
},
{
**common_expected,
"artist": "albumArtist",
"artists": ["albumArtist", "albumArtist2"],
"artist_credit": "albumArtistCredit",
"artist_sort": "albumArtistSort",
"artists_credit": [
"albumArtistCredit",
"albumArtistCredit1",
"albumArtistCredit2",
],
"artists_sort": ["albumArtistSort", "albumArtistSort2"],
"disc": 2,
"mb_trackid": "40130ed1-a27c-42fd-a328-1ebefb6caef4",
"title": "title2",
"track": 2,
},
]
def test_autotag_items(self):
self._apply()
keys = self.expected_tracks[0].keys()
get_values = operator.itemgetter(*keys)
applied_data = [
dict(zip(keys, get_values(dict(i)))) for i in self.items
]
assert applied_data == self.expected_tracks
def test_per_disc_numbering(self):
self._apply(per_disc_numbering=True)
assert self.items[0].track == 1
assert self.items[1].track == 1
assert self.items[0].tracktotal == 1
assert self.items[1].tracktotal == 1
def test_artist_credit_prefers_artist_over_albumartist_credit(self):
self.info.tracks[0].update(artist="oldArtist", artist_credit=None)
self._apply(artist_credit=True)
assert self.items[0].artist == "oldArtist"
def test_artist_credit_falls_back_to_albumartist(self):
self.info.artist_credit = None
self._apply(artist_credit=True)
assert self.items[1].artist == "albumArtist"
def test_date_only_zeroes_month_and_day(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(year=2013, month=None, day=None)
self._apply()
assert self.items[0].year == 2013
assert self.items[0].month == 0
assert self.items[0].day == 0
def test_missing_date_applies_nothing(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(year=None, month=None, day=None)
self._apply()
assert self.items[0].year == 1
assert self.items[0].month == 2
assert self.items[0].day == 3
def test_original_date_overrides_release_date(self):
self.items = [Item(year=1, month=2, day=3)]
self.info.update(
year=2013,
month=12,
day=18,
original_year=1999,
original_month=4,
original_day=7,
)
self._apply(original_date=True)
assert self.items[0].year == 1999
assert self.items[0].month == 4
assert self.items[0].day == 7
class TestFromScratch:
@pytest.fixture(autouse=True)
def config(self, config):
config["import"]["from_scratch"] = True
@pytest.fixture
def album_info(self):
return AlbumInfo(
tracks=[TrackInfo(title="title", artist="track artist", index=1)]
)
@pytest.fixture
def item(self):
return Item(artist="old artist", comments="stale comment")
def test_album_match_clears_stale_metadata(self, album_info, item):
match = AlbumMatch(Distance(), album_info, {item: album_info.tracks[0]})
match.apply_metadata()
assert item.artist == "track artist"
assert item.comments == ""
def test_singleton_match_clears_stale_metadata(self, item):
match = TrackMatch(Distance(), TrackInfo(artist="track artist"), item)
match.apply_metadata()
assert item.artist == "track artist"
assert item.comments == ""
@pytest.mark.parametrize(
"overwrite_fields,expected_item_artist",
[
pytest.param(["artist"], "", id="overwrite artist"),
pytest.param([], "artist", id="do not overwrite artist"),
],
)
class TestOverwriteNull:
@pytest.fixture(autouse=True)
def config(self, config, overwrite_fields):
config["overwrite_null"]["album"] = overwrite_fields
config["overwrite_null"]["track"] = overwrite_fields
config["import"]["from_scratch"] = False
@pytest.fixture
def item(self):
return Item(artist="artist")
@pytest.fixture
def track_info(self):
return TrackInfo(artist=None)
def test_album(self, item, track_info, expected_item_artist):
match = AlbumMatch(
Distance(), AlbumInfo([track_info]), {item: track_info}
)
match.apply_metadata()
assert item.artist == expected_item_artist
def test_singleton(self, item, track_info, expected_item_artist):
match = TrackMatch(Distance(), track_info, item)
match.apply_metadata()
assert item.artist == expected_item_artist
@pytest.mark.parametrize(
"single_field,list_field",
[
("albumtype", "albumtypes"),
("artist", "artists"),
("artist_credit", "artists_credit"),
("artist_id", "artists_ids"),
("artist_sort", "artists_sort"),
],
)
@pytest.mark.parametrize(
"single_value,list_value,expected_values",
[
(None, [], (None, [])),
(None, ["1"], ("1", ["1"])),
(None, ["1", "2"], ("1", ["1", "2"])),
("1", [], ("1", ["1"])),
("1", ["1"], ("1", ["1"])),
("1", ["1", "2"], ("1", ["1", "2"])),
("1", ["2", "1"], ("1", ["1", "2"])),
("1", ["2"], ("1", ["1", "2"])),
("1 ft 2", ["1", "1 ft 2"], ("1 ft 2", ["1 ft 2", "1"])),
("1 FT 2", ["1", "1 ft 2"], ("1 FT 2", ["1", "1 ft 2"])),
("a", ["b", "A"], ("a", ["b", "A"])),
("1 ft 2", ["2", "1"], ("1 ft 2", ["2", "1"])),
],
)
def test_correct_list_fields(
single_field, list_field, single_value, list_value, expected_values
):
"""Verify that singular and plural field variants are kept consistent.
Checks that when both a single-value field and its list counterpart are
present, the function reconciles them: ensuring the single value appears
in the list and the list drives the canonical single value when needed.
"""
input_data = {single_field: single_value, list_field: list_value}
data = correct_list_fields(input_data)
assert (data[single_field], data[list_field]) == expected_values
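The parametrized cases above encode a reconciliation rule between singular and plural tag fields. Below is a minimal sketch of the simple part of that rule; the `reconcile` name is hypothetical, and it deliberately omits the multi-artist ("1 ft 2") and list-reordering cases that beets' real `correct_list_fields` also handles.

```python
def reconcile(single, multi):
    # Hypothetical simplification of the rule the cases above test.
    # Not covered here: joined multi-artist strings and reordering.
    if single is None:
        # The list drives the canonical single value.
        return (multi[0] if multi else None, multi)
    if single.lower() not in (v.lower() for v in multi):
        # Ensure the single value appears in the list (case-insensitive).
        multi = [single, *multi]
    return (single, multi)
```

For example, `reconcile("1", ["2"])` prepends the single value, while `reconcile("1 FT 2", ["1", "1 ft 2"])` leaves both fields unchanged because the membership check ignores case.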

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,494 +1,494 @@
# This file is part of beets.
# Copyright 2016, Adrian Sampson and Diego Moreda.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
import codecs
from typing import ClassVar
from unittest.mock import patch
from beets.dbcore.query import TrueQuery
from beets.importer import Action
from beets.library import Item
from beets.test import _common
from beets.test.helper import (
AutotagImportTestCase,
AutotagStub,
BeetsTestCase,
IOMixin,
PluginMixin,
TerminalImportMixin,
)
class ModifyFileMocker:
"""Helper for modifying a file, replacing or editing its contents. Used for
mocking the calls to the external editor during testing.
"""
def __init__(self, contents=None, replacements=None):
"""`self.contents` and `self.replacements` are initialized here, in
order to keep the rest of the functions of this class with the same
signature as `EditPlugin.get_editor()`, making mocking easier.
- `contents`: string with the contents of the file to be used for
`overwrite_contents()`
- `replacement`: dict with the in-place replacements to be used for
`replace_contents()`, in the form {'previous string': 'new string'}
TODO: check if it can be solved more elegantly with a decorator
"""
self.contents = contents
self.replacements = replacements
self.action = self.overwrite_contents
if replacements:
self.action = self.replace_contents
# The two methods below mock the `edit` utility function in the plugin.
def overwrite_contents(self, filename, log):
"""Modify `filename`, replacing its contents with `self.contents`. If
`self.contents` is empty, the file remains unchanged.
"""
if self.contents:
with codecs.open(filename, "w", encoding="utf-8") as f:
f.write(self.contents)
def replace_contents(self, filename, log):
"""Modify `filename`, reading its contents and replacing the strings
specified in `self.replacements`.
"""
with codecs.open(filename, "r", encoding="utf-8") as f:
contents = f.read()
for old, new_ in self.replacements.items():
contents = contents.replace(old, new_)
with codecs.open(filename, "w", encoding="utf-8") as f:
f.write(contents)
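The in-place replacement trick used by `replace_contents` can be shown standalone. This sketch (the `fake_edit` helper is hypothetical, not part of the plugin) performs the same read, substitute, and write cycle the mocker uses in place of a real editor:

```python
import codecs
import os
import tempfile

def fake_edit(filename, replacements):
    # Mirror ModifyFileMocker.replace_contents: read the file, apply
    # every string substitution, and write the result back in place.
    with codecs.open(filename, "r", encoding="utf-8") as f:
        contents = f.read()
    for old, new in replacements.items():
        contents = contents.replace(old, new)
    with codecs.open(filename, "w", encoding="utf-8") as f:
        f.write(contents)

# Exercise the helper on a throwaway file.
fd, path = tempfile.mkstemp(suffix=".yaml")
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    f.write("title: t\u00eftle\n")
fake_edit(path, {"t\u00eftle": "modified t\u00eftle"})
with open(path, encoding="utf-8") as f:
    result = f.read()
os.remove(path)
```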
class EditMixin(PluginMixin):
"""Helper containing some common functionality used for the Edit tests."""
plugin = "edit"
def assertItemFieldsModified(
self, library_items, items, fields=[], allowed=["path"]
):
"""Assert that items in the library (`lib_items`) have different values
on the specified `fields` (and *only* on those fields), compared to
`items`.
An empty `fields` list results in asserting that no modifications have
been performed. `allowed` is a list of field changes that are ignored
(they may or may not have changed; the assertion doesn't care).
"""
for lib_item, item in zip(library_items, items):
diff_fields = [
field
for field in lib_item._fields
if lib_item[field] != item[field]
]
assert set(diff_fields).difference(allowed) == set(fields)
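The assertion above reduces to a set comparison over differing fields. Here is a self-contained sketch of that check (`changed_fields` is a hypothetical name, and it operates on plain dicts rather than library items):

```python
def changed_fields(before, after, allowed=("path",)):
    # Collect the keys whose values differ between the two snapshots,
    # then drop the ones the caller explicitly allows to change.
    diff = {key for key in before if before[key] != after[key]}
    return diff - set(allowed)
```

With `allowed=("path",)`, a path change is ignored while any other difference is reported, matching the intent of `assertItemFieldsModified`.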
def run_mocked_interpreter(self, modify_file_args={}, stdin=[]):
"""Run the edit command during an import session, with mocked stdin and
yaml writing.
"""
m = ModifyFileMocker(**modify_file_args)
with patch("beetsplug.edit.edit", side_effect=m.action):
for char in stdin:
self.importer.add_choice(char)
self.importer.run()
def run_mocked_command(self, modify_file_args={}, stdin=[], args=[]):
"""Run the edit command, with mocked stdin and yaml writing, and
passing `args` to `run_command`."""
m = ModifyFileMocker(**modify_file_args)
with patch("beetsplug.edit.edit", side_effect=m.action):
for char in stdin:
self.io.addinput(char)
self.run_command("edit", *args)
@_common.slow_test()
@patch("beets.library.Item.write")
class EditCommandTest(IOMixin, EditMixin, BeetsTestCase):
"""Black box tests for `beetsplug.edit`. Command line interaction is
simulated using mocked stdin, and yaml editing via an external editor is
simulated using `ModifyFileMocker`.
"""
ALBUM_COUNT = 1
TRACK_COUNT = 10
def setUp(self):
super().setUp()
# Add an album, storing the original fields for comparison.
self.album = self.add_album_fixture(track_count=self.TRACK_COUNT)
self.album_orig = {f: self.album[f] for f in self.album._fields}
self.items_orig = [
{f: item[f] for f in item._fields} for item in self.album.items()
]
def test_title_edit_discard(self, mock_write):
"""Edit title for all items in the library, then discard changes."""
# Edit track titles.
self.run_mocked_command(
{"replacements": {"t\u00eftle": "modified t\u00eftle"}},
# Cancel.
["c"],
)
assert mock_write.call_count == 0
self.assertItemFieldsModified(self.album.items(), self.items_orig, [])
def test_title_edit_apply(self, mock_write):
"""Edit title for all items in the library, then apply changes."""
# Edit track titles.
self.run_mocked_command(
{"replacements": {"t\u00eftle": "modified t\u00eftle"}},
# Apply changes.
["a"],
)
assert mock_write.call_count == self.TRACK_COUNT
self.assertItemFieldsModified(
self.album.items(), self.items_orig, ["title", "mtime"]
)
def test_single_title_edit_apply(self, mock_write):
"""Edit title for one item in the library, then apply changes."""
# Edit one track title.
self.run_mocked_command(
{"replacements": {"t\u00eftle 9": "modified t\u00eftle 9"}},
# Apply changes.
["a"],
)
assert mock_write.call_count == 1
# No changes except on last item.
self.assertItemFieldsModified(
list(self.album.items())[:-1], self.items_orig[:-1], []
)
assert list(self.album.items())[-1].title == "modified t\u00eftle 9"
def test_title_edit_keep_editing_then_apply(self, mock_write):
"""Edit titles, keep editing once, then apply changes."""
self.run_mocked_command(
{"replacements": {"t\u00eftle": "modified t\u00eftle"}},
# keep Editing, then Apply
["e", "a"],
)
assert mock_write.call_count == self.TRACK_COUNT
self.assertItemFieldsModified(
self.album.items(),
self.items_orig,
["title", "mtime"],
)
def test_title_edit_keep_editing_then_cancel(self, mock_write):
"""Edit titles, keep editing once, then cancel."""
self.run_mocked_command(
{"replacements": {"t\u00eftle": "modified t\u00eftle"}},
# keep Editing, then Cancel
["e", "c"],
)
assert mock_write.call_count == 0
self.assertItemFieldsModified(
self.album.items(),
self.items_orig,
[],
)
def test_noedit(self, mock_write):
"""Do not edit anything."""
# Do not edit anything.
self.run_mocked_command(
{"contents": None},
# No stdin.
[],
)
assert mock_write.call_count == 0
self.assertItemFieldsModified(self.album.items(), self.items_orig, [])
def test_album_edit_apply(self, mock_write):
"""Edit the album field for all items in the library, apply changes.
By design, the album should not be updated.
"""
# Edit album.
self.run_mocked_command(
{"replacements": {"\u00e4lbum": "modified \u00e4lbum"}},
# Apply changes.
["a"],
)
assert mock_write.call_count == self.TRACK_COUNT
self.assertItemFieldsModified(
self.album.items(), self.items_orig, ["album", "mtime"]
)
# Ensure album is *not* modified.
self.album.load()
assert self.album.album == "\u00e4lbum"
def test_single_edit_add_field(self, mock_write):
"""Edit the yaml file appending an extra field to the first item, then
apply changes."""
# Append "foo: bar" to item with id == 2. ("id: 1" would match both
# "id: 1" and "id: 10")
self.run_mocked_command(
{"replacements": {"id: 2": "id: 2\nfoo: bar"}},
# Apply changes.
["a"],
)
assert self.lib.items("id:2")[0].foo == "bar"
# Even though a flexible attribute was written (which is not directly
# written to the tags), write should still be called since templates
# might use it.
assert mock_write.call_count == 1
def test_a_album_edit_apply(self, mock_write):
"""Album query (-a), edit album field, apply changes."""
self.run_mocked_command(
{"replacements": {"\u00e4lbum": "modified \u00e4lbum"}},
# Apply changes.
["a"],
args=["-a"],
)
self.album.load()
assert mock_write.call_count == self.TRACK_COUNT
assert self.album.album == "modified \u00e4lbum"
self.assertItemFieldsModified(
self.album.items(), self.items_orig, ["album", "mtime"]
)
def test_a_albumartist_edit_apply(self, mock_write):
"""Album query (-a), edit albumartist field, apply changes."""
self.run_mocked_command(
{"replacements": {"album artist": "modified album artist"}},
# Apply changes.
["a"],
args=["-a"],
)
self.album.load()
assert mock_write.call_count == self.TRACK_COUNT
assert self.album.albumartist == "the modified album artist"
self.assertItemFieldsModified(
self.album.items(), self.items_orig, ["albumartist", "mtime"]
)
def test_malformed_yaml(self, mock_write):
"""Edit the yaml file incorrectly (resulting in a malformed yaml
document)."""
# Edit the yaml file to an invalid file.
self.run_mocked_command(
{"contents": "!MALFORMED"},
# Edit again to fix? No.
["n"],
)
assert mock_write.call_count == 0
def test_invalid_yaml(self, mock_write):
"""Edit the yaml file incorrectly (resulting in a well-formed but
invalid yaml document)."""
# Edit the yaml file to an invalid but parseable file.
self.run_mocked_command(
{"contents": "wellformed: yes, but invalid"},
# No stdin.
[],
)
assert mock_write.call_count == 0
@_common.slow_test()
class EditDuringImporterTestCase(
EditMixin, TerminalImportMixin, AutotagImportTestCase
):
"""TODO"""
matching = AutotagStub.GOOD
IGNORED: ClassVar[list[str]] = ["added", "album_id", "id", "mtime", "path"]
def setUp(self):
super().setUp()
# Create some mediafiles, and store them for comparison.
self.prepare_album_for_import(1)
self.items_orig = [Item.from_path(f.path) for f in self.import_media]
@_common.slow_test()
class EditDuringImporterNonSingletonTest(EditDuringImporterTestCase):
def setUp(self):
super().setUp()
self.importer = self.setup_importer()
def test_edit_apply_asis(self):
"""Edit the album field for all items in the library, apply changes,
using the original item tags.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Tag Track": "Edited Track"}},
# eDit, Apply changes.
["d", "a"],
)
# Check that only the 'title' field is modified.
self.assertItemFieldsModified(
self.lib.items(),
self.items_orig,
["title", "albumartist", "albumartists"],
[*self.IGNORED, "mb_albumartistid", "mb_albumartistids"],
)
assert all("Edited Track" in i.title for i in self.lib.items())
# Ensure album is *not* fetched from a candidate.
assert self.lib.albums()[0].mb_albumid == ""
def test_edit_discard_asis(self):
"""Edit the album field for all items in the library, discard changes,
using the original item tags.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Tag Track": "Edited Track"}},
# eDit, Cancel, Use as-is.
["d", "c", "u"],
)
# Check that nothing is modified, the album is imported ASIS.
self.assertItemFieldsModified(
self.lib.items(),
self.items_orig,
[],
[*self.IGNORED, "albumartist", "mb_albumartistid"],
)
assert all("Tag Track" in i.title for i in self.lib.items())
# Ensure album is *not* fetched from a candidate.
assert self.lib.albums()[0].mb_albumid == ""
def test_edit_apply_candidate(self):
"""Edit the album field for all items in the library, apply changes,
using a candidate.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Applied Track": "Edited Track"}},
# edit Candidates, 1, Apply changes.
["c", "1", "a"],
)
# Check that 'title' field is modified, and other fields come from
# the candidate.
assert all("Edited Track " in i.title for i in self.lib.items())
assert all("match " in i.mb_trackid for i in self.lib.items())
# Ensure album is fetched from a candidate.
assert "albumid" in self.lib.albums()[0].mb_albumid
def test_edit_retag_apply(self):
"""Import the album using a candidate, then retag and edit and apply
changes.
"""
self.run_mocked_interpreter(
{},
# 1, Apply changes.
["1", Action.APPLY],
)
# Retag and edit track titles. On retag, the importer will reset items
# ids but not the db connections.
self.importer.paths = []
self.importer.query = TrueQuery()
self.run_mocked_interpreter(
{"replacements": {"Applied Track": "Edited Track"}},
# eDit, Apply changes.
["d", "a"],
)
# Check that 'title' field is modified, and other fields come from
# the candidate.
assert all("Edited Track " in i.title for i in self.lib.items())
assert all("match " in i.mb_trackid for i in self.lib.items())
# Ensure album is fetched from a candidate.
assert "albumid" in self.lib.albums()[0].mb_albumid
def test_edit_discard_candidate(self):
"""Edit the album field for all items in the library, discard changes,
using a candidate.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Applied Track": "Edited Track"}},
# edit Candidates, 1, Apply changes.
["c", "1", "a"],
)
# Check that 'title' field is modified, and other fields come from
# the candidate.
assert all("Edited Track " in i.title for i in self.lib.items())
assert all("match " in i.mb_trackid for i in self.lib.items())
# Ensure album is fetched from a candidate.
assert "albumid" in self.lib.albums()[0].mb_albumid
def test_edit_apply_candidate_singleton(self):
"""Edit the album field for all items in the library, apply changes,
using a candidate and singleton mode.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Applied Track": "Edited Track"}},
# edit Candidates, 1, Apply changes, aBort.
["c", "1", "a", "b"],
)
# Check that 'title' field is modified, and other fields come from
# the candidate.
assert all("Edited Track " in i.title for i in self.lib.items())
assert all("match " in i.mb_trackid for i in self.lib.items())
@_common.slow_test()
class EditDuringImporterSingletonTest(EditDuringImporterTestCase):
def setUp(self):
super().setUp()
self.importer = self.setup_singleton_importer()
def test_edit_apply_asis_singleton(self):
"""Edit the album field for all items in the library, apply changes,
using the original item tags and singleton mode.
"""
# Edit track titles.
self.run_mocked_interpreter(
{"replacements": {"Tag Track": "Edited Track"}},
# eDit, Apply changes, aBort.
["d", "a", "b"],
)
# Check that only the 'title' field is modified.
self.assertItemFieldsModified(
self.lib.items(),
self.items_orig,
["title"],
[*self.IGNORED, "albumartist", "mb_albumartistid"],
)
assert all("Edited Track" in i.title for i in self.lib.items())

File diff suppressed because it is too large.

BIN test/rsrc/no_ext (new file; binary contents not shown)



@@ -352,6 +352,47 @@ class ImportTest(PathsMixin, AutotagImportTestCase):
self.prepare_album_for_import(1)
self.setup_importer()
@unittest.skipIf(
not has_program("ffprobe", ["-L"]),
"need ffprobe for format recognition",
)
def test_recognize_format(self):
resource_src = os.path.join(_common.RSRC, b"no_ext")
resource_path = os.path.join(self.import_dir, b"no_ext")
util.copy(resource_src, resource_path)
self.setup_importer()
self.importer.paths = [resource_path]
self.importer.run()
assert self.lib.items().get().path.endswith(b".mp3")
@unittest.skipIf(
not has_program("ffprobe", ["-L"]),
"need ffprobe for format recognition",
)
def test_recognize_format_already_exist(self):
resource_path = os.path.join(_common.RSRC, b"no_ext")
temp_resource_path = os.path.join(self.temp_dir, b"no_ext")
util.copy(resource_path, temp_resource_path)
new_path = os.path.join(self.temp_dir, b"no_ext.mp3")
util.copy(temp_resource_path, new_path)
self.setup_importer()
self.importer.paths = [temp_resource_path]
with capture_log() as logs:
self.importer.run()
assert "Import file with matching format to original target" in logs
assert self.lib.items().get().path.endswith(b".mp3")
@unittest.skipIf(
not has_program("ffprobe", ["-L"]),
"need ffprobe for format recognition",
)
def test_recognize_format_not_music(self):
resource_path = os.path.join(_common.RSRC, b"no_ext_not_music")
self.setup_importer()
self.importer.paths = [resource_path]
self.importer.run()
assert len(self.lib.items()) == 0
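The three tests above rely on ffprobe to identify the container of an extension-less file and append a matching extension. A hedged sketch of the mapping step follows; the table and the `extension_for` helper are illustrative, not beets' actual code:

```python
# Illustrative mapping from an ffprobe-reported format name to the
# extension the importer would append for a recognized music file.
FORMAT_TO_EXT = {"mp3": ".mp3", "flac": ".flac", "ogg": ".ogg"}

def extension_for(format_name):
    # ffprobe can report comma-separated aliases, e.g. "mov,mp4,m4a";
    # take the first alias that names a supported music container.
    for name in format_name.split(","):
        if name in FORMAT_TO_EXT:
            return FORMAT_TO_EXT[name]
    return None  # not a recognized music container
```

A `None` result corresponds to the `test_recognize_format_not_music` case, where the importer skips the file entirely.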
def test_asis_moves_album_and_track(self):
self.importer.add_choice(importer.Action.ASIS)
self.importer.run()