The strings.Replace function counts the number of replacements; if the
count is 0, the original string is returned unchanged. Hence, there is
no need to check whether a replacement will happen before doing the work.
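A minimal illustration of that behavior (the strings here are arbitrary):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "no-op example"

	// strings.Replace first counts occurrences of the old substring;
	// with zero matches it returns s unchanged, so guarding the call
	// with strings.Contains adds nothing.
	out := strings.Replace(s, "missing", "found", -1)
	fmt.Println(out == s) // true
}
```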
* Remove stuff which isn't being used
Some fields, functions, and structs aren't used anywhere in the project.
Remove them for janitorial reasons.
* Remove more unused code
All of these functions are currently unused. Clean up the code by
removing them; version control retains the code if it's ever needed again.
* Remove unused functions
There's a large set of unused functions and variables in the code base.
Remove these so it is clearer what code needs to be supported going
forward. Dead code has been eliminated.
Where applicable, const sections in tests are commented out rather than
deleted, so the reserved identifiers are still known.
* Fix use-def of tsURL
The first definition of tsURL is dead: it is never used before the
second definition overwrites it.
* Remove dead code assignment
Setting logFile = "" is effectively dead code, because the value is
never read afterwards.
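A sketch of the dead-store pattern both of these commits address
(identifiers here are illustrative, not the project's actual code):

```go
package main

import "fmt"

func buildURL(host string, port int) string {
	return fmt.Sprintf("http://%s:%d", host, port)
}

func main() {
	// Dead store: this first value is overwritten before any read,
	// so the initial definition can simply be deleted.
	tsURL := "http://placeholder"
	tsURL = buildURL("localhost", 9999)
	fmt.Println(tsURL)
}
```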
* Comment out found
The variable 'found' is dead in the function (no post-processing action
follows it). Comment it out for now.
* Comment dead code in tests
These might provide hints as to what isn't covered at the moment.
* Dead code removal
Where constants are defined with iota, move the iota anchor so the
remaining constants keep their current key values.
This avoids problems with persistently stored key IDs.
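A sketch of the iota adjustment, with hypothetical key names:

```go
package main

import "fmt"

type keyID int

// Removing a constant from the middle of an iota block would shift
// every later value. Anchoring a blank identifier at the removed slot
// keeps the surviving values stable for IDs already persisted on disk.
const (
	imageKey keyID = iota // 0
	_                     // 1: hole left by the removed constant
	sceneKey              // 2: unchanged, matching stored key IDs
)

func main() {
	fmt.Println(imageKey, sceneKey) // 0 2
}
```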
* Fix error string capitalization
Error strings are often printed after another string, so they should
not be capitalized unless they start with a proper name.
* Uncapitalize more error strings
While here, use %v on the error directly, which makes it easier to wrap
the error later with %w if need be.
* Uncapitalize more error strings
While here, rename Url to URL as a nitpick.
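Both conventions in one hedged sketch (the message text is made up):

```go
package main

import (
	"errors"
	"fmt"
)

// Lowercase, since the message is typically printed mid-sentence
// after context added by the caller.
var errNotFound = errors.New("scraper not found")

func lookup(name string) error {
	// Formatting the error with a verb keeps the call site ready to
	// move from %v to %w when wrapping becomes useful.
	return fmt.Errorf("looking up %q: %w", name, errNotFound)
}

func main() {
	err := lookup("missing")
	fmt.Println(err)                         // looking up "missing": scraper not found
	fmt.Println(errors.Is(err, errNotFound)) // true, thanks to %w
}
```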
* Unify scraped types
* Make name fields optional
* Unify single scrape queries
* Change UI to use new interfaces
* Add multi scrape interfaces
* Use images instead of image
* Add config option for scraper tag exclusion patterns
Add a config option for excluding tags / tag patterns from the scraper
results.
* Handle tag exclusion patterns during scraping
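A possible shape for the exclusion matching; the case-insensitive regex
handling is an assumption, not necessarily the project's exact behavior:

```go
package main

import (
	"fmt"
	"regexp"
)

// excludeTags drops scraped tags matching any exclusion pattern.
func excludeTags(tags []string, patterns []string) []string {
	var res []*regexp.Regexp
	for _, p := range patterns {
		re, err := regexp.Compile("(?i)" + p)
		if err != nil {
			continue // ignore invalid patterns in this sketch
		}
		res = append(res, re)
	}

	var kept []string
	for _, t := range tags {
		excluded := false
		for _, re := range res {
			if re.MatchString(t) {
				excluded = true
				break
			}
		}
		if !excluded {
			kept = append(kept, t)
		}
	}
	return kept
}

func main() {
	tags := []string{"keep-me", "wip", "wip-draft"}
	fmt.Println(excludeTags(tags, []string{"^wip"})) // [keep-me]
}
```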
* Add Tag Update/UpdateFull
* Tag alias implementation
* Refactor tag page
* Add aliases in UI
* Include tag aliases in q filter
* Include aliases in tag select
* Add aliases to auto-tagger
* Use aliases in scraper
* Add tag aliases for filename parser
* cleanup: remove dead code
Remove some code that does nothing.
* cleanup: fixing usage of deprecated gqlgen/graphql api in api/changeset_translator
* cleanup: changing to recommended comparison methods
Change byte and case-insensitive string comparisons to the recommended methods.
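The recommended methods here are presumably bytes.Equal and
strings.EqualFold, the usual staticcheck suggestions:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

func main() {
	a, b := []byte("stash"), []byte("stash")

	// Preferred over bytes.Compare(a, b) == 0.
	fmt.Println(bytes.Equal(a, b)) // true

	// Preferred over strings.ToLower(x) == strings.ToLower(y),
	// which allocates two throwaway strings.
	fmt.Println(strings.EqualFold("Stash", "sTASH")) // true
}
```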
* cleanup: making staticcheck happy
* Apply all post processors to performer
Scraping a performer by fragment doesn't work correctly with tags: when
tags are returned by the scraper, they are all recognized as new. This
is because the post-process method is not applied on this path, even
though it should be, as is done when scraping a performer by URL.
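A self-contained sketch of the mechanism; the type and function names
are illustrative, not the project's actual identifiers:

```go
package main

import "fmt"

type scrapedTag struct {
	Name     string
	StoredID *string // nil means "new tag"
}

type scrapedPerformer struct {
	Name string
	Tags []scrapedTag
}

// postScrapePerformer matches scraped tag names against existing tags;
// the map stands in for a database lookup.
func postScrapePerformer(p *scrapedPerformer, existing map[string]string) *scrapedPerformer {
	for i := range p.Tags {
		if id, ok := existing[p.Tags[i].Name]; ok {
			idCopy := id
			p.Tags[i].StoredID = &idCopy
		}
	}
	return p
}

func main() {
	existing := map[string]string{"blonde": "12"}
	p := &scrapedPerformer{Name: "Jane", Tags: []scrapedTag{{Name: "blonde"}}}

	// The bug: the fragment-scrape path skipped this call, so every
	// returned tag looked new. The fix applies it on both paths, as
	// the URL-scrape path already did.
	postScrapePerformer(p, existing)
	fmt.Println(*p.Tags[0].StoredID) // 12
}
```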
* Make config instance-based
* Remove config dependency in paths
* Refactor config init
* Allow startup without database
* Get system status at UI initialise
* Add setup wizard
* Cache and Metadata optional. Database mandatory
* Handle metadata not set during full import/export
* Add links
* Remove config check middleware
* Stash not mandatory
* Panic on missing mandatory config fields
* Redirect setup to main page if setup not required
* Add migration UI
* Remove unused stuff
* Move UI initialisation into App
* Don't create metadata paths on RefreshConfig
* Add folder selector for generated in setup
* Env variable to set and create config file.
Make docker images use a fixed config file.
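A sketch of the startup logic, assuming a hypothetical variable name
(check the project docs for the real one):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical env var name, for illustration only.
	path := os.Getenv("STASH_CONFIG_FILE")
	if path == "" {
		return // fall back to the default config search paths
	}

	// Create the file (and its directory) if it doesn't exist yet, so
	// a fixed path baked into a docker image works on first start.
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	fmt.Println("using config file:", path)
}
```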
* Set config file during setup
* Add scraping support for performer tags
* Add performer count to tag cards
* Refactor sqlite test setup
* Add performer tag filtering in gallery and image
* Add bulk update performer
* Add Performers tab to tag page
* Add count filters and sort bys for tags
* Move scene count to icon in performer card #1148
* find correct python executable
For script scrapers using python, both python and python3 are valid names depending on the OS and running environment. To save users from any issues, this change finds the correct executable for them.
Co-authored-by: bnkai <bnkai@users.noreply.github.com>
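A minimal version of the lookup, simplified from whatever the change
actually does:

```go
package main

import (
	"fmt"
	"os/exec"
)

// findPython returns the first python executable available on PATH,
// preferring python3. A sketch of the idea, not the project's exact
// lookup order.
func findPython() (string, error) {
	for _, name := range []string{"python3", "python"} {
		if path, err := exec.LookPath(name); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("no python executable found in PATH")
}

func main() {
	path, err := findPython()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using", path)
}
```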
* Expose url for URLReplace in JSON scrapeByURL and scrapeByFragment
* Apply queryURLReplace to xpath scrapers
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Fix potential image errors
* Fix issue preventing favoriting of tagged performers
* Add error handling in case of network issues
* Show individual search errors
* Unset scene results if query fails
* Don't abort scene submission if scene id isn't found
* Add gql client generation files
* Update dependencies
* Add stash-box client generation to the makefile
* Move scraped scene object matchers to models
* Add stash-box to scrape with dropdown
* Add scrape scene from fingerprint in UI
* api/urlbuilders/movie: Auto format.
* graphql+pkg+ui: Implement scraping movies by URL.
This patch implements the missing required boilerplate for scraping
movies by URL, using performers and scenes as a reference.
Although this patch contains a big chunk of groundwork for enabling
scraping movies by fragment, that feature would require additional
changes to be completely implemented and was not tested.
* graphql+pkg+ui: Scrape movie studio.
Extends and corrects the movie model so that studio IDs can be stored
and resolved from the studio string received from the scraper. This was
done with scenes as a reference. For simplicity, the duplication of
having `ScrapedMovieStudio` and `ScrapedSceneStudio` was kept; these
should probably be refactored into a single type in the model in the
future.
* ui/movies: Add movie scrape dialog.
Adds the possibility to update existing movie entries with the URL
scraper. For this, MovieScrapeDialog.tsx was implemented with performers
and scenes as a reference. In addition, DurationUtils needs to be called
once to convert seconds from the model into the string displayed in the
component. This seemed the least intrusive approach to me, as it kept a
ScrapeResult<string> type compatible with ScrapedInputGroupRow.
* Refactor xpath scraper code
* Make post-process a list
* Add map post-process action
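A sketch of what a map post-process action might do to a scraped value
(the names and mapping are illustrative):

```go
package main

import "fmt"

// applyMapAction replaces a scraped value that appears as a key in the
// mapping with the mapped value; anything else passes through unchanged.
func applyMapAction(value string, mapping map[string]string) string {
	if mapped, ok := mapping[value]; ok {
		return mapped
	}
	return value
}

func main() {
	genderMap := map[string]string{"F": "Female", "M": "Male"}
	fmt.Println(applyMapAction("F", genderMap))     // Female
	fmt.Println(applyMapAction("Other", genderMap)) // Other
}
```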
* Add fixed xpath values
* Refactor scrapers into cache
* Refactor into mapped config
* Trim test html
* Add reload scraper option to performer details
* Add scraper reload to scene edit page
* Show scene scraper menu when no queryable scrapers
* Add 0.3 changelog
* Change scrape matching (studio, movies, tag, performers) to case insensitive
* fix collate order
* make filename parser findbyname calls case insensitive
* add unit testing for Tags GetFindbyName/s
* Add lint/format checks to build
* Make travis get full repo to get tags
* Run packr2 once in cross-compile
* Fix quotes in package.json
* Fix linting issues
* Formatting
* Fix vet issue
* Fix go lint issues
* Show start of each platform compilation
* Add validate target
* Set gitattributes for go fmt and mod vendor
* Fix tag name
* Add fmt-ui target
* Add sub-scraper functionality
* Add scraping of performer image
* Add scene cover image scraping
* Port UI changes to v2.5
* Fix v2.5 dialog suggest color
* Don't convert eol of UI to support pretty
* Add stash scraper type
* Add graphql client to vendor
* Embed stash credentials in URL
* Fill URL from scraped scene
* Nil IDs returned from remote stash
* Nil check