Add check to skip HTTP fetch for non-HTTP URLs in processImageField(),
matching the existing behavior in setPerformerImage() and setStudioImage().
This allows scrapers to return base64 data URIs (e.g.,
`data:image/jpeg;base64,...`) directly without triggering an HTTP fetch
error. Previously, processImageField() would attempt to create an HTTP
request with the data URI as the URL, causing "Could not set image using
URL" warnings.
* Refactor scraper post-processing and process related objects consistently
* Refactor image processing
* Scrape related studio fields consistently
* Don't set image on related objects
* Sort tags by name while scraping scenes
* TagStore.All should sort by sort_name first
* Sort tag by sort name/name in TagIDSelect
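A sketch of the intended ordering, using an illustrative tag type (the field names are assumptions, not the stash models): sort by sort name when set, otherwise fall back to name.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// tag is an illustrative stand-in for the real tag model.
type tag struct {
	Name     string
	SortName string // optional; empty when unset
}

// sortKey prefers the sort name and falls back to the display name.
func sortKey(t tag) string {
	if t.SortName != "" {
		return strings.ToLower(t.SortName)
	}
	return strings.ToLower(t.Name)
}

// sortTags orders tags by sort name first, then name.
func sortTags(tags []tag) {
	sort.SliceStable(tags, func(i, j int) bool {
		return sortKey(tags[i]) < sortKey(tags[j])
	})
}

func main() {
	tags := []tag{{Name: "Zebra"}, {Name: "Apple", SortName: "zz"}, {Name: "Mango"}}
	sortTags(tags)
	fmt.Println(tags) // [{Mango } {Zebra } {Apple zz}]
}
```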
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Move stashbox package under pkg
* Remove StashBox from method names
* Add fingerprint conversion methods to Fingerprint
Refactor Fingerprints methods
* Make FindSceneByFingerprints accept fingerprints not scene ids
* Refactor SubmitSceneDraft to not require readers
* Have SubmitFingerprints accept scenes
Remove SceneReader dependency
* Move ScrapedScene to models package
* Move ScrapedImage into models package
* Move ScrapedGallery into models package
* Move Scene relationship matching out of stashbox package
This is now expected to be done in the client code
* Remove TagFinder dependency from stashbox.Client
* Make stashbox scene find full hierarchy of studios
* Move studio resolution into separate method
* Move studio matching out of stashbox package
This is now client code responsibility
* Move performer matching out of FindPerformerByID and FindPerformerByName
* Refactor performer querying logic and remove unused stashbox models
Renames FindStashBoxPerformersByPerformerNames to QueryPerformers and has it accept names instead of performer IDs
* Refactor SubmitPerformerDraft to not load relationships
This will be the responsibility of the calling code
* Remove repository references
* Move tag exclusion code back into scraper package
Reverts #2391
* Rearrange stash box client code
* Filter excluded tags in stashbox queries
Re-application of fix for #2379
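A sketch of the kind of post-query filtering this implies, assuming the excluded tags are configured as case-insensitive regular expression patterns (the configuration shape is an assumption):

```go
package main

import (
	"fmt"
	"regexp"
)

// filterExcludedTags drops any scraped tag name matching one of the
// configured exclusion patterns (compiled case-insensitively).
func filterExcludedTags(names []string, patterns []string) []string {
	var res []*regexp.Regexp
	for _, p := range patterns {
		if re, err := regexp.Compile("(?i)" + p); err == nil {
			res = append(res, re)
		}
	}

	var out []string
	for _, n := range names {
		excluded := false
		for _, re := range res {
			if re.MatchString(n) {
				excluded = true
				break
			}
		}
		if !excluded {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fmt.Println(filterExcludedTags(
		[]string{"4k", "interview", "behind the scenes"},
		[]string{"^4k$", "interview"},
	)) // [behind the scenes]
}
```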
* Upgrade gqlgenc and regenerate stash-box client
* Fix go version
* Don't generate resolvers
* Bump go version in compiler image. Bump freebsd version
* Rename Movie and MoviePartial to Group/GroupPartial
* Rename Movie interfaces
* Update movie url builders to use group
* Rename movieRoutes to groupRoutes
* Update dataloader
* Update names in sqlite package
* Rename in resolvers
* Add GroupByURL to scraper config
* Scraper backward compatibility hacks
* Forward non-http single performer images
* Don't set if Images already set
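A sketch of that backward-compatibility behaviour, with field names assumed for illustration: a legacy single Image that is not an HTTP URL is forwarded into Images, but only when Images is not already populated.

```go
package scraper

import "strings"

// scrapedPerformer is an illustrative stand-in for the real scraped model.
type scrapedPerformer struct {
	Image  *string  // legacy single-image field
	Images []string // preferred multi-image field
}

// forwardPerformerImage copies a legacy non-HTTP Image value (such as a data
// URI) into Images, without clobbering Images if the scraper already set it.
func forwardPerformerImage(p *scrapedPerformer) {
	if len(p.Images) > 0 || p.Image == nil {
		return
	}
	if !strings.HasPrefix(*p.Image, "http") {
		p.Images = []string{*p.Image}
	}
}
```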
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Set PYTHONPATH environment variable for Python script scrapers
* Convert PYTHONPATH to absolute
* Generalise and apply to plugins
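Roughly how the environment could be assembled for a script scraper or plugin; the helper below is an illustrative sketch, not stash's actual code:

```go
package python

import (
	"os"
	"os/exec"
	"path/filepath"
)

// commandWithPythonPath builds a python command whose PYTHONPATH points at
// the script's directory, converted to an absolute path so it keeps working
// regardless of the process's working directory.
func commandWithPythonPath(pythonBin, scriptPath string, args ...string) (*exec.Cmd, error) {
	dir, err := filepath.Abs(filepath.Dir(scriptPath))
	if err != nil {
		return nil, err
	}

	cmd := exec.Command(pythonBin, append([]string{scriptPath}, args...)...)
	cmd.Env = append(os.Environ(), "PYTHONPATH="+dir)
	return cmd, nil
}
```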
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Added Studio Code and Photographer to Galleries
* Fix gallery display on mobile
* Fixed potential panic when scraping with a bad configuration
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Log more when resolving Python
Users often have problems configuring their Python installations
* Convert if-else ladder to switch statement
* Consolidate Python resolution
Adds additional logging to plugin tasks to align with the logging that scrapers output.
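A sketch of the consolidated resolution, assuming a lookup that prefers an explicitly configured path and otherwise probes python3 then python on PATH (names, lookup order, and log messages are illustrative):

```go
package python

import (
	"log"
	"os/exec"
)

// Resolve locates the python executable, logging each step so that
// misconfigured installations are easier to diagnose.
func Resolve(configuredPath string) (string, error) {
	switch {
	case configuredPath != "":
		log.Printf("using configured python path: %s", configuredPath)
		return configuredPath, nil
	default:
		for _, name := range []string{"python3", "python"} {
			p, err := exec.LookPath(name)
			if err != nil {
				log.Printf("%s not found in PATH: %v", name, err)
				continue
			}
			log.Printf("found %s in PATH: %s", name, p)
			return p, nil
		}
		return "", exec.ErrNotFound
	}
}
```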
* Update a number of dependencies (incl. CVE fixes)
Includes the dependencies that were upgraded in #4106, as well as a few more.
Some of the upgraded deps had CVEs.
Notably, upgrades deprecated dependencies such as:
- `github.com/go-chi/chi` (replaced with `/v5`)
- `github.com/gofrs/uuid` (replaced with `/v5`)
- `github.com/hashicorp/golang-lru` (replaced with `/v2` which uses generics)
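For reference, the major-version bumps change the import paths, and golang-lru/v2 moves to a generic API; a small illustrative usage sketch:

```go
package main

import (
	"fmt"

	"github.com/go-chi/chi/v5"               // was github.com/go-chi/chi
	"github.com/gofrs/uuid/v5"               // was github.com/gofrs/uuid
	lru "github.com/hashicorp/golang-lru/v2" // was github.com/hashicorp/golang-lru
)

func main() {
	_ = chi.NewRouter()

	id, _ := uuid.NewV4()
	fmt.Println(id)

	// v2 of golang-lru is generic, so Get needs no type assertion
	cache, _ := lru.New[string, int](128)
	cache.Add("answer", 42)
	v, ok := cache.Get("answer")
	fmt.Println(v, ok)
}
```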
* Upgraded a few more deps
* lint
* reverted yaml library to v2
* remove unnecessary mod replace
* Update chromedp
Fixes #3733