* Support excludes field
* Refactor studio filter
* Refactor tags filter
* Support excludes in tags
---------
Co-authored-by: Kermie <kermie@isinthe.house>
* Add penis length stat to performers.
* Modified the UI to display and edit the stat.
* Added the ability to filter on float values, allowing filtering by penis length.
* Add circumcision stat to performer.
* Refactor enum filtering
* Change boolean filter to radio buttons
* Return null for empty enum values
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* GalleryInExClusion // Create gallery from folder based on file; short description in settings
* GalleryInExClusion // No folder iteration; expanded docs
* GalleryInExClusion // Only accept lowercase files
* GalleryInExClusion // Correct text in settings
* Close input file so SafeMove can delete it
This happens on Windows over the network: at the end of SafeMove, the
move fails with an error that the input file can't be removed because
it is in use. It turns out it is in use by SafeMove itself :)
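A minimal sketch of the fix (illustrative, not the actual stash code): in a copy-then-delete move, the source handle must be closed before the final remove, or Windows/SMB reports the file as still in use.

```go
package sketch

import (
	"io"
	"os"
)

// safeMove copies src to dst and then removes src. Closing the input
// before os.Remove is the fix: the open handle was what made the final
// delete fail on Windows and over the network.
func safeMove(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}

	out, err := os.Create(dst)
	if err != nil {
		in.Close()
		return err
	}

	_, copyErr := io.Copy(out, in)
	out.Close()
	in.Close() // must happen before the remove below
	if copyErr != nil {
		return copyErr
	}

	return os.Remove(src)
}
```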
* Copy the src file mod time
* Limit duplicate matching to files that have ~ same duration
* Add UI for duration diff
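The duration gate in brief (a sketch; the real option and field names may differ):

```go
package sketch

import "math"

// durationsMatch reports whether two durations (in seconds) are close
// enough, per the user-configured duration diff, to be considered
// duplicate candidates.
func durationsMatch(a, b, diffSeconds float64) bool {
	return math.Abs(a-b) <= diffSeconds
}
```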
---------
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* initial commit of sort performer by o-count
* work on o_counter filter
* filter working
* sorting, filtering using combined scene+image count
* linting
* fix performer list view
---------
Co-authored-by: jpnsfw <none@none.com>
* Fix error if movie back image blob was not found
* Don't error out if scene cover get fails
* Don't error out on image get fails
* Add debug logging for fs blobs
* Remove old blob data when no longer referenced
* Refactor transaction hooks. Add preCommit
* Add BlobStore
* Use blobStore for tag images
* Use blobStore for studio images
* Use blobStore for performer images
* Use blobStore for scene covers
* Don't generate screenshots in legacy directory
* Run post-hooks outside original transaction
* Use blobStore for movie images
* Remove unnecessary DestroyImage methods
* Add missing filter for scene cover
* Add covers to generate options
* Add generate cover option to UI
* Add screenshot migration
* Delete thumb files as part of screenshot migration
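For orientation, a rough sketch of the shape the checksum-addressed blob store above takes (method names are illustrative, not the actual stash API):

```go
package sketch

import "context"

// BlobStore stores binary data (scene covers, performer/tag/studio and
// movie images) addressed by checksum, backed by either the database or
// the filesystem depending on configuration.
type BlobStore interface {
	// Read returns the blob for a checksum.
	Read(ctx context.Context, checksum string) ([]byte, error)
	// Write stores data and returns its checksum; identical data
	// deduplicates to the same entry.
	Write(ctx context.Context, data []byte) (string, error)
	// Delete removes blob data once no object references it.
	Delete(ctx context.Context, checksum string) error
}
```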
* HW Accel
* CUDA Docker build and adjust the NVENC encoder
* Removed NVENC preset
Legacy presets were deprecated in SDK 10 and removed in SDK 12. This
commit removes the preset so that ffmpeg selects the default one.
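Roughly, the encoder invocation now leaves the preset unset (the flags shown are standard ffmpeg/NVENC usage, not the exact stash arguments):

```go
package sketch

// nvencArgs builds ffmpeg arguments for NVENC H.264 encoding. There is
// deliberately no "-preset" flag: legacy presets were deprecated in SDK
// 10 and removed in SDK 12, so ffmpeg picks its own default.
func nvencArgs(input, output string) []string {
	return []string{
		"-i", input,
		"-c:v", "h264_nvenc",
		output,
	}
}
```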
---------
Co-authored-by: Nodude <>
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Added the ability to do Sequential Scans
* Modify pkg/txn to run hooks with the outer context, instead of the context that was in a transaction
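A simplified sketch of that idea (the real pkg/txn API differs, and how fn reaches the transaction is elided here): hooks registered during the transaction run afterwards with the outer context, not the transaction's.

```go
package sketch

import (
	"context"
	"database/sql"
)

type hookKey struct{}

// WithTxn runs fn in a transaction; post-commit hooks registered inside
// it run afterwards with the *outer* context, not the context that
// carried the transaction.
func WithTxn(ctx context.Context, db *sql.DB, fn func(ctx context.Context) error) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}

	var postCommit []func(context.Context)
	txnCtx := context.WithValue(ctx, hookKey{}, &postCommit)

	if err := fn(txnCtx); err != nil {
		tx.Rollback()
		return err
	}
	if err := tx.Commit(); err != nil {
		return err
	}

	// The transaction is done: run hooks with the outer ctx, so they
	// can open their own transactions and outlive the committed one.
	for _, hook := range postCommit {
		hook(ctx)
	}
	return nil
}

// AddPostCommitHook registers a hook from within fn.
func AddPostCommitHook(ctx context.Context, hook func(context.Context)) {
	if hooks, ok := ctx.Value(hookKey{}).(*[]func(context.Context)); ok {
		*hooks = append(*hooks, hook)
	}
}
```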
* update in application manual
* Fix possible infinite loop/stack overflow with weird/broken zip files
* Fix path length calculation using bytes instead of characters (runes)
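The gist of that fix: `len` on a Go string counts bytes, so multi-byte characters inflate the count; the path-length check needs runes.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	path := "café/日本語.mp4"
	fmt.Println(len(path))                    // 19: bytes
	fmt.Println(utf8.RuneCountInString(path)) // 12: characters (runes)
}
```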
* Fix bug where oshash gets buffers with size not actually multiple of 8
* Add oshash tests
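The invariant behind the buffer fix, sketched (the real oshash code reads fixed-size head/tail chunks of the file): the buffer is folded as little-endian uint64 words, so its length must be a multiple of 8.

```go
package sketch

import (
	"encoding/binary"
	"fmt"
)

// sumBytes folds buf as little-endian uint64 words. A buffer whose
// length is not a multiple of 8 would silently drop or mis-read bytes,
// which is the class of bug the fix guards against.
func sumBytes(buf []byte) (uint64, error) {
	if len(buf)%8 != 0 {
		return 0, fmt.Errorf("oshash: buffer length %d is not a multiple of 8", len(buf))
	}
	var sum uint64
	for i := 0; i < len(buf); i += 8 {
		sum += binary.LittleEndian.Uint64(buf[i : i+8])
	}
	return sum, nil
}
```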
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Treat no output from ffmpeg as an error condition
* Distinguish file vs. video duration, and use the latter where appropriate
* Check for empty file in generateFile
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* track watchtime and view time
* add view count sorting, added continue position filter
* display metrics in file info
* add toggle for tracking activity
* save activity every 10 seconds
* reset resume when video is nearly complete
* start from beginning when playing scene in queue
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Make read-only operations use WithReadTxn
* Allow one database write thread
* Add unit test for concurrent transactions
* Perform some actions after commit to release txn
* Suppress some errors from cancelled context
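A minimal sketch of the one-writer policy (assuming SQLite; names are illustrative): writes queue behind a mutex while reads run concurrently through the read-only wrapper.

```go
package sketch

import (
	"context"
	"database/sql"
	"sync"
)

// writeMu admits one write transaction at a time. SQLite only supports
// a single writer anyway, so queueing in Go turns "database is locked"
// errors into a short wait.
var writeMu sync.Mutex

func WithWriteTxn(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
	writeMu.Lock()
	defer writeMu.Unlock()

	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	if err := fn(tx); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}
```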
* graphql: support date and timestamp filter types
* sql: add support for date & timestamp criterions
* ui: add support for date and timestamp criterions
* scenes: add support for filtering by date, created at and updated at
* image: support filtering by created at and updated at
* gallery: support filtering by date, created at and updated at
* movie: support filtering by date, created at and updated at
* studio: support filtering by date, created at and updated at
* tag: support filtering by date, created at and updated at
* performer: support filtering by birth & death date and created & updated at
* marker: support filtering by created & updated at and scene date, created & updated at
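Sketch of how a date criterion might turn into SQL (modifier names follow the existing criterion style; the actual query builder differs):

```go
package sketch

// dateClause maps a date criterion to a WHERE fragment and its
// arguments. Only a few modifiers are shown.
func dateClause(column, modifier, value string) (string, []interface{}) {
	switch modifier {
	case "EQUALS":
		return column + " = ?", []interface{}{value}
	case "GREATER_THAN":
		return column + " > ?", []interface{}{value}
	case "LESS_THAN":
		return column + " < ?", []interface{}{value}
	case "IS_NULL":
		return column + " IS NULL", nil
	default:
		return "", nil
	}
}
```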
* Reassign scene file functionality
* Implement scene create
* Add scene create UI
* Add sceneMerge backend support
* Add merge scene to UI
* Populate split create with scene details
* Add merge button to duplicate checker
* Handle file-less scenes in marker preview generate
* Make unique file name for file-less scene exports
* Add o-counter to scene update input
* Hide rescan for file-less scenes
* Generate heatmap if no speed set on file
* Fix count in scene/image queries
* added schema migration and updated data models
* added code and director to UI
* new fields are exported and imported
* added filters
* Add changelog entry
* Change performer country value to be ISO code
* Localize country names
* Use country select for filter
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* add descriptions to tags
* display tag description and tag image on hover
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Fire handlers when file updated or moved
* Create galleries as needed
* Clean empty galleries
* Handle cleaning zip folders when path changed
* Fix gallery association on duplicate images
* Re-create missing folder-based galleries
* Only update fingerprints if changed
* Fix panic when loading primary file fails
* Fix gallery/scene association
* Fix display of scene gallery in card
* Use natural_cs collation with paths for title sorting
* Don't recalculate MD5 if not enabled
Remove MD5 if oshash has changed and MD5 was not calculated.
* Fix panic in paged DLNA
* Prevent identical hashes in stash-box drafts
* Fix incorrect timestamp updates
* Correct folder time fields
* Add migration with new indexes
* Correct mod_time format
* Add mod_time to data massage
* Load scene relationships on demand
* Load image relationships on demand
* Load gallery relationships on demand
* Add dataloaden
* Use dataloaders
* Use where in for other find many functions
* Do database txn in same thread. Retry on locked db
* Remove captions from SlimSceneData
* Fix tracing
* Use where in instead of individual selects
* Remove scenes_query view
* Remove image query view
* Remove gallery query view
* Use where in for FindMany
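The "where in" pattern in brief: one query for N ids instead of N individual selects. A sketch with illustrative table/column names:

```go
package sketch

import (
	"context"
	"database/sql"
	"strings"
)

// findManyTitles fetches N rows with a single IN query rather than one
// query per id.
func findManyTitles(ctx context.Context, db *sql.DB, ids []int) (map[int]string, error) {
	if len(ids) == 0 {
		return nil, nil
	}

	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(ids)), ",")
	args := make([]interface{}, len(ids))
	for i, id := range ids {
		args[i] = id
	}

	rows, err := db.QueryContext(ctx,
		"SELECT id, title FROM scenes WHERE id IN ("+placeholders+")", args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	out := make(map[int]string, len(ids))
	for rows.Next() {
		var (
			id    int
			title string
		)
		if err := rows.Scan(&id, &title); err != nil {
			return nil, err
		}
		out[id] = title
	}
	return out, rows.Err()
}
```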
* Don't interrupt scanning zip files
* Fix image filesize sort
* Use cache during migration
* Avoid use of query views
* Use FindMany to find related objects
* Log slow queries
* Add folders to generated files
* Use SlimScene for scene queries
* Include filename in migration error message
* Fix destroy gallery not destroying file
* Re-add minModTime functionality
* Deprecate useFileMetadata and stripFileExtension
* Optimise files post migration
* Decorate moved files. Use first missing file in move
* Include path in thumbnail generation error log
* Fix stash-box draft submission
* Don't destroy files unless deleting
* Call handler for files with no associated objects
* Fix moved zips causing error on scan
* Restructure data layer part 2 (#2599)
* Refactor and separate image model
* Refactor image query builder
* Handle relationships in image query builder
* Remove relationship management methods
* Refactor gallery model/query builder
* Add scenes to gallery model
* Convert scene model
* Refactor scene models
* Remove unused methods
* Add unit tests for gallery
* Add image tests
* Add scene tests
* Convert unnecessary scene value pointers to values
* Convert unnecessary pointer values to values
* Refactor scene partial
* Add scene partial tests
* Refactor ImagePartial
* Add image partial tests
* Refactor gallery partial update
* Add partial gallery update tests
* Use zero/null package for null values
* Add files and scan system
* Add sqlite implementation for files/folders
* Add unit tests for files/folders
* Image refactors
* Update image data layer
* Refactor gallery model and creation
* Refactor scene model
* Refactor scenes
* Don't set title from filename
* Allow galleries to freely add/remove images
* Add multiple scene file support to graphql and UI
* Add multiple file support for images in graphql/UI
* Add multiple file for galleries in graphql/UI
* Remove use of some deprecated fields
* Remove scene path usage
* Remove gallery path usage
* Remove path from image
* Move funscript to video file
* Refactor caption detection
* Migrate existing data
* Add post commit/rollback hook system
* Lint. Comment out import/export tests
* Add WithDatabase read only wrapper
* Prepend tasks to list
* Add 32 pre-migration
* Add warnings in release and migration notes
* refactored common code in recommendation row
* Implement front page options in config
* Allow customisation from front page
* Rename recommendations to front page
* Add generic UI settings
* Support adding premade filters
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Make the script scraper context-aware
Connect the context to the command execution. This means command
execution can be aborted if the context is canceled. The context is
usually bound to user-interaction, i.e., a scraper operation issued
by the user. Hence, it seems correct to abort a command if the user
aborts.
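The core of the change fits in a couple of lines (a sketch; the real invocation passes more setup):

```go
package sketch

import (
	"context"
	"os/exec"
)

// runScraperScript binds the script process to ctx: if the user aborts
// the scrape (cancelling ctx), the process is killed rather than left
// running.
func runScraperScript(ctx context.Context, path string, args ...string) ([]byte, error) {
	cmd := exec.CommandContext(ctx, path, args...)
	return cmd.Output()
}
```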
* Enable errchkjson
Some json marshal calls are *safe* in that they can never fail. This is
conditional on the types of the data being encoded. errchkjson finds
those calls which are unsafe and not checked for errors.
Add logging warnings to the places where unsafe encodings might happen.
This can help uncover usage bugs early in stash if they are tripped,
making debugging easier.
While here, keep the checker enabled in the linter to capture future
uses of json marshalling.
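The pattern, sketched with the standard log package (stash uses its own logger):

```go
package sketch

import (
	"encoding/json"
	"io"
	"log"
)

// encodeResponse checks the error even for "safe" marshals. Such a
// marshal can only fail on a usage bug (e.g. a chan or func field),
// and the warning surfaces that bug early.
func encodeResponse(w io.Writer, v interface{}) {
	data, err := json.Marshal(v)
	if err != nil {
		log.Printf("warning: could not marshal response: %v", err)
		return
	}
	if _, err := w.Write(data); err != nil {
		log.Printf("warning: could not write response: %v", err)
	}
}
```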
* Pass the context for zip file scanning.
* Pass the context in scanning
* Pass context, replace context.TODO()
Where applicable, pass the context down toward the lower functions in
the call stack. Replace uses of context.TODO() with the passed context.
This makes the code more context-aware, and you can rely on aborting
contexts to clean up subsystems to a far greater extent now.
I've left the cases where there is a context in a struct. My gut feeling
is that they have nice solutions, but they require deeper thinking to
figure out how to handle them.
* Remove context from task-structs
As a rule, contexts are better passed explicitly to functions than they
are passed implicitly via structs. In the case of tasks, we already
have a valid context in scope when creating the struct, so remove ctx
from the struct and use the scoped context instead.
With this change it is clear that the scanning functions are under a
context, and the task-starting caller has jurisdiction over the context
and its lifetime. A reader of the code doesn't have to figure out where
the context is coming from anymore.
While here, connect context.TODO() to the newly scoped context in most
of the scan code.
* Remove context from autotag struct too
* Make more context-passing explicit
In all of these cases, there is an applicable context which is close
in the call-tree. Hook up to this context.
* Simplify context passing in manager
The manager's context handling generally wants to use an outer context
if applicable. However, the code doesn't pass it explicitly, but stores
it in a struct. Pull the context out of the struct and pass it
explicitly.
At a later point in time, we probably want to handle this by handing
over the job to a different (program-lifetime) context for background
jobs, but this will do for a start.
* Upgrade gqlgen to v0.17.2
This enables builds on Go 1.18. github.com/vektah/gqlparser is upgraded
to the newest version too.
Getting this to work is a bit of a hassle. I had to first remove
vendoring from the repository, perform the upgrade and then re-introduce
the vendor directory. I think gqlgen's analysis went wrong for some
reason on the upgrade. It would seem a clean-room installation fixed it.
* Bump project to 1.18
* Update all packages, address gqlgenc breaking changes
* Let `go mod tidy` handle the go.mod file
* Upgrade linter to 1.45.2
* Introduce v1.45.2 of the linter
The linter now correctly warns on `strings.Title` because it isn't
unicode-aware. Fix this by using the suggested fix from x/text/cases
to produce unicode-aware strings.
The mapping isn't entirely 1-1, as this new approach has a larger
interface: it spans all of unicode rather than just ASCII. It coincides
with the old behavior for ASCII, however, so things should be largely
the same.
* Ready ourselves for errchkjson and contextcheck.
* Revert dockerfile golang version changes for now
Co-authored-by: Kermie <kermie@isinthe.house>
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Move main to cmd
* Move api to internal
* Move logger and manager to internal
* Move shell hiding code to separate package
* Decouple job from desktop and utils
* Decouple session from config
* Move static into internal
* Decouple config from dlna
* Move desktop to internal
* Move dlna to internal
* Decouple remaining packages from config
* Move config into internal
* Move jsonschema and paths to models
* Make ffmpeg functions private
* Move file utility methods into fsutil package
* Move symwalk into fsutil
* Move single-use util functions into client package
* Move slice functions to separate packages
* Add env var to suppress windowsgui arg
* Move hash functions into separate package
* Move identify to internal
* Move autotag to internal
* Touch UI when generating backend
* Continue identify if source fails
* Handle empty result set correctly
* Parse null values from scraper script correctly
* Omit warning when json selector value missing
* Return nil when scraped item not found
* Fix graphql validation errors
* Add duration to autotag finish message
* Don't sort scene/image/gallery queries when no sort is specified
* Use an LRU cache for sqlite regexp function
* Compile path separator regex once
* Cache objects with single letter first names
* Move finished auto-tag log
* Add more verbose logging
* Add new changelog
* Remove single unicode character from autotag query
* Compile regex once where possible
* Fix CPU profiling
* Only match unicode characters if in path
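The core of the regexp speedup, sketched with a plain map (the real code uses an LRU so the cache stays bounded): the sqlite `regexp` function runs once per row, so compiling per call dominates the cost.

```go
package sketch

import (
	"regexp"
	"sync"
)

var (
	mu    sync.Mutex
	cache = map[string]*regexp.Regexp{}
)

// match backs a sqlite `regexp` function. Caching the compiled pattern
// makes repeated per-row calls cheap.
func match(pattern, s string) (bool, error) {
	mu.Lock()
	re, ok := cache[pattern]
	mu.Unlock()
	if !ok {
		var err error
		re, err = regexp.Compile(pattern)
		if err != nil {
			return false, err
		}
		mu.Lock()
		cache[pattern] = re
		mu.Unlock()
	}
	return re.MatchString(s), nil
}
```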
* Delete funscripts while deleting scene
* Indicate that funscripts will be deleted
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* add InteractiveSpeed to scene model
* add InteractiveHeatmapSpeedGenerator
* add GenerateInteractiveHeatmapSpeedTask
* add InteractiveHeatmapSpeedTask to GenerateJob
* add InteractiveHeatmap on sceneRoutes
* delete heatmap when scene is destroyed
* render interactive heatmap in GridCard
* render InteractiveSpeed on SceneCard
* render InteractiveSpeed in SceneFileInfoPanel
* InteractiveSpeed filters
* Added joinType to join struct
* Added addInnerJoin function to perform INNER JOIN type of joins
* Added innerJoin function to perform INNER JOIN type of joins
* Use inner joins when querying images in a gallery
* Renamed addJoin to addLeftJoin
* Support a maxAge input on metadata scans.
Extend the GraphQL world with a Duration scalar. It is parsed as a
typical Go duration, i.e., "4h" is 4 hours. Alternatively, one can
pass an integer which is interpreted as seconds.
Extend Mutation.metadataScan(input: $input) to support a new optional
value, maxAge. If set, the scanner will exit early if the file it
is looking at has an mtime older than the cutoff point generated by
now() - maxAge.
This speeds up scanning in the case where the user knows how old the
changes on disk are, by exiting the scan early if that is the case.
* Change maxAge into minModTime
Introduce a `Timestamp` scalar, so we have a scalar we control. Let
it accept three formats:
* RFC3339Nano
* @UNIX where UNIX is a unix-timestamp: seconds after 01-01-1970
* '<4h': a timestamp relative to the current server time
This scalar parses to a time.Time.
Use MinModTime in the scanner to filter out a large number of scan
analyzes by exiting the scan operation early.
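A sketch of parsing the three accepted forms (note that a later commit below removes the @UNIX form again):

```go
package sketch

import (
	"strconv"
	"strings"
	"time"
)

// parseTimestamp accepts RFC3339Nano, "@UNIX" (seconds since
// 01-01-1970), and "<4h" (relative to the current server time).
func parseTimestamp(s string) (time.Time, error) {
	switch {
	case strings.HasPrefix(s, "<"):
		d, err := time.ParseDuration(s[1:])
		if err != nil {
			return time.Time{}, err
		}
		return time.Now().Add(-d), nil
	case strings.HasPrefix(s, "@"):
		secs, err := strconv.ParseInt(s[1:], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(secs, 0), nil
	default:
		return time.Parse(time.RFC3339Nano, s)
	}
}
```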
* Heed the linter, perform errcheck
* Rename test vars for consistency.
* Code review: move minModTime into queuefiles
* Remove the ability to input Unix timestamps
Test failures on the CI system explain why this is undesirable. It is
not clear what timezone one is operating in when entering a unix
timestamp. We could go with UTC, but it is so much easier to require an
RFC3339 timestamp, which avoids this problem entirely.
* Move the minModTime field into filters
Create a new filter input object for metadata scans, and push the
minModTime field in there. If we come up with new filters, they can
be added to that input object rather than cluttering the main input
object.
* Use utils.ParseDateStringAsTime
Replace time.Parse with utils.ParseDateStringAsTime
While here, add some more test cases for that parser.
* Push scrapeByURL into scrapers
Replace ScrapePerformerByURL, ScrapeMovie..., ... with ScrapeByURL in
the scraperActionImpl interface. This allows us to delete a lot of
repeated code in the scrapers and replace the central part with a
switch on the scraper type.
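Roughly the shape of the consolidation, with stand-in types (the real ones live in the models and scraper packages):

```go
package sketch

import (
	"context"
	"fmt"
)

// ScrapedContent stands in for the union of everything a scraper can
// return; concrete types are recovered with a type switch.
type ScrapedContent interface{}

type ScrapedPerformer struct{ Name string }
type ScrapedScene struct{ Title string }

// urlScraper replaces ScrapePerformerByURL, ScrapeSceneByURL, ... with
// a single method in the action interface.
type urlScraper interface {
	ScrapeByURL(ctx context.Context, url string) (ScrapedContent, error)
}

// handleScraped is the kind of central switch the per-type functions
// collapse into.
func handleScraped(content ScrapedContent) error {
	switch v := content.(type) {
	case *ScrapedPerformer:
		fmt.Println("performer:", v.Name)
	case *ScrapedScene:
		fmt.Println("scene:", v.Title)
	case nil:
		return fmt.Errorf("nothing scraped")
	default:
		return fmt.Errorf("unexpected content type %T", v)
	}
	return nil
}
```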
* Fold name scraping into one call
Follow up on scraper refactoring. Name scrapers use the same code path.
This allows us to restructure some code and kill some functions, adding
variance to the name scraping code. It allows us to remove some code
repetition as well.
* Do not export loop refs.
* Simplify fragment scraping
Generalize fragment scrapers into ScrapeByFragment. This simplifies
fragment code flows into a simpler pathing which should be easier
to handle in the future.
* Eliminate more context.TODO()
In a number of cases, we have a context now. Use the context rather than
TODO() for those cases in order to make those operations cancellable.
* Pass the context for the stashbox scraper
This removes all context.TODO() in the path of the stashbox scraper,
and replaces it with the context that's present on each of the paths.
* Pass the context into subscrapers
Mostly a mechanical update, where we pass in the context for
subscraping. This removes the final context.TODO() in the scraper
code.
* Warn on unknown fields from scripts
A common mistake for new script writers is returning fields not known
to stash; for instance, the name "description" is used rather than
"details".
Decode disallowing unknown fields. If this fails, use a tee-reader to
fall back to the old behavior, but print a warning for the user in this
case. Thus, we retain the old behavior, but print warnings for scripts
which fail the stricter unknown-fields detection.
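A sketch of that decode-then-fall-back flow (logging via the standard library here):

```go
package sketch

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
)

// decodeScriptOutput first decodes with unknown fields disallowed. On
// failure it warns and falls back to a lenient decode, replaying the
// bytes captured by the tee so the old behavior is preserved.
func decodeScriptOutput(r io.Reader, out interface{}) error {
	var seen bytes.Buffer
	strict := json.NewDecoder(io.TeeReader(r, &seen))
	strict.DisallowUnknownFields()

	err := strict.Decode(out)
	if err == nil {
		return nil
	}
	log.Printf("warning: scraper output failed strict decode: %v", err)

	// Replay everything read so far, then the rest of the stream.
	return json.NewDecoder(io.MultiReader(&seen, r)).Decode(out)
}
```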
* Nil-check before running the postprocessing chain
Fixes panics when scraping returns nil values.
* Lift nil-ness in post-postprocessing
If the struct we are trying to post-process is nil, we shouldn't
enter the postprocessing flow at all. Pass the struct as a value
rather than a pointer, eliminating nil-checks as we go. Use the
top-level postProcess call to make the nil-check and then abort there
if the object we are looking at is nil.
* Allow conversion routines to handle values
If we have a non-pointer type in the interface, we should also convert
those into ScrapedContent. Otherwise we get errors on deprecated
functions.
* Add scan dialog
* Add Auto Tag dialog
* Refactor and combine Generate dialog
* Add clean dialog
* Support scan task default setting
* Support saving auto tag defaults
* Support for generate defaults
* Simplify scraper listing
Introduce an enum, scraper.Kind, which explains what we are looking
for. Make it possible to match this from a scraper struct.
Use the enum to rewrite all the listing code to use the same code path.
* Use a map, nitpick ScrapePerformerList
Let the cache store a map from scraper ID to the scraper. This improves
lookups when there are many scrapers, making them practically O(1)
rather than O(n).
Since range expressions work unchanged over maps, we don't have to
change much, and things will still work.
Make Kind a Stringer.
Rename ScraperPerformerList -> ScraperPerformerQuery since that name
is used in the other scrapers, and we value consistency.
Tune ScraperPerformerQuery:
* Return static errors
* Use the new functionality
* When loading scrapers, do so directly
Rather than first walking the directory structure to obtain file paths,
fold the load directly into the filepath walk. This makes the code far
more direct.
* Use static ErrNotFound
If a scraper isn't found, return one static error. This paves the way
for eventually doing our own error-presenter in gqlgen.
* Store the cache in the Resolver state
Putting the scraperCache directly in the resolver avoids the need to
call manager.GetInstance() all over the place to get access to the
scraper cache. The cache is stored by pointer, so it should be safe,
since the cache will just update its internal state rather than being
overwritten.
We can now utilize the resolver state to grab the cache where needed.
While here, pass context.Context from the resolver down into a function,
which removes a context.TODO()
* Introduce ScrapedContent
Create a union in the GraphQL schema for all scraped content. This
simplifies the internal implementation because we get variance on
the output content type.
Introduce a new type ScrapedContentType which signifies the scraped
content you want as a caller.
Use these to generalize the List interface and the URL scraping
interface.
* Simplify the scraper API
Introduce a new interface for scraping. This interface is then
used in the upper half of the scraper code, to make the code use one
code flow rather than multiple code flows. Variance is currently at
the old scraper structure.
Add extending interfaces for the different ways of invoking scrapes.
Use interface conversions to convert a scraper from the cache to a
scraper supporting the extra methods.
The return path returns models.ScrapedContent.
Write a general postProcess function in the scraper, handling all
ScrapedContent via type switching. This consolidates all postprocessing
code flows.
Introduce marshallers in the resolver code for converting ScrapedContent
into the underlying concrete types. Use this to plug the existing
fields in the Query resolver, so everything still works.
* ScrapedContent: add more marshalling functions
Handle all marshalling of ScrapedContent through marshalling functions.
Removes some hand-rolled early variants of it, and replaces it with
a canonical code flow.
* Support loadByName via scraper_s
In order to temporarily plug a hole in the current implementation, we
use the older implementation as a hook to get the newer implementation
to run.
Later on, this can serve as a guide for how to implement the lower level
bits inside the scrapers themselves. For now, it just enables support.
* Plug the remaining scraper functions for now
Since we would like to have a scraper which works in between refactors,
plug the lower level parts of the scraper for now. It avoids us having
to tackle this part just yet.
* Move postprocessing to its own file
There's enough postprocessing to clutter the main scrapers.go file.
Move all of it into a new postprocessing file to keep the API simpler;
it no longer clutters scrapers.go.
* Scraper: Invoke API consistency
scraper.Cache.ScrapeByName -> ScrapeName
* Fix scraping scenes by URL
Simple typo. While here, also make a single marshaller nil-aware.
* Introduce scraper groups, consolidate loadByURL
Rename `scraper_s` into `group`. A group is a group of scrapers with
the same identity. This corresponds to a single YAML file for a scraper
configuration. It defines a group which supports different types of
scraping contexts.
Move config into the group, and lift txnManager and globalConfig to
the group.
Because we now return models.ScrapedContent we can use interfaces to
get variance from the different underlying scrapers. Use a type
switch for the URL matcher candidates. And then again for the scrapers.
This consolidates all URL scraping paths into one.
While here, remove the urlMatcher interface which isn't needed. Also
clean up the remaining interfaces for url scraping and delete code
which has no purpose anymore.
* Consolidate fragment scraping in one code path
While here, abide the linters checks.
* Refactor loadByFragment
Give it the same treatment as loadByURL:
Step 1: find a scraperActionImpl which works for the data.
Step 2: use that to scrape
Most of this is simple analysis on the data at hand. It can be pushed
down further in a later commit, but for now we leave it here.
* Remove configScraper, autotag is a scraper
Remove the remains of the configScraper struct. It now lives on in the
group struct. Kill the remaining interfaces from the old implementation
while here.
Remove group.specification since it can now be handled by a simple
func call to spec().
Work through the autotag scraper. It now implements the scraper
interface, so it can be used as a scraper. This also simplifies the
autotag scraper quite a bit since it doesn't have to implement a number
of unsupported func calls.
* Simplify the fragment scraper flow
* Pass the context
Eliminate a round of context.TODO() in the scraper code by passing
the calling context down into the subsystem. This will gracefully
allow for termination of remote calls if the client goes away for some
reason in GraphQL requests.
* Improve listScrapers in the schema
Support lists of types we accept.
* Be graceful on nil values in conversion
Supporting nil-values make the API more robust in the
case of partial results in a multi-scrape situation.
* Improve listScrapers: output at-most-once
Use the ID of a scraper to reduce the output set. If a scraper has
been included, don't include it again.
* Consolidate all API level errors into resolver.go
* Reorder files and functions:
  scrapers.go -> cache.go: it contains almost nothing but the cache
  code. Move the errors from here into scraper.go, because that is a
  better place for them to live right now.
  group.go: all of the group structure. This can now move out of
  scraper.go, making it leaner. Move group creation from config_scraper
  to here.
  config.go: move the `(c config) spec()` call to here.
  config_scraper.go: empty file by now.
* Name-update the scraper interfaces
Use 'via' rather than 'loadBy'.
The scrape happens via a given scrape method, so I think this is a nice
name for it.
* Rename scrapers for consistency.
While here, improve the error formatting, so different errors come
back differently.
* Nuke the freeones field from the GraphQL schema
* Fix autotag interfacing, refactor
The autotag scraper uses a pointer receiver, but the rest of the code
we use for scraping doesn't expect a pointer-receiver. Hence, to fix
the autotag scraper, we change it to be a value receiver, like the
rest of the code.
Fix: viaScene, and viaGallery.
While here, remove a couple of pointer-receiver methods which can be
trivially rewritten into plain functions.
* Protect against pointer interfaces
The underlying code can be a bit inconsistent in what it returns.
Introduce pointer-types in the postprocessing layer and handle them
accordingly for now. Once the lower levels are better understood, we
can lift this.
* Move ErrConversion into the models package.
The conversion error pertains to the logic of converting models.
Because of this, it should move there, so it is centralized.
* Be consistent in scraper resolver error handling
If we have a static error
Err = errors.New(..)
Then use it wrapped at the start:
fmt.Errorf("%w: ...context...", Err)
This reads better.
While here, avoid using the underlying Atoi errors: they are verbose,
and 99% of the time the user knows what is wrong from the input string,
so just give that back.
Also, remove the scraper id from the error contexts: it is implicit,
and the error wouldn't change if we used a different scraper, which
the error message would imply.
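The convention in a few lines:

```go
package sketch

import (
	"errors"
	"fmt"
)

var ErrNotFound = errors.New("scraper not found")

// find wraps the static error at the start of the message, so callers
// can test it with errors.Is(err, ErrNotFound) and the text reads well.
func find(id string) error {
	return fmt.Errorf("%w: id %q", ErrNotFound, id)
}
```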
* Mark the list*Scrapers() API as deprecated
The same functionality is now present in listScrapers.
* Improve error formatting
Think about how each error is going to be used and tweak them to be
nicer.
* Return a sorted list of scrapers
This helps testing, it's closer to what we had, caches like stable data,
and it is easier for humans. It also makes the output stable, because
map iteration is randomized.
* Fix listScrapers calls to return in ID-order
Since we need the ordering to be by ID in all situations, it is easier
to just generalize the cache listScrapers call to support multiple
scraper types.
This avoids a de-dupe map up the chain, since every scraper is only
considered once. Sorting now happens in the cache listScrapers call.
Use this generalized function in all resolvers, which are now simple
passthroughs.
* Remove UpdateConfig from the scraper cache.
This isn't needed, so get rid of it.
* Pull a context into identify
Scraping scenes in the identify tasks now use a context from up the
call chain.
* Do not store the scraper cache in the resolver.
Scraper caches are updated through
manager.singleton.RefreshScraperCache, so we can't keep a pointer to
it in the resolver. Instead, solve this by adding a fetcher method to
the resolver type. This keeps it local to the resolver, while handling
the problem of updating caches in the configuration.
* Separate overrides from config
* Don't allow changing overridden value
* Write default host and port to config file
* Use existing library value. Hide generated if set
* Support Is (not) null for all multi criterions
Add support for the Is null and Is not null modifiers for all cases of
the MultiCriterionInput and HierarchicalMultiCriterionInput. This
partially overlaps the "X Count" filter which sometimes is available
(because it would be the same as "X Count equals 0" and "X Count greater
than 0") but this also enables it for other criterions like the "Parent
Studio" filter for studios or just the "Studios" filter for scenes /
images / galleries, the "Movies" filter for scenes etc.
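For the many-to-many cases, the modifiers amount to an emptiness check on the join table; a sketch for a scene "Tags" criterion (join-table name illustrative):

```go
package sketch

// tagsNullClause returns the WHERE fragment for the two modifiers:
// IS_NULL matches scenes with no tags, NOT_NULL matches scenes with at
// least one.
func tagsNullClause(modifier string) string {
	sub := "SELECT 1 FROM scenes_tags st WHERE st.scene_id = scenes.id"
	switch modifier {
	case "IS_NULL":
		return "NOT EXISTS (" + sub + ")"
	case "NOT_NULL":
		return "EXISTS (" + sub + ")"
	}
	return ""
}
```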
* Don't crash UI on bad saved filter
* Add missing code for tag parent/child
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* update tag hierarchy validation
* refactor MergeHierarchy
* update tag hierarchy error message
* rename tag hierarchy function
* add tag path to error message
* Rename EnsureHierarchy to ValidateHierarchy
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* add delete file and generated files by default config options
* add alert message with files to be deleted
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
The version checking code performs its own error management and will
not pass errors to the caller. Hence, it needs to be aware of the types
of errors which can be returned.
In particular, the context.Canceled error will be returned if the
context is aborted through cancellation. This happens when the request
is terminated by pressing CTRL-C, or if the browser request is
terminated while we are sitting waiting for the GH API.
* Docker CI builds: half the size, less than half the build time
* Add an "Official Build" Designator
* Fix .git constantly invalidating build cache, use distro ffmpeg
* Fix official build detection, add some compiler image docs
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>