* Delete funscripts while deleting scene
* Indicate that funscripts will be deleted
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* add InteractiveSpeed to scene model
* add InteractiveHeatmapSpeedGenerator
* add GenerateInteractiveHeatmapSpeedTask
* add InteractiveHeatmapSpeedTask to GenerateJob
* add InteractiveHeatmap on sceneRoutes
* delete heatmap when scene is destroyed
* render interactive heatmap in GridCard
* render InteractiveSpeed on SceneCard
* render InteractiveSpeed in SceneFileInfoPanel
* InteractiveSpeed filters
* Added joinType to join struct
* Added addInnerJoin function to perform INNER JOIN type of joins
* Added innerJoin function to perform INNER JOIN type of joins
* Use inner joins when querying images in a gallery
* Renamed addJoin to addLeftJoin
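As a minimal sketch of what such a join builder could look like (the struct fields and exact signatures here are assumptions; only the names addLeftJoin/addInnerJoin come from the notes above):

```go
package sqlite

import (
	"fmt"
	"strings"
)

// join is a sketch of a join description that carries its join type; the
// field names and builder shape are assumptions.
type join struct {
	table    string
	onClause string
	joinType string // "LEFT" or "INNER"
}

type queryBuilder struct {
	joins []join
}

// addLeftJoin appends a LEFT JOIN (previously addJoin).
func (qb *queryBuilder) addLeftJoin(table, onClause string) {
	qb.joins = append(qb.joins, join{table: table, onClause: onClause, joinType: "LEFT"})
}

// addInnerJoin appends an INNER JOIN, used e.g. when querying images in a gallery.
func (qb *queryBuilder) addInnerJoin(table, onClause string) {
	qb.joins = append(qb.joins, join{table: table, onClause: onClause, joinType: "INNER"})
}

// joinSQL renders the accumulated joins.
func (qb *queryBuilder) joinSQL() string {
	var sb strings.Builder
	for _, j := range qb.joins {
		fmt.Fprintf(&sb, " %s JOIN %s ON %s", j.joinType, j.table, j.onClause)
	}
	return sb.String()
}
```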
* Support a maxAge input on metadata scans.
Extend the GraphQL world with a Duration scalar. It is parsed as a
typical Go duration, e.g., "4h" is 4 hours. Alternatively, one can
pass an integer, which is interpreted as seconds.
Extend Mutation.metadataScan(input: $input) to support a new optional
value, maxAge. If set, the scanner exits a file's scan early when that
file's mtime is older than the cutoff point now() - maxAge.
This speeds up scanning when the user knows how old the changes on
disk are, since older, unchanged files are skipped.
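As a rough sketch (not the actual scalar binding), parsing such a Duration value in Go could look like the following; the function name and the set of accepted input types are assumptions:

```go
package api

import (
	"fmt"
	"strconv"
	"time"
)

// parseDuration is an illustrative sketch of how the Duration scalar could
// be unmarshalled: either a Go-style duration string ("4h") or an integer
// interpreted as seconds.
func parseDuration(v interface{}) (time.Duration, error) {
	switch val := v.(type) {
	case string:
		// Try a Go duration first, e.g. "4h" or "90m".
		if d, err := time.ParseDuration(val); err == nil {
			return d, nil
		}
		// Fall back to an integer encoded as a string.
		if secs, err := strconv.ParseInt(val, 10, 64); err == nil {
			return time.Duration(secs) * time.Second, nil
		}
		return 0, fmt.Errorf("invalid duration: %q", val)
	case int:
		return time.Duration(val) * time.Second, nil
	case int64:
		return time.Duration(val) * time.Second, nil
	default:
		return 0, fmt.Errorf("unsupported duration value of type %T", v)
	}
}
```

The scanner would then compute the cutoff as time.Now().Add(-maxAge) and skip any file whose mtime falls before it.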
* Change maxAge into minModTime
Introduce a `Timestamp` scalar, so we have a scalar we control. Let
it accept three formats:
* RFC3339Nano
* @UNIX, where UNIX is a Unix timestamp: seconds since 1970-01-01
* '<4h': a timestamp relative to the current server time
This scalar parses to a time.Time.
Use MinModTime in the scanner to filter out a large number of scan
analyses by exiting the scan operation early.
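A minimal sketch of how the three Timestamp formats could be parsed into a time.Time; the function name and error handling are assumptions (and note the @UNIX form is removed again further down):

```go
package api

import (
	"fmt"
	"strings"
	"time"
)

// parseTimestamp sketches the Timestamp scalar: RFC3339Nano, "@<unix seconds>",
// or "<<duration>" relative to the current server time.
func parseTimestamp(s string) (time.Time, error) {
	switch {
	case strings.HasPrefix(s, "@"):
		// "@UNIX": seconds since 1970-01-01 (this form is removed again below).
		var secs int64
		if _, err := fmt.Sscanf(s, "@%d", &secs); err != nil {
			return time.Time{}, fmt.Errorf("invalid unix timestamp %q: %w", s, err)
		}
		return time.Unix(secs, 0), nil
	case strings.HasPrefix(s, "<"):
		// "<4h": four hours before the current server time.
		d, err := time.ParseDuration(strings.TrimPrefix(s, "<"))
		if err != nil {
			return time.Time{}, fmt.Errorf("invalid relative timestamp %q: %w", s, err)
		}
		return time.Now().Add(-d), nil
	default:
		return time.Parse(time.RFC3339Nano, s)
	}
}
```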
* Heed the linter, perform errcheck
* Rename test vars for consistency.
* Code review: move minModTime into queuefiles
* Remove the ability to input Unix timestamps
Test failures on the CI system explain why this is undesirable. It is
not clear which timezone one is operating in when entering a Unix
timestamp. We could go with UTC, but it is much easier to require an
RFC3339 timestamp, which avoids this problem entirely.
* Move the minModTime field into filters
Create a new filter input object for metadata scans, and push the
minModTime field in there. If we come up with new filters, they can
be added to that input object rather than cluttering the main input
object.
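Sketched as Go structs, the shape described above might look like the following; the type and field names are assumptions, not the verified schema bindings:

```go
package models

import "time"

// ScanMetadataFilterInput sketches the new filter input object for scans.
type ScanMetadataFilterInput struct {
	// MinModTime: skip files whose modification time is older than this value.
	MinModTime *time.Time `json:"minModTime"`
}

// ScanMetadataInput gains an optional filter object instead of a top-level field.
type ScanMetadataInput struct {
	Paths   []string                 `json:"paths"`
	Filters *ScanMetadataFilterInput `json:"filters"`
}
```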
* Use utils.ParseDateStringAsTime
Replace time.Parse with utils.ParseDateStringAsTime
While here, add some more test cases for that parser.
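As an illustration of such test cases, a table-driven sketch is below; the import path, the exact signature of utils.ParseDateStringAsTime, and which input formats it accepts are all assumptions here:

```go
package utils_test

import (
	"testing"

	"github.com/stashapp/stash/pkg/utils"
)

// A sketch of extra table-driven cases for the parser.
func TestParseDateStringAsTime(t *testing.T) {
	cases := []string{
		"2021-06-01",
		"2021-06-01 12:30:45",
		"2021-06-01T12:30:45Z",
	}
	for _, c := range cases {
		if _, err := utils.ParseDateStringAsTime(c); err != nil {
			t.Errorf("ParseDateStringAsTime(%q) returned error: %v", c, err)
		}
	}
}
```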
* Push scrapeByURL into scrapers
Replace ScrapePerformerByURL, ScrapeMovie..., ... with ScrapeByURL in
the scraperActionImpl interface. This allows us to delete a lot of
repeated code in the scrapers and replace the central part with a
switch on the scraper type.
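A rough sketch of the resulting shape: one scrapeByURL entry point plus a switch on the requested content type. The type names and helper functions below are illustrative stand-ins, not the actual interface:

```go
package scraper

import "fmt"

// ScrapeContentType and ScrapedContent are stand-ins for the shared
// request/response types.
type ScrapeContentType string

const (
	ScrapeContentTypePerformer ScrapeContentType = "PERFORMER"
	ScrapeContentTypeScene     ScrapeContentType = "SCENE"
)

type ScrapedContent interface{}

// scrapeByURL replaces the per-type ScrapePerformerByURL/ScrapeSceneByURL/...
// methods with a single entry point that switches on the content type.
func scrapeByURL(url string, ty ScrapeContentType) (ScrapedContent, error) {
	switch ty {
	case ScrapeContentTypePerformer:
		return scrapePerformerURL(url)
	case ScrapeContentTypeScene:
		return scrapeSceneURL(url)
	default:
		return nil, fmt.Errorf("unsupported content type %q", ty)
	}
}

// The per-type helpers below are placeholders for the scraper-specific work.
func scrapePerformerURL(url string) (ScrapedContent, error) { return nil, nil }
func scrapeSceneURL(url string) (ScrapedContent, error)     { return nil, nil }
```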
* Fold name scraping into one call
Follow up on the scraper refactoring. Name scrapers now use the same code
path. This allows us to restructure some code and remove some functions,
handling the variance between scrapers inside the shared name-scraping
code. It removes some code repetition as well.
* Do not export loop refs.
* Simplify fragment scraping
Generalize fragment scrapers into ScrapeByFragment. This collapses the
fragment code flows into a single, simpler path, which should be easier
to handle in the future.
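Sketched as an interface, the generalized fragment path might look like this; the Input shape and method signature are assumptions:

```go
package scraper

import "context"

// ScrapedContent and Input are stand-ins for the shared types.
type ScrapedContent interface{}

// Input carries the partially filled fragment sent from the UI.
type Input struct {
	Performer map[string]interface{}
	Scene     map[string]interface{}
	Gallery   map[string]interface{}
}

// fragmentScraper sketches the generalized entry point: one ScrapeByFragment
// method instead of a per-type fragment scraper.
type fragmentScraper interface {
	ScrapeByFragment(ctx context.Context, input Input) (ScrapedContent, error)
}
```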
* Eliminate more context.TODO()
In a number of cases, we have a context now. Use the context rather than
TODO() for those cases in order to make those operations cancellable.
* Pass the context for the stashbox scraper
This removes all context.TODO() in the path of the stashbox scraper,
and replaces it with the context that's present on each of the paths.
* Pass the context into subscrapers
Mostly a mechanical update, where we pass in the context for
subscraping. This removes the final context.TODO() in the scraper
code.
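A minimal sketch of the pattern: thread the caller's context through sub-scrapes instead of context.TODO(), so cancellation propagates. The types and names here are illustrative:

```go
package scraper

import "context"

// subScraper is a stand-in for anything that performs a sub-scrape.
type subScraper interface {
	scrape(ctx context.Context, url string) (interface{}, error)
}

// scrapeAll passes the caller's ctx down instead of context.TODO(), so
// cancelling the request aborts the remaining sub-scrapes.
func scrapeAll(ctx context.Context, sub subScraper, urls []string) ([]interface{}, error) {
	var out []interface{}
	for _, u := range urls {
		if err := ctx.Err(); err != nil {
			// The caller cancelled; stop instead of scraping the rest.
			return nil, err
		}
		r, err := sub.scrape(ctx, u) // previously: sub.scrape(context.TODO(), u)
		if err != nil {
			return nil, err
		}
		out = append(out, r)
	}
	return out, nil
}
```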
* Warn on unknown fields from scripts
A common mistake for new script writers is to return fields not known
to stash; for instance, the name "description" is used rather than
"details".
Decode while disallowing unknown fields. If this fails, use a tee-reader
to fall back to the old behavior, but print a warning for the user in
this case. Thus, we retain the old behavior while printing warnings for
scripts that fail the stricter unknown-fields detection.
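A sketch of the decode-with-fallback described above, assuming JSON output from the script; the function name and the logger are placeholders:

```go
package scraper

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
)

// decodeScriptOutput first decodes with DisallowUnknownFields; on failure it
// retries leniently from the tee'd copy and warns if the lenient decode succeeds.
func decodeScriptOutput(r io.Reader, out interface{}) error {
	var buf bytes.Buffer
	tee := io.TeeReader(r, &buf)

	strict := json.NewDecoder(tee)
	strict.DisallowUnknownFields()
	if err := strict.Decode(out); err == nil {
		return nil
	}

	// Drain whatever the strict decoder did not read, so the buffered copy
	// holds the complete document.
	if _, err := io.Copy(io.Discard, tee); err != nil {
		return err
	}

	// Old behavior: decode allowing unknown fields.
	if err := json.NewDecoder(&buf).Decode(out); err != nil {
		return err
	}

	log.Print("warning: script returned fields not known to stash (e.g. \"description\" rather than \"details\")")
	return nil
}
```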
* Nil-check before running the postprocessing chain
Fixes panics when scraping returns nil values.
* Lift nil-ness into postprocessing
If the struct we are trying to post-process is nil, we shouldn't
enter the postprocessing flow at all. Pass the struct as a value
rather than a pointer, eliminating nil-checks as we go. Use the
top-level postProcess call to make the nil-check and then abort there
if the object we are looking at is nil.
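A small sketch of the top-level check; the function and type names are stand-ins:

```go
package scraper

import "context"

// ScrapedContent stands in for the common scraped-content type.
type ScrapedContent interface{}

// postProcess makes the nil-check once, at the top: if the scrape produced
// nothing, skip the postprocessing chain entirely instead of nil-checking
// at every step further down.
func postProcess(ctx context.Context, content ScrapedContent) (ScrapedContent, error) {
	if content == nil {
		return nil, nil
	}
	// ... run tag/performer/studio postprocessing on content here ...
	return content, nil
}
```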
* Allow conversion routines to handle values
If the interface holds a non-pointer type, we should also convert it
into ScrapedContent. Otherwise we get errors from the deprecated
functions.
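For illustration, a conversion routine handling both pointer and value forms could look like this; the model type and function name are stand-ins:

```go
package scraper

import "fmt"

// scrapedPerformer and ScrapedContent are stand-ins for the real model and
// the common scraped-content type.
type scrapedPerformer struct {
	Name *string
}

type ScrapedContent interface{}

// toScrapedContent accepts both the pointer and the value form, so call
// sites on the deprecated paths no longer error out.
func toScrapedContent(v interface{}) (ScrapedContent, error) {
	switch val := v.(type) {
	case *scrapedPerformer:
		return *val, nil
	case scrapedPerformer:
		return val, nil
	default:
		return nil, fmt.Errorf("cannot convert %T to ScrapedContent", v)
	}
}
```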