Commit graph

30 commits

Author SHA1 Message Date
its-josh4
2b8c2534dd
Update a number of dependencies (incl. CVE fixes) (#4107)
* Update a number of dependencies (incl. CVE fixes)

Includes some dependencies that were upgraded in #4106, as well as a few more.

Some of the upgraded deps had CVEs.

Notably, upgrades deprecated dependencies such as:
- `github.com/go-chi/chi` (replaced with `/v5`)
- `github.com/gofrs/uuid` (replaced with `/v5`)
- `github.com/hashicorp/golang-lru` (replaced with `/v2` which uses generics)
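
Of these, the `/v2` golang-lru upgrade is the most visible since its API is now generic; a minimal sketch of the new call shape (key/value types and cache size are arbitrary here):

    package main

    import (
        "fmt"

        lru "github.com/hashicorp/golang-lru/v2"
    )

    func main() {
        // v2 is generic: key and value types are checked at compile
        // time, where v1 used interface{} for both.
        cache, err := lru.New[string, int](128)
        if err != nil {
            panic(err)
        }
        cache.Add("answer", 42)
        if v, ok := cache.Get("answer"); ok {
            fmt.Println(v) // 42
        }
    }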

* Upgraded a few more deps

* lint

* reverted yaml library to v2

* remove unnecessary mod replace

* Update chromedp

Fixes #3733
2023-10-26 16:24:32 +11:00
WithoutPants
2fd7141f0f
Javascript scraper postprocess (#4200)
* Add javascript post-process action
* Add documentation
2023-10-16 17:17:36 +11:00
WithoutPants
a1da626c9f
Return scrape results if only relationships are returned (#3954)
* Handle scene scrape results where basic fields unset
* Apply fix to other types
* Show scrape dialog if only new items scraped
2023-07-27 19:50:25 +10:00
bnkai
d80ec1d7a1
Fix scene studio results when doing a search scrape (#3246) 2023-01-30 09:40:53 +11:00
JackDawson94
554448594c
Add unix timestamp parsing to scrapers parseDate (#2817)
* Add unix timestamp parsing to scrapers parseDate
* Add documentation
* Update ScraperDevelopment.md
* Add unit test
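
A sketch of the mechanism (the helper name is illustrative, not the actual implementation): values that parse cleanly as integers are treated as unix timestamps.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseDateValue treats numeric strings as unix timestamps; anything
    // else would fall through to the existing layout-based date parsing.
    func parseDateValue(value string) (string, bool) {
        ts, err := strconv.ParseInt(value, 10, 64)
        if err != nil {
            return "", false
        }
        return time.Unix(ts, 0).UTC().Format("2006-01-02"), true
    }

    func main() {
        if d, ok := parseDateValue("1664517356"); ok {
            fmt.Println(d) // 2022-09-30
        }
    }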

Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
2022-09-30 15:35:56 +10:00
WithoutPants
7b5bd80515
Separate graphql API from rest of the system (#2503)
* Move graphql generated files to api
* Refactor identify options
* Remove models.StashBoxes
* Move ScraperSource to scraper package
* Rename field strategy enums
* Rename identify.TaskOptions to Options
2022-09-06 07:03:40 +00:00
WithoutPants
f69bd8a94f
Restructure go project (#2356)
* Move main to cmd
* Move api to internal
* Move logger and manager to internal
* Move shell hiding code to separate package
* Decouple job from desktop and utils
* Decouple session from config
* Move static into internal
* Decouple config from dlna
* Move desktop to internal
* Move dlna to internal
* Decouple remaining packages from config
* Move config into internal
* Move jsonschema and paths to models
* Make ffmpeg functions private
* Move file utility methods into fsutil package
* Move symwalk into fsutil
* Move single-use util functions into client package
* Move slice functions to separate packages
* Add env var to suppress windowsgui arg
* Move hash functions into separate package
* Move identify to internal
* Move autotag to internal
* Touch UI when generating backend
2022-03-17 11:33:59 +11:00
WithoutPants
9e3d56b22f
Fix identify and script scraper bugs (#2375)
* Continue identify if source fails
* Handle empty result set correctly
* Parse null values from scraper script correctly
* Omit warning when json selector value missing
* Return nil when scraped item not found
* Fix graphql validation errors
2022-03-15 09:42:22 +11:00
Releck
22321c2b62
Fix performer tags not applying on scene scrapers (#2339) 2022-02-22 10:18:29 +11:00
bnkai
66dd239732
Skip cleaning for search by name scrape queries (#2059)
* Skip pp for search by name queries
* upgrade htmlquery
2021-12-16 11:18:39 +11:00
SmallCoccinelle
4089fcf1e2
Scraper refactor middle (#2043)
* Push scrapeByURL into scrapers

Replace ScrapePerformerByURL, ScrapeMovie..., ... with ScrapeByURL in
the scraperActionImpl interface. This allows us to delete a lot of
repeated code in the scrapers and replace the central part with a
switch on the scraper type.
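
A sketch of the consolidated shape (the exact signature in the real code may differ):

    package scraper

    import "github.com/stashapp/stash/pkg/models"

    // Before, scraperActionImpl carried one method per content type
    // (performer, scene, movie, ...). Now a single entry point takes
    // the desired content type, and the central code switches on it.
    type scraperActionImpl interface {
        scrapeByURL(url string, ty models.ScrapedContentType) (models.ScrapedContent, error)
    }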

* Fold name scraping into one call

Follow up on scraper refactoring. Name scrapers use the same code path.
This allows us to restructure some code and kill some functions, adding
variance to the name scraping code. It allows us to remove some code
repetition as well.

* Do not export loop refs.

* Simplify fragment scraping

Generalize fragment scrapers into ScrapeByFragment. This simplifies
fragment code flows into a simpler pathing which should be easier
to handle in the future.

* Eliminate more context.TODO()

In a number of cases, we have a context now. Use the context rather than
TODO() for those cases in order to make those operations cancellable.

* Pass the context for the stashbox scraper

This removes all context.TODO() in the path of the stashbox scraper,
and replaces it with the context that's present on each of the paths.

* Pass the context into subscrapers

Mostly a mechanical update, where we pass in the context for
subscraping. This removes the final context.TODO() in the scraper
code.

* Warn on unknown fields from scripts

A common mistake for new script writers is returning fields not known
to stash. For instance, the name "description" is used rather than
"details".

Decode disallowing unknown fields. If this fails, use a tee-reader to
fall back to the old behavior, but print a warning for the user in this
case. Thus, we retain the old behavior, but print warnings for scripts
which fail the stricter unknown-fields detection.
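
A minimal sketch of the tee-reader fallback described here (helper and logging are illustrative, not the actual stash code):

    package main

    import (
        "bytes"
        "encoding/json"
        "io"
        "log"
        "strings"
    )

    type scrapedPerformer struct {
        Name    string `json:"name"`
        Details string `json:"details"`
    }

    // decodeStrict tries a strict decode first; on failure it warns and
    // re-decodes leniently from the bytes the strict pass consumed plus
    // whatever remains unread.
    func decodeStrict(r io.Reader, out interface{}) error {
        var buf bytes.Buffer
        dec := json.NewDecoder(io.TeeReader(r, &buf))
        dec.DisallowUnknownFields()
        err := dec.Decode(out)
        if err == nil {
            return nil
        }
        log.Printf("warning: scraper output failed strict decode: %v", err)
        return json.NewDecoder(io.MultiReader(&buf, r)).Decode(out)
    }

    func main() {
        // "description" is not a known field: the strict pass warns,
        // the lenient pass still succeeds.
        in := strings.NewReader(`{"name": "x", "description": "oops"}`)
        var p scrapedPerformer
        if err := decodeStrict(in, &p); err != nil {
            log.Fatal(err)
        }
        log.Println(p.Name)
    }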

* Nil-check before running the postprocessing chain

Fixes panics when scraping returns nil values.

* Lift nil-ness in post-postprocessing

If the struct we are trying to post-process is nil, we shouldn't
enter the postprocessing flow at all. Pass the struct as a value
rather than a pointer, eliminating nil-checks as we go. Use the
top-level postProcess call to make the nil-check and then abort there
if the object we are looking at is nil.

* Allow conversion routines to handle values

If we have a non-pointer type in the interface, we should also convert
those into ScrapedContent. Otherwise we get errors on deprecated
functions.
2021-11-26 11:20:06 +11:00
SmallCoccinelle
c1f89611e2
Refactor scraper top half (#1893)
* Simplify scraper listing

Introduce an enum, scraper.Kind, which explains what we are looking
for. Make it possible to match this from a scraper struct.

Use the enum to rewrite all the listing code to use the same code path.

* Use a map, nitpick ScrapePerformerList

Let the cache store a map from the ID of a scraper to the scraper. This
makes lookups practically O(1) rather than O(n), which matters when many
scrapers are stored.

Since range expressions work unchanged, we don't have to change much,
and things will still work.
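
Sketched (field and type names hypothetical):

    package scraper

    type scraper struct{ ID string }

    // Before, a slice of scrapers forced a linear scan per lookup.
    // A map keyed by scraper ID makes the lookup practically O(1).
    type cache struct {
        scrapers map[string]scraper
    }

    func (c cache) get(id string) (scraper, bool) {
        s, ok := c.scrapers[id]
        return s, ok
    }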

make Kind a Stringer

Rename ScraperPerformerList -> ScraperPerformerQuery since that name
is used in the other scrapers, and we value consistency.

Tune ScraperPerformerQuery:

* Return static errors
* Use the new functionality

* When loading scrapers, do so directly

Rather than first walking the directory structure to obtain file paths,
fold the load directly into the filepath walk. This makes the code far
more direct.

* Use static ErrNotFound

If a scraper isn't found, return one static error. This paves the way
for eventually doing our own error-presenter in gqlgen.

* Store the cache in the Resolver state

Putting the scraperCache directly in the resolver avoids the need to
call manager.GetInstance() all over the place to get access to the
scraper cache. The cache is stored by pointer, so it should be safe,
since the cache will just update its internal state rather than being
overwritten.

We can now utilize the resolver state to grab the cache where needed.

While here, pass context.Context from the resolver down into a function,
which removes a context.TODO()

* Introduce ScrapedContent

Create a union in the GraphQL schema for all scraped content. This
simplifies the internal implementation because we get variance on
the output content type.

Introduce a new type ScrapedContentType which signifies the scraped
content you want as a caller.

Use these to generalize the List interface and the URL scraping
interface.

* Simplify the scraper API

Introduce a new interface for scraping. This interface is then
used in the upper half of the scraper code, to make the code use one
code flow rather than multiple code flows. Variance is currently at
the old scraper structure.

Add extending interfaces for the different ways of invoking scrapes.
Use interface conversions to convert a scraper from the cache to a
scraper supporting the extra methods.

The return path returns models.ScrapedContent.

Write a general postProcess function in the scraper, handling all
ScrapedContent via type switching. This consolidates all postprocessing
code flows.
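
The consolidated flow, sketched with stand-in types (the real union members live in the models package):

    package main

    import "fmt"

    // ScrapedContent stands in for the GraphQL union: any scraped type.
    type ScrapedContent interface{}

    type ScrapedPerformer struct{ Name string }
    type ScrapedScene struct{ Title string }

    // postProcess replaces one code path per content type with a
    // single type switch over the union.
    func postProcess(content ScrapedContent) (ScrapedContent, error) {
        switch v := content.(type) {
        case *ScrapedPerformer:
            // performer-specific postprocessing would run here
            return v, nil
        case *ScrapedScene:
            // scene-specific postprocessing would run here
            return v, nil
        default:
            return nil, fmt.Errorf("unsupported content type %T", content)
        }
    }

    func main() {
        out, err := postProcess(&ScrapedPerformer{Name: "example"})
        fmt.Println(out, err)
    }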

Introduce marshallers in the resolver code for converting ScrapedContent
into the underlying concrete types. Use this to plug the existing
fields in the Query resolver, so everything still works.

* ScrapedContent: add more marshalling functions

Handle all marshalling of ScrapedContent through marshalling functions.
Removes some hand-rolled early variants of it, and replaces it with
a canonical code flow.

* Support loadByName via scraper_s

In order to temporarily plug a hole in the current implementation, we
use the older implementation as a hook to get the newer implementation
to run.

Later on, this can serve as a guide for how to implement the lower level
bits inside the scrapers themselves. For now, it just enables support.

* Plug the remaining scraper functions for now

Since we would like to have a scraper which works in between refactors,
plug the lower level parts of the scraper for now. It avoids us having
to tackle this part just yet.

* Move postprocessing to its own file

There's enough postprocessing to clutter the main scrapers.go file.

Move all of this into a new file, postprocessing.go, to make the API
simpler; it no longer lives in scrapers.go.

* Scraper: Invoke API consistency

scraper.Cache.ScrapeByName -> ScrapeName

* Fix scraping scenes by URL

Simple typo. While here, also make a single marshaller nil-aware.

* Introduce scraper groups, consolidate loadByURL

Rename `scraper_s` into `group`. A group is a group of scrapers with
the same identity. This corresponds to a single YAML file for a scraper
configuration. It defines a group which supports different types of
scraping contexts.

Move config into the group, and lift txnManager and globalConfig to
the group.

Because we now return models.ScrapedContent we can use interfaces to
get variance from the different underlying scrapers. Use a type
switch for the URL matcher candidates. And then again for the scrapers.

This consolidates all URL scraping paths into one.

While here, remove the urlMatcher interface which isn't needed. Also
clean up the remaining interfaces for url scraping and delete code
which has no purpose anymore.

* Consolidate fragment scraping in one code path

While here, abide the linters checks.

* Refactor loadByFragment

Give it the same treatment as loadByURL:

Step 1: find a scraperActionImpl which works for the data.
Step 2: use that to scrape

Most of this is simple analysis on the data at hand. It can be pushed
down further in a later commit, but for now we leave it here.

* Remove configScraper, autotag is a scraper

Remove the remains of the configScraper struct. It now lives on in the
group struct. Kill the remaining interfaces from the old implementation
while here.

Remove group.specification since it can now be handled by a simple
func call to spec().

Work through the autotag scraper. It now implements the scraper
interface, so it can be used as a scraper. This also simplifies the
autotag scraper quite a bit since it doesn't have to implement a number
of unsupported func calls.

* Simplify the fragment scraper flow

* Pass the context

Eliminate a round of context.TODO() in the scraper code by passing
the calling context down into the subsystem. This will gracefully
allow for termination of remote calls if the client goes away for some
reason in GraphQL requests.

* Improve listScrapers in the schema

Support lists of types we accept.

* Be graceful on nil values in conversion

Supporting nil-values make the API more robust in the
case of partial results in a multi-scrape situation.

* Improve listScrapers: output at-most-once

Use the ID of a scraper to reduce the output set. If a scraper has
been included, don't include it again.

* Consolidate all API level errors into resolver.go
* Reorder files and functions:

scrapers.go -> cache.go:
    It now contains almost nothing but the cache code.
    Move errors from here into scraper.go, because that
    is a better place for them to live right now
group.go:
    All of the group structure. This can now go from
    scraper.go, making it more lean. Move group create
    from config_scraper to here.
config.go:
    Move the `(c config) spec()` call to here.
config_scraper.go:
    Empty file by now

* Name-update the scraper interfaces

Use 'via' rather than 'loadBy'.

The scrape happens via a given scrape method, so I think this is a nice
name for it.

* Rename scrapers for consistency.

While here, improve the error formatting, so different errors come
back differently.

* Nuke the freeones field from the GraphQL schema

* Fix autotag interfacing, refactor

The autotag scraper uses a pointer receiver, but the rest of the code
we use for scraping doesn't expect a pointer-receiver. Hence, to fix
the autotag scraper, we change it to be a value receiver, like the
rest of the code.

Fix: viaScene, and viaGallery.

While here, remove a couple of pointer-receiver methods which can be
trivially rewritten into plain functions.

* Protect against pointer interfaces

The underlying code can be a bit inconsistent in what it returns.
Introduce pointer-types in the postprocessing layer and handle them
accordingly for now. Once the lower levels are better understood, we can
lift this.

* Move ErrConversion into the models package.

The conversion error pertains to the logic of converting models.
Because of this, it should move there, so it is centralized.

* Be consistent in scraper resolver error handling

If we have a static error

    Err = errors.New(..)

Then use it wrapped at the start:

    fmt.Errorf("%w: ...context...", Err)

This reads better.

While here, avoid using the underlying Atoi errors: they are verbose,
and like 99% of the time, the user knows what is wrong from the input
string, so just give that back.

Also, remove the scraper id from the error contexts: it is implicit,
and the error wouldn't change if we used a different scraper, which
the error message would imply.

* Mark the list*Scrapers() API as deprecated

The same functionality is now present in listScrapers.

* Improve error formatting

Think about how each error is going to be used and tweak them to be
nicer.

* Return a sorted list of scrapers

This helps testing, it's closer to what we had, caches like stable data,
and it is easier for humans. It also makes the output stable, because
map iteration is randomized.

* Fix listScrapers calls to return in ID-order

Since we need the ordering to be by ID in all situations, it is easier
to just generalize the cache listScrapers call to support multiple
scraper types.

This avoids a de-dupe map up the chain, since every scraper is only
considered once. Sorting now happens in the cache listScrapers call.

Use this generalized function in all resolvers, which are now simple
passthroughs.

* Remove UpdateConfig from the scraper cache.

This isn't needed, so get rid of it.

* Pull a context into identify

Scraping scenes in the identify tasks now use a context from up the
call chain.

* Do not store the scraper cache in the resolver.

Scraper caches are updated through
manager.singleton.RefreshScraperCache, so we can't keep a pointer to
it in the resolver. Instead, solve this by adding a fetcher method to
the resolver type. This keeps it local to the resolver, while handling
the problem of updating caches in the configuration.
2021-11-19 10:55:34 +11:00
SmallCoccinelle
e14bb8432c
Enable gocritic (#1848)
* Don't capitalize local variables

ValidCodecs -> validCodecs

* Capitalize deprecation markers

A deprecated marker should be capitalized.

* Use re.MustCompile for static regexes

If the regex fails to compile, it's a programmer error, and should be
treated as such. The regex is entirely static.
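
For instance (pattern illustrative):

    package scraper

    import "regexp"

    // The pattern is static, so compile it once at package init; a bad
    // pattern is a programmer error, and MustCompile's panic surfaces
    // it immediately.
    var datePattern = regexp.MustCompile(`\d{4}-\d{2}-\d{2}`)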

* Simplify else-if constructions

Rewrite

   else { if cond {}}

to

   else if cond {}

* Use a switch statement to analyze formats

Break an if-else chain. While here, simplify code flow.

Also introduce a proper static error for unsupported image formats,
paving the way for being able to check against the error.

* Rewrite ifElse chains into switch statements

The "Effective Go" https://golang.org/doc/effective_go#switch document
mentions it is more idiomatic to write if-else chains as switches when
it is possible.

Find all the plain rewrite occurrences in the code base and rewrite.
In some cases, the if-else chains are replaced by a switch scrutinizer.
That is, the code sequence

  if x == 1 {
      ..
  } else if x == 2 {
      ..
  } else if x == 3 {
      ...
  }

can be rewritten into

  switch x {
  case 1:
    ..
  case 2:
    ..
  case 3:
    ..
  }

which is clearer for the compiler: it can decide if the switch is
better served by a jump-table than a branch-chain.

* Rewrite switches, introduce static errors

Introduce two new static errors:

* `ErrNotImplemented`
* `ErrNotSupported`

And use these rather than forming new generative errors whenever the
code is called. Code can now test on the errors (since they are static
and the pointers to them won't change).
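
Sketched with a hypothetical call site:

    package main

    import (
        "errors"
        "fmt"
    )

    // Static sentinel errors: callers can test identity with errors.Is
    // even when the error is wrapped with %w further up the stack.
    var (
        ErrNotImplemented = errors.New("not implemented")
        ErrNotSupported   = errors.New("not supported")
    )

    func convert(format string) error {
        return fmt.Errorf("format %q: %w", format, ErrNotSupported)
    }

    func main() {
        if err := convert("webp"); errors.Is(err, ErrNotSupported) {
            fmt.Println("unsupported, skipping:", err)
        }
    }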

Also rewrite ifElse chains into switches in this part of the code base.

* Introduce a StashBoxError in configuration

Since all stashbox errors are the same, treat them as such in the code
base. While here, rewrite an ifElse chain.

In the future, it might be beneficial to refactor configuration errors
into one error which can handle missing fields, which context the error
occurs in and so on. But for now, try to get an overview of the error
categories by hoisting them into static errors.

* Get rid of an else-block in transaction handling

If we successfully `recover()`, we then always `panic()`. This means the
rest of the code is not reachable, so we can avoid having an else-block
here.

It also solves an ifElse-chain style check in the code base.

* Use strings.ReplaceAll

Rewrite

    strings.Replace(s, o, n, -1)

into

    strings.ReplaceAll(s, o, n)

To make it consistent and clear that we are doing an all-replace in the
string rather than replacing parts of it. It's more of a nitpick since
there are no implementation differences: the stdlib implementation is
just to supply -1.

* Rewrite via gocritic's assignOp

Statements of the form

    x = x + e

is rewritten into

    x += e

where applicable.

* Formatting

* Review comments handled

Stash-box is a proper noun.

Rewrite a switch into an if-chain which returns on the first error
encountered.

* Use context.TODO() over context.Background()

Patch in the same vein as everything else: use the TODO() marker so we
can search for it later and link it into the context tree/tentacle once
it reaches down to this level in the code base.

* Tell the linter to ignore a section in manager_tasks.go

The section is less readable, so mark it with a nolint for now. Because
the rewrite enables an ifElseChain warning, also mark that as nolint for now.

* Use strings.ReplaceAll over strings.Replace

* Apply an ifElse rewrite

else { if .. { .. } } rewrite into else if { .. }

* Use switch-statements over ifElseChains

Rewrite chains of if-else into switch statements. Where applicable,
add an early nil-guard to simplify case analysis. Also, in
ScanTask's Start(..), invert the logic to outdent the whole block, and
help the reader: if it's not a scene, the function flow is now far more
local to the top of the function, and it's clear that the rest of the
function has to do with scene management.

* Enable gocritic on the code base.

Disable appendAssign for now since we aren't passing that check yet.

* Document the nolint additions

* Document StashBoxBatchPerformerTagInput
2021-10-18 14:12:40 +11:00
SmallCoccinelle
c6f6205e4f
Errorlint sweep + minor linter tweaks (#1796)
* Replace error assertions with Go 1.13 style

Use `errors.As(..)` over type assertions. This enables better use of
wrapped errors in the future, and lets us pass some errorlint checks
in the process.

The rewrite is entirely mechanical, and uses a standard idiom for
doing so.
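
The idiom, sketched (error type chosen for illustration):

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    func handle(err error) {
        // before: a direct type assertion, blind to wrapped errors
        //   if opErr, ok := err.(*net.OpError); ok { ... }

        // after: errors.As also walks the wrap chain
        var opErr *net.OpError
        if errors.As(err, &opErr) {
            fmt.Println("op error during", opErr.Op)
        }
    }

    func main() {
        wrapped := fmt.Errorf("dial failed: %w",
            &net.OpError{Op: "dial", Err: errors.New("connection refused")})
        handle(wrapped)
    }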

* Use Go 1.13's errors.Is(..)

Rather than directly checking for error equality, use errors.Is(..).

This protects against error wrapping issues in the future.

Even though something like sql.ErrNoRows doesn't need the wrapping, do
so anyway, for the sake of consistency throughout the code base.

The change almost lets us pass the `errorlint` Go checker except for
a missing case in `js.go` which is to be handled separately; it isn't
mechanical, like these changes are.
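
The errors.Is shape, using the sql.ErrNoRows case mentioned above:

    package main

    import (
        "database/sql"
        "errors"
        "fmt"
    )

    func find() error {
        return fmt.Errorf("loading scene: %w", sql.ErrNoRows)
    }

    func main() {
        // before: err == sql.ErrNoRows, which breaks once the error
        // is wrapped; errors.Is keeps working.
        if err := find(); errors.Is(err, sql.ErrNoRows) {
            fmt.Println("not found")
        }
    }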

* Remove goconst

goconst isn't a useful linter in many cases because its false positive
rate is high. It's 100% for the current code base.

* Avoid direct comparison of errors in recover()

Assert that we are catching an error from recover(). If we are,
check that the error caught matches errStop.

* Enable the "errorlint" checker

Configure the checker to avoid checking for errorf wraps. These are
often false positives since the suggestion is to blanket wrap errors
with %w, and that exposes the underlying API which you might not want
to do.

The other warnings are good however, and with the current patch stack,
the code base passes all these checks as well.

* Configure rowserrcheck

The project uses sqlx. Configure rowserrcheck to include said package.

* Mechanically rewrite a large set of errors

Mechanically search for errors that look like

    fmt.Errorf("...%s", err.Error())

and rewrite those into

    fmt.Errorf("...%v", err)

The `fmt` package is error-aware and knows how to call err.Error()
itself.

The rationale is that this is more idiomatic Go; it paves the
way for using error wrapping later with %w in some sites.

This patch only addresses the entirely mechanical rewriting caught by
a project-side search/replace. There are more individual sites not
addressed by this patch.
2021-10-12 14:03:08 +11:00
WithoutPants
1a3a2f1f83
Scrape scene by name (#1712)
* Support scrape scene by name in configs
* Initial scene querying
* Add to manual
2021-09-14 14:54:53 +10:00
SmallCoccinelle
82a41e17c7
Avoid wrapping strings.Replace in Contains (#1710)
The strings.Replace function already handles the no-match case: if
nothing is replaced, the original string is returned. Hence, there is no
need to check if a replacement will happen before doing the work.
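
That is, the guard below is redundant:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        s := "a-b-c"

        // before: redundant guard
        if strings.Contains(s, "-") {
            s = strings.Replace(s, "-", "_", -1)
        }

        // after: Replace already returns s unchanged when nothing matches
        s = strings.Replace(s, "-", "_", -1)

        fmt.Println(s) // a_b_c
    }
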
2021-09-09 14:10:39 +10:00
SmallCoccinelle
4b00d24248
Remove unused (#1709)
* Remove stuff which isn't being used

Some fields, functions and structs aren't in use by the project. Remove
them for janitorial reasons.

* Remove more unused code

All of these functions are currently not in use. Clean up the code by
removal, since the version control has the code if need be.

* Remove unused functions

There's a large set of unused functions and variables in the code base.
Remove these, so it clearer what code to support going forward.

Dead code has been eliminated.

Where applicable, comment const-sections in tests, so reserved
identifiers are still known.

* Fix use-def of tsURL

The first def of tsURL doesn't matter because there's no use before
we hit the 2nd def.

* Remove dead code assignment

Setting logFile = "" is effectively dead code, because there's no use
of it later.

* Comment out found

The variable 'found' is dead in the function (because no post-process
action is following it). Comment it for now.

* Comment dead code in tests

These might provide hints as to what isn't covered at the moment.

* Dead code removal

In the case of constants where iota is involved, move the iota so it
matches the current key values.

This avoids problems with persistently stored key IDs.
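
Illustrated with hypothetical keys:

    package main

    import "fmt"

    // Two dead keys (values 0 and 1) were removed; offsetting iota
    // keeps the surviving keys at the values already persisted in
    // storage.
    const (
        keyGenerate = iota + 2 // was 2 before the removal, stays 2
        keyClean               // stays 3
    )

    func main() {
        fmt.Println(keyGenerate, keyClean) // 2 3
    }
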
2021-09-09 14:10:08 +10:00
WithoutPants
4625e1f955
Unify scrape refactor (#1630)
* Unify scraped types
* Make name fields optional
* Unify single scrape queries
* Change UI to use new interfaces
* Add multi scrape interfaces
* Use images instead of image
2021-09-07 11:54:22 +10:00
peolic
cc5ec650ae
Fix scraper date parser failing when parsing time (#1431)
* Don't mutate the original scraped date

`time.Parse` is case-sensitive for some values, `AM/pm` in particular
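
For reference, Go's layout token `PM` only matches upper-case input, so the parse has to normalize a copy of the value rather than mutate the original (a sketch of the issue; the actual fix may normalize differently):

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    func main() {
        scraped := "2021-05-26 7:29 am"

        // parsing the raw value fails: layout "PM" does not match "am"
        _, err := time.Parse("2006-01-02 3:04 PM", scraped)
        fmt.Println(err != nil) // true

        // normalize a copy, leaving the scraped value untouched
        t, err := time.Parse("2006-01-02 3:04 PM", strings.ToUpper(scraped))
        fmt.Println(t, err)
    }
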
2021-05-26 07:29:51 +10:00
EnameEtavir
5c4351f338
Cleanup fixes (#1422)
* cleanup: remove dead code

removing some code that does nothing

* cleanup: fixing usage of deprecated gqlgen/graphql api in api/changeset_translator

* cleanup: changing to recommended comparison methods

Changing byte and case-insensitive string comparison to the recommended methods.

* cleanup: making staticcheck happy
2021-05-25 11:03:09 +10:00
bnkai
ab24d0f625
Add subtractDays pp action to scraper (#1399) 2021-05-21 12:20:12 +10:00
bnkai
bc9aa02835
Discard null values from scraper results (#1374) 2021-05-16 16:40:54 +10:00
bnkai
597576f5e6
Get distinct values from scraper (#1338)
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
2021-04-29 11:38:55 +10:00
bnkai
aedadc3857
Add lbToKg pp action to the scraper (#1337) 2021-04-26 13:31:25 +10:00
bnkai
2edcdeaeb9
Support today, yesterday when using parseDate in scrapers (#1261) 2021-04-07 09:09:04 +10:00
WithoutPants
a0676d5c30
Performer tags (#1132)
* Add scraping support for performer tags
* Add performer count to tag cards
* Refactor sqlite test setup
* Add performer tag filtering in gallery and image
* Add bulk update performer
* Add Performers tab to tag page
* Add count filters and sort bys for tags
* Move scene count to icon in performer card #1148
2021-03-10 12:25:51 +11:00
SpedNSFW
147d0067f5
Add gallery scraping (#862) 2020-10-21 09:24:32 +11:00
woodgen
e3ea3ea85e
scraper/mapped: Add feetToCm post process. (#711)
This patch adds a feetToCm post process that converts imperial feet and
inches to centimeters.
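
The conversion itself is plain arithmetic; the post-process action additionally parses the feet and inches out of the scraped string. The core, sketched:

    package main

    import "fmt"

    // feetInchesToCm is the arithmetic behind a feetToCm-style
    // post-process: 1 ft = 30.48 cm, 1 in = 2.54 cm.
    func feetInchesToCm(feet, inches float64) float64 {
        return feet*30.48 + inches*2.54
    }

    func main() {
        fmt.Printf("%.1f\n", feetInchesToCm(5, 7)) // 170.2
    }
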
2020-08-12 11:17:43 +10:00
woodgen
4045ddf3e9
Implement scraping movies by URL (#709)
* api/urlbuilders/movie: Auto format.

* graphql+pkg+ui: Implement scraping movies by URL.

This patch implements the missing required boilerplate for scraping
movies by URL, using performers and scenes as a reference.

Although this patch contains a big chunk of groundwork for enabling
scraping movies by fragment, the feature would require additional
changes to be completely implemented and was not tested.

* graphql+pkg+ui: Scrape movie studio.

Extends and corrects the movie model so it can store and dereference
studio IDs alongside the studio string received from the scraper.
This was done with Scenes as a reference. For simplicity the duplication
of having `ScrapedMovieStudio` and `ScrapedSceneStudio` was kept, which
should probably be refactored to be the same type in the model in the
future.

* ui/movies: Add movie scrape dialog.

Adds possibility to update existing movie entries with the URL scraper.

For this the MovieScrapeDialog.tsx was implemented with Performers and
Scenes as a reference. In addition, DurationUtils needs to be called
once to convert seconds from the model to the string that is
displayed in the component. This seemed the least intrusive to me as it
kept a ScrapeResult<string> type compatible with ScrapedInputGroupRow.
2020-08-10 15:34:15 +10:00
WithoutPants
2b9215702e
Refactor xpath scraper code. Add fixed and map (#616)
* Refactor xpath scraper code
* Make post-process a list
* Add map post-process action
* Add fixed xpath values
* Refactor scrapers into cache
* Refactor into mapped config
* Trim test html
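
As an illustration of the resulting config surface, a mapped field with a post-process list might look like this (selector and mapping are made up; consult the scraper docs for the exact syntax):

    performer:
      gender:
        selector: //span[@class="gender"]
        postProcess:
          # post-process is now a list, applied in order
          - map:
              M: Male
              F: Female
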
2020-07-21 14:06:25 +10:00