until this change, if a user connected to the same backend with a
different path, the frontend would flicker as we'd get data back from
the wrong cache key
we used to rely on the readonly attribute but it isn't supported by
all field types, eg: select. As such, when something is read only we
now disable the field instead
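roughly, what that looks like (illustrative names, not the real form
code):
```js
// illustrative sketch: readonly is only honoured by text-like inputs,
// so a read-only <select> has to be disabled instead
function renderField(spec) {
    const $field = document.createElement(spec.type === "select" ? "select" : "input");
    if (spec.readonly) $field.setAttribute("disabled", "");
    return $field;
}
renderField({ type: "select", readonly: true }); // => <select disabled>
```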
before this commit, the issue was:
1. whenever uploading a large folder, every upload would trigger the ls change, resulting in a lot of unnecessary dom
changes which, when the user was browsing through a lot of pictures, would cause some flickering in the thumbnails.
2. large uploads were slow as virtual.before would call change
detection and refresh the list of files within the folder constantly
now, by avoiding mutations, we effectively only rerender when something
we are interested in changes: aka something changed in the folder the
user is currently in (see the sketch below)
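as a rough sketch of the approach (made-up names, not the actual
implementation):
```js
// illustrative names only: events coming from other folders are ignored,
// so uploads happening elsewhere no longer touch the DOM of the current view
function watchFolder(observer, currentPath, render) {
    observer.subscribe((event) => {
        if (event.path !== currentPath) return; // not the folder on screen
        render(event.files); // only rerender for the folder the user is in
    });
}
```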
this fixes a panic that can be reproduced using the video thumbnail
plugin by opening a page with a lot of videos. Under the hood, the
server calls ffmpeg, which makes a bunch of HTTP range requests that
hit the cache concurrently, hence causing the panic
up until now, the stance was to refuse video thumbnails because they're
too slow, but really many people don't seem to care that much about the
speed and keep insisting on having them.
With this solution, it's not in the base build, but it gives those
people an option to make it happen
instead of inventing a new protocol for resumable chunked uploads, we
might as well use something that already exists like TUS. As such, we
removed our custom implementation in favour of that standard
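for reference, a minimal client-side sketch using the tus-js-client
library; the endpoint and chunk size below are placeholders, not the
values actually used here:
```js
import * as tus from "tus-js-client";

// placeholder endpoint and chunk size; resumability comes from the TUS
// protocol itself, no custom resume logic needed anymore
const file = document.querySelector("input[type=file]").files[0];
const upload = new tus.Upload(file, {
    endpoint: "/api/files/tus",
    chunkSize: 5 * 1024 * 1024,
    metadata: { filename: file.name },
    onProgress: (sent, total) => console.log(Math.round(100 * sent / total) + "%"),
    onError: (err) => console.error("upload failed", err),
    onSuccess: () => console.log("upload complete"),
});
upload.start();
```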
issue:
1. create a 2MB file: dd if=/dev/zero of=testfile bs=1024 count=2048
2. set the chunk size to 1MB
what we see: progress goes up to 50%, then stops and jumps straight to 100%
before this, if the user had a full disk, there wouldn't be any error
reported back when editing something in the admin
console, as file.Close() would return nil ....
The only way around it is to wait for the sync to be done.
I've seen a case where someone ran out of disk space with a corrupted
config file, which gave the following fatal error on the login screen:
Uncaught TypeError: Cannot read properties of null (reading 'map')
with a stacktrace pointing to: ctrl_form.js:22:63
this fixes the assumptions made about the config file so as to not
trigger the fatal error but instead fall through to the nicer error
case where it would say:
Internal Error: There is nothing here.
which is much nicer for end users than "Cannot read properties of null"
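a hedged sketch of the kind of guard this implies (formSpec is an
illustrative name for whatever ctrl_form.js was calling .map on):
```js
// hypothetical guard: a corrupted config used to hand the form a null
// spec, making `null.map(...)` throw a fatal TypeError before any error
// screen could take over
function renderForm(formSpec) {
    if (!Array.isArray(formSpec)) {
        // falls through to the friendlier "Internal Error: There is nothing here."
        throw new Error("There is nothing here.");
    }
    return formSpec.map((field) => field.label);
}
```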
Cloudflare limits the size of file uploads to an arbitrary number. We
can get around that by using chunked uploads, but somehow that wasn't
enough. To circumvent the issue, we make it clear to the proxy that it
should close the connection, and we hope the problem will go away
on a mobile-sized screen, the sidebar wouldn't be hidden entirely, it
would still show a border artifact. We need to make sure the default
is to be of class hidden to prevent such artifacts
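a minimal sketch of the idea (selector and breakpoint are made up):
```js
// illustrative only: the sidebar starts out with the "hidden" class so
// narrow screens never flash its border, and is only revealed when the
// viewport is wide enough
const $sidebar = document.querySelector(".sidebar");
$sidebar.classList.add("hidden");
if (window.innerWidth > 800) $sidebar.classList.remove("hidden");
```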
whenever playing an audio file, quitting and coming back, the audio
context under wavesurfer.backend.ac would show a currentTime that is not
actually reset properly. Closing it or trying several other tricks
didn't fix the issue, hence this approach, which is quite dirty but
works.
Overall wavesurfer has some weird tendencies, this is just one more hack
within the audio player