feat: Crypt driver, improve http/webdav handling (#4884)
This PR has several enhancements, fixes, and features:
- [x] Crypt: a transparent encryption driver. Anyone can easily and safely store encrypted data on a remote storage provider. Think of your data as locked in a safe: the storage provider only sees the safe, never your data.
- [x] Optional compatibility with [Rclone Crypt](https://rclone.org/crypt/), giving you more ways to work with the encrypted data.
- [x] directory and filename encryption
- [x] server-side encryption mode (the server encrypts and decrypts all data, so all data flows through the server)
- [x] sensitive information is obfuscated internally
- [x] introduced a server-side, memory-cached multi-threaded downloader.
- [x] Driver: **Quark** enables this feature, so any single-threaded scenario loads faster, e.g. a media player streaming directly from the link.
- [x] general improvements to HTTP/WebDAV stream processing, header handling, and response handling
- [x] Driver: **Mega** now supports the HTTP Range header
- [x] Driver: **Quark** fixed a bug where the HTTP request to the Quark server was not closed after the client had closed its connection to alist
## Crypt, a transparent Encrypt/Decrypt driver (Rclone Crypt compatible)
e.g.
Crypt mount path -> /vault
Crypt remote path -> /ali/encrypted
Aliyun mount path -> /ali
When the user uploads a.jpg to /vault, the data is encrypted and saved to /ali/encrypted/xxxxx. When the user accesses a.jpg, it is decrypted automatically and can be used like any other file.
Since the driver is Rclone Crypt compatible, users can download /ali/encrypted/xxxxx and decrypt it with the rclone crypt tool, or mount the folder with rclone and then mount the decrypted folder in Linux...
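As a rough illustration of the path mapping above, here is a minimal, self-contained sketch (the `encryptSegment` helper is a hypothetical placeholder; the real driver encrypts each path segment with the Rclone Crypt scheme):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// encryptSegment is a hypothetical stand-in for the driver's real
// Rclone-compatible filename encryption.
func encryptSegment(name string) string { return name + ".enc" }

// toRemotePath maps a path under the Crypt mount (/vault) to the
// corresponding path under the remote storage (/ali/encrypted).
func toRemotePath(cryptMount, remoteRoot, logicalPath string) string {
	rel := strings.Trim(strings.TrimPrefix(logicalPath, cryptMount), "/")
	segs := strings.Split(rel, "/")
	for i, s := range segs {
		segs[i] = encryptSegment(s)
	}
	return path.Join(remoteRoot, path.Join(segs...))
}

func main() {
	// Prints "/ali/encrypted/a.jpg.enc" with the placeholder encryption.
	fmt.Println(toRemotePath("/vault", "/ali/encrypted", "/vault/a.jpg"))
}
```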
NB: some breaking changes were made to follow global standards, e.g. HTTP headers are now processed properly.
close #4679
close #4827
Co-authored-by: Sean He <866155+seanhe26@users.noreply.github.com>
Co-authored-by: Andy Hsu <i@nn.ci>

package net

import (
	"fmt"
	"io"
	"math"
	"mime/multipart"
	"net/http"
	"net/textproto"
	"strings"
	"time"

	"github.com/alist-org/alist/v3/pkg/http_range"
	"github.com/alist-org/alist/v3/pkg/utils"
	log "github.com/sirupsen/logrus"
)

// scanETag determines if a syntactically valid ETag is present at s. If so,
// the ETag and remaining text after consuming ETag is returned. Otherwise,
// it returns "", "".
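// For example (illustrative):
//
//	scanETag(`"abc", "xyz"`) -> `"abc"`, `, "xyz"`
//	scanETag(`W/"abc"`)      -> `W/"abc"`, ``
//	scanETag(`abc`)          -> ``, ``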
func scanETag(s string) (etag string, remain string) {
	s = textproto.TrimString(s)
	start := 0
	if strings.HasPrefix(s, "W/") {
		start = 2
	}
	if len(s[start:]) < 2 || s[start] != '"' {
		return "", ""
	}
	// ETag is either W/"text" or "text".
	// See RFC 7232 2.3.
	for i := start + 1; i < len(s); i++ {
		c := s[i]
		switch {
		// Character values allowed in ETags.
		case c == 0x21 || c >= 0x23 && c <= 0x7E || c >= 0x80:
		case c == '"':
			return s[:i+1], s[i+1:]
		default:
			return "", ""
		}
	}
	return "", ""
}

// etagStrongMatch reports whether a and b match using strong ETag comparison.
// Assumes a and b are valid ETags.
func etagStrongMatch(a, b string) bool {
	return a == b && a != "" && a[0] == '"'
}

// etagWeakMatch reports whether a and b match using weak ETag comparison.
// Assumes a and b are valid ETags.
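// For example, etagWeakMatch(`W/"x"`, `"x"`) is true, while
// etagStrongMatch reports false for the same pair.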
func etagWeakMatch(a, b string) bool {
	return strings.TrimPrefix(a, "W/") == strings.TrimPrefix(b, "W/")
}

// condResult is the result of an HTTP request precondition check.
// See https://tools.ietf.org/html/rfc7232 section 3.
type condResult int

const (
	condNone condResult = iota
	condTrue
	condFalse
)

func checkIfMatch(w http.ResponseWriter, r *http.Request) condResult {
	im := r.Header.Get("If-Match")
	if im == "" {
		return condNone
	}
	for {
		im = textproto.TrimString(im)
		if len(im) == 0 {
			break
		}
		if im[0] == ',' {
			im = im[1:]
			continue
		}
		if im[0] == '*' {
			return condTrue
		}
		etag, remain := scanETag(im)
		if etag == "" {
			break
		}
		if etagStrongMatch(etag, w.Header().Get("Etag")) {
			return condTrue
		}
		im = remain
	}

	return condFalse
}

func checkIfUnmodifiedSince(r *http.Request, modtime time.Time) condResult {
	ius := r.Header.Get("If-Unmodified-Since")
	if ius == "" || isZeroTime(modtime) {
		return condNone
	}
	t, err := http.ParseTime(ius)
	if err != nil {
		return condNone
	}

	// The Last-Modified header truncates sub-second precision so
	// the modtime needs to be truncated too.
	modtime = modtime.Truncate(time.Second)
	if ret := modtime.Compare(t); ret <= 0 {
		return condTrue
	}
	return condFalse
}

func checkIfNoneMatch(w http.ResponseWriter, r *http.Request) condResult {
	inm := r.Header.Get("If-None-Match")
	if inm == "" {
		return condNone
	}
	buf := inm
	for {
		buf = textproto.TrimString(buf)
		if len(buf) == 0 {
			break
		}
		if buf[0] == ',' {
			buf = buf[1:]
			continue
		}
		if buf[0] == '*' {
			return condFalse
		}
		etag, remain := scanETag(buf)
		if etag == "" {
			break
		}
		if etagWeakMatch(etag, w.Header().Get("Etag")) {
			return condFalse
		}
		buf = remain
	}
	return condTrue
}

func checkIfModifiedSince(r *http.Request, modtime time.Time) condResult {
	if r.Method != "GET" && r.Method != "HEAD" {
		return condNone
	}
	ims := r.Header.Get("If-Modified-Since")
	if ims == "" || isZeroTime(modtime) {
		return condNone
	}
	t, err := http.ParseTime(ims)
	if err != nil {
		return condNone
	}
	// The Last-Modified header truncates sub-second precision so
	// the modtime needs to be truncated too.
	modtime = modtime.Truncate(time.Second)
	if ret := modtime.Compare(t); ret <= 0 {
		return condFalse
	}
	return condTrue
}

func checkIfRange(w http.ResponseWriter, r *http.Request, modtime time.Time) condResult {
	if r.Method != "GET" && r.Method != "HEAD" {
		return condNone
	}
	ir := r.Header.Get("If-Range")
	if ir == "" {
		return condNone
	}
	etag, _ := scanETag(ir)
	if etag != "" {
		if etagStrongMatch(etag, w.Header().Get("Etag")) {
			return condTrue
		}
		return condFalse
	}
	// The If-Range value is typically the ETag value, but it may also be
	// the modtime date. See golang.org/issue/8367.
	if modtime.IsZero() {
		return condFalse
	}
	t, err := http.ParseTime(ir)
	if err != nil {
		return condFalse
	}
	if t.Unix() == modtime.Unix() {
		return condTrue
	}
	return condFalse
}

var unixEpochTime = time.Unix(0, 0)

// isZeroTime reports whether t is obviously unspecified (either zero or Unix()=0).
func isZeroTime(t time.Time) bool {
	return t.IsZero() || t.Equal(unixEpochTime)
}

func setLastModified(w http.ResponseWriter, modtime time.Time) {
	if !isZeroTime(modtime) {
		w.Header().Set("Last-Modified", modtime.UTC().Format(http.TimeFormat))
	}
}

func writeNotModified(w http.ResponseWriter) {
	// RFC 7232 section 4.1:
	// a sender SHOULD NOT generate representation metadata other than the
	// above listed fields unless said metadata exists for the purpose of
	// guiding cache updates (e.g., Last-Modified might be useful if the
	// response does not have an ETag field).
	h := w.Header()
	delete(h, "Content-Type")
	delete(h, "Content-Length")
	delete(h, "Content-Encoding")
	if h.Get("Etag") != "" {
		delete(h, "Last-Modified")
	}
	w.WriteHeader(http.StatusNotModified)
}

// checkPreconditions evaluates request preconditions and reports whether a precondition
// resulted in sending StatusNotModified or StatusPreconditionFailed.
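//
// A handler would typically set the validator headers first and only serve the
// body when no precondition short-circuits the request (illustrative sketch,
// not an exported API of this package):
//
//	w.Header().Set("Etag", etag)
//	setLastModified(w, modtime)
//	if done, rangeHeader := checkPreconditions(w, r, modtime); done {
//		return // a 304 or 412 response has already been written
//	} else if rangeHeader != "" {
//		// parse rangeHeader with http_range and serve the requested bytes
//	}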
func checkPreconditions(w http.ResponseWriter, r *http.Request, modtime time.Time) (done bool, rangeHeader string) {
	// This function carefully follows RFC 7232 section 6.
	ch := checkIfMatch(w, r)
	if ch == condNone {
		ch = checkIfUnmodifiedSince(r, modtime)
	}
	if ch == condFalse {
		w.WriteHeader(http.StatusPreconditionFailed)
		return true, ""
	}
	switch checkIfNoneMatch(w, r) {
	case condFalse:
		if r.Method == "GET" || r.Method == "HEAD" {
			writeNotModified(w)
			return true, ""
		}
		w.WriteHeader(http.StatusPreconditionFailed)
		return true, ""
	case condNone:
		if checkIfModifiedSince(r, modtime) == condFalse {
			writeNotModified(w)
			return true, ""
		}
	}

	rangeHeader = r.Header.Get("Range")
	if rangeHeader != "" && checkIfRange(w, r, modtime) == condFalse {
		rangeHeader = ""
	}
	return false, rangeHeader
}

func sumRangesSize(ranges []http_range.Range) (size int64) {
	for _, ra := range ranges {
		size += ra.Length
	}
	return
}

// countingWriter counts how many bytes have been written to it.
type countingWriter int64

func (w *countingWriter) Write(p []byte) (n int, err error) {
	*w += countingWriter(len(p))
	return len(p), nil
}

// rangesMIMESize returns the number of bytes it takes to encode the
// provided ranges as a multipart response.
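// The returned size includes the multipart boundaries and per-part headers
// written by mime/multipart as well as the bytes of the ranges themselves,
// so it can serve as the Content-Length of a multipart/byteranges response.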
func rangesMIMESize(ranges []http_range.Range, contentType string, contentSize int64) (encSize int64, err error) {
	var w countingWriter
	mw := multipart.NewWriter(&w)
	for _, ra := range ranges {
		_, err := mw.CreatePart(ra.MimeHeader(contentType, contentSize))
		if err != nil {
			return 0, err
		}
		encSize += ra.Length
	}
	err = mw.Close()
	if err != nil {
		return 0, err
	}
	encSize += int64(w)
	return encSize, nil
}

// LimitedReadCloser wraps an io.ReadCloser and limits the number of bytes that can be read from it.
type LimitedReadCloser struct {
	rc        io.ReadCloser
	remaining int
}

func (l *LimitedReadCloser) Read(buf []byte) (int, error) {
	if l.remaining <= 0 {
		return 0, io.EOF
	}

	if len(buf) > l.remaining {
		buf = buf[0:l.remaining]
	}

	n, err := l.rc.Read(buf)
	l.remaining -= n

	return n, err
}

func (l *LimitedReadCloser) Close() error {
	return l.rc.Close()
}

// GetRangedHttpReader is a workaround for HTTP servers that do not support the
// "Range" header: it reads the whole body from readCloser, discards the first
// offset bytes, and returns an io.ReadCloser limited to length bytes.
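//
// For example, to obtain bytes [100, 200) of a response whose upstream ignored
// the Range header (illustrative sketch):
//
//	rc, err := GetRangedHttpReader(resp.Body, 100, 100)
//	if err != nil {
//		return err
//	}
//	defer rc.Close()
//	_, err = io.Copy(dst, rc)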
func GetRangedHttpReader(readCloser io.ReadCloser, offset, length int64) (io.ReadCloser, error) {
	var length_int int
	if length > math.MaxInt {
		return nil, fmt.Errorf("does not support length larger than math.MaxInt")
	}
	length_int = int(length)

	if offset > 100*1024*1024 {
		log.Warnf("offset is more than 100MB; if loading data from the internet, expect high latency and wasted bandwidth")
	}

	if _, err := utils.CopyWithBuffer(io.Discard, io.LimitReader(readCloser, offset)); err != nil {
		return nil, err
	}

	// return an io.ReadCloser that is limited to `length` bytes.
	return &LimitedReadCloser{readCloser, length_int}, nil
}