A dedicated ConnectCommand now handles connection establishment. It
checks whether the connection has been established, and it also
handles the backup connection. Creation of the next Command is
abstracted using the ControlChain struct template.
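As a rough illustration of the idea (the names below are illustrative
and not aria2's actual ControlChain interface), the abstraction boils
down to a templated hook that decides which Command runs next:

    #include <memory>

    struct Command {
      virtual ~Command() {}
      virtual bool execute() = 0;
    };

    // Hypothetical sketch: the chain, parameterized on a context type,
    // decides what follows the current step (for example, the Command
    // that runs after ConnectCommand finishes connecting).
    template <typename Context>
    struct ControlChainSketch {
      virtual ~ControlChainSketch() {}
      virtual std::unique_ptr<Command> createNextCommand(Context& ctx) = 0;
    };
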
Issue an A record query only when a non-loopback IPv4 address is
configured. Likewise, issue an AAAA record query only when a
non-loopback, non-link-local IPv6 address is configured.
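A minimal POSIX sketch of that check (not aria2's actual code;
getifaddrs-based and simplified) could look like this:

    #include <arpa/inet.h>
    #include <ifaddrs.h>
    #include <netinet/in.h>

    struct QueryPlan {
      bool queryA = false;
      bool queryAAAA = false;
    };

    QueryPlan decideQueries() {
      QueryPlan plan;
      ifaddrs* ifa = nullptr;
      if (getifaddrs(&ifa) != 0) {
        return plan;
      }
      for (ifaddrs* i = ifa; i; i = i->ifa_next) {
        if (!i->ifa_addr) continue;
        if (i->ifa_addr->sa_family == AF_INET) {
          auto* sin = reinterpret_cast<sockaddr_in*>(i->ifa_addr);
          // A non-loopback IPv4 address makes A queries worthwhile.
          if ((ntohl(sin->sin_addr.s_addr) >> 24) != 127) {
            plan.queryA = true;
          }
        } else if (i->ifa_addr->sa_family == AF_INET6) {
          auto* sin6 = reinterpret_cast<sockaddr_in6*>(i->ifa_addr);
          // A non-loopback, non-link-local IPv6 address makes AAAA
          // queries worthwhile.
          if (!IN6_IS_ADDR_LOOPBACK(&sin6->sin6_addr) &&
              !IN6_IS_ADDR_LINKLOCAL(&sin6->sin6_addr)) {
            plan.queryAAAA = true;
          }
        }
      }
      freeifaddrs(ifa);
      return plan;
    }
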
The old implementation started to look for a faster host only when the
number of missing segments became 1. Because of the --min-split-size
option, the number of connections can drop to 1 well before the number
of missing segments does, and the download can become slow; in that
case we had to wait until the last segment was reached. The new
implementation starts to look for a faster host when the remaining
length is less than --min-split-size * 2, which mitigates the problem
stated above.
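The new trigger condition is essentially a single comparison; a sketch
with illustrative names:

    #include <cstdint>

    // Start probing for a faster host once the remaining bytes drop
    // below twice --min-split-size, instead of waiting for the last
    // segment.
    bool shouldFindFasterHost(std::int64_t remainingLength,
                              std::int64_t minSplitSize) {
      return remainingLength < 2 * minSplitSize;
    }
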
The old implementation calculated download/upload statistics for a
RequestGroup by summing up all its PeerStat objects, and the global
statistics by summing those again. This clearly incurs a runtime
penalty, and the caching we introduced to update the statistics every
250ms did not work correctly.
This change removes all of that aggregation code; instead, RequestGroup
and RequestGroupMan objects each hold a NetStat object, and
download/upload bytes are calculated directly by their own NetStat.
This is far simpler than the old way, incurs less runtime overhead, and
is more accurate.
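A minimal sketch of the new layout (class shapes are illustrative, not
aria2's actual definitions): each RequestGroup owns a NetStat,
RequestGroupMan owns the global one, and transfer callbacks update both
directly instead of aggregating PeerStat objects afterwards.

    #include <cstddef>
    #include <cstdint>

    class NetStat {
    public:
      void updateDownload(std::size_t bytes) { downloaded_ += bytes; }
      void updateUpload(std::size_t bytes) { uploaded_ += bytes; }
      std::int64_t downloadedBytes() const { return downloaded_; }
      std::int64_t uploadedBytes() const { return uploaded_; }
    private:
      std::int64_t downloaded_ = 0;
      std::int64_t uploaded_ = 0;
    };

    struct RequestGroup {
      NetStat netStat;
    };

    struct RequestGroupMan {
      NetStat globalNetStat;
      void recordDownload(RequestGroup& rg, std::size_t bytes) {
        rg.netStat.updateDownload(bytes);     // per-download statistics
        globalNetStat.updateDownload(bytes);  // global statistics
      }
    };
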
In CreateRequestCommand, if the Request object returned from
getRequest() is still sleeping, CreateRequestCommand pools it back but
still holds a reference to it. This causes an assertion error in
UnknownLengthPieceStorage::hasMissingUnusedPiece() called from
AbstractCommand::execute().
The previous implementation constructed the proxy URI in OptionHandler,
but it could not handle the situation where the username, password, and
proxy URI are given in various orders. Now we simply set the rule that
the username given in --*-proxy-user overrides the username in the
--*-proxy option, no matter in which order the username, password, and
proxy URI are parsed. Likewise, the password given in --*-proxy-passwd
overrides the password in the --*-proxy option.
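A sketch of this precedence with illustrative names; the credential
options win regardless of parse order because the effective values are
resolved when they are used, not when the options are parsed:

    #include <string>

    struct ProxyAuth {
      std::string uriUser, uriPasswd;  // parsed out of --*-proxy URI
      std::string optUser, optPasswd;  // from --*-proxy-user / -passwd

      std::string effectiveUser() const {
        return optUser.empty() ? uriUser : optUser;
      }
      std::string effectivePasswd() const {
        return optPasswd.empty() ? uriPasswd : optPasswd;
      }
    };
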
+ DownloadFailureException.
Throwing DownloadFailureException may stop the download unexpectedly
when --reuse-uri is false. By using CreateRequestCommand with
STATUS_INACTIVE, they can be executed in the next iteration with
DownloadEngine::setRefreshInterval(0).
The delay is caused because some Commands are only executed at a
certain interval (called refreshInterval, 1000ms by default). In aria2,
a download stops when all Commands associated with it have stopped.
Since some Commands are executed only every 1000ms by default, as
mentioned above, we have to wait for them. To fix this issue, we call
DownloadEngine::setRefreshInterval(0) when pausing/stopping downloads.
DownloadEngine::setRefreshInterval(0) sets refreshInterval to 0 as a
one-shot.
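A rough sketch of the one-shot behaviour (illustrative only, not
aria2's actual DownloadEngine): the zero interval applies to the next
event-loop pass, after which the default interval is restored.

    #include <chrono>

    class EngineSketch {
    public:
      void setRefreshInterval(std::chrono::milliseconds iv) {
        refreshInterval_ = iv;
      }
      void eventLoopIteration() {
        runIntervalCommandsIfDue(refreshInterval_);
        // Restore the default so a zero interval applies to one pass
        // only.
        refreshInterval_ = std::chrono::milliseconds(1000);
      }
    private:
      void runIntervalCommandsIfDue(std::chrono::milliseconds) { /* ... */ }
      std::chrono::milliseconds refreshInterval_{1000};
    };
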
When all segments are ignored, DownloadFailureException is now thrown
and the download stops immediately. As described earlier, we call
DownloadEngine::setRefreshInterval(0) in the catch block of
DownloadFailureException to eliminate the delay.
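The shape of that catch block, with hypothetical stand-in types
(EngineSketch and the simplified exception below are illustrations, not
aria2's actual classes):

    #include <chrono>
    #include <stdexcept>

    struct DownloadFailureException : std::runtime_error {
      using std::runtime_error::runtime_error;
    };
    struct EngineSketch {
      void setRefreshInterval(std::chrono::milliseconds) {}
    };

    void executeCommand(EngineSketch* engine, bool allSegmentsIgnored) {
      try {
        if (allSegmentsIgnored) {
          throw DownloadFailureException("all segments ignored");
        }
        // ... normal per-Command work ...
      } catch (DownloadFailureException&) {
        // Stop this download immediately and drop the refresh interval
        // so the remaining Commands run (and stop) on the very next
        // iteration instead of after up to 1000ms.
        engine->setRefreshInterval(std::chrono::milliseconds(0));
      }
    }
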
This option accepts a comma-separated list of DNS server addresses used
by the asynchronous DNS resolver. Usually the asynchronous DNS resolver
reads DNS server addresses from /etc/resolv.conf. When this option is
used, the DNS servers specified here are used instead of the ones in
/etc/resolv.conf. You can specify both IPv4 and IPv6 addresses. This
option is useful when the system does not have /etc/resolv.conf and the
user does not have permission to create it.
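A minimal sketch, assuming the c-ares backend that aria2 uses for
asynchronous DNS; c-ares provides ares_set_servers_csv for exactly this
comma-separated format (error handling omitted):

    #include <ares.h>

    // Apply a comma-separated server list, e.g.
    // "192.168.0.1,2001:db8::1" (IPv4 and IPv6 both accepted).
    bool applyDnsServers(ares_channel channel, const char* csv) {
      return ares_set_servers_csv(channel, csv) == ARES_SUCCESS;
    }
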
This option once existed in aria2 but was removed on 2009-09-20. Now it
is resurrected. We chose 2 as the default value, but there is no strong
rationale behind it. We now retry an HTTP download when the remote
server returns 503 Service Unavailable, provided --retry-wait > 0. We
also added error code 29: HTTP_SERVICE_UNAVAILABLE.
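The retry rule reduces to a simple predicate; a sketch with
illustrative names:

    // Retry on 503 Service Unavailable only when --retry-wait is
    // positive; otherwise the download fails with error code 29
    // (HTTP_SERVICE_UNAVAILABLE).
    bool shouldRetryOn503(int statusCode, int retryWaitSec) {
      return statusCode == 503 && retryWaitSec > 0;
    }
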
after several pipelined requests.
We call Request::setMaxPipelinedRequest(1) if Connection: close is
received. We also call Request::supportsPersistentConnection(true) and
Request::setMaxPipelinedRequest(1) when closing the connection.
We introduced SocketRecvBuffer, which buffers received bytes. Since the
HTTP response header and response body are separated by a \r\n
delimiter, we have to buffer up several bytes to find it. We use
SocketRecvBuffer to hold these bytes, consume only the header, and pass
the SocketRecvBuffer, which may contain the head of the response body,
to the next Command. Since FTPConnection doesn't use
SocketCore::peekData(), we left it as is.
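A minimal sketch of the idea behind SocketRecvBuffer (not aria2's
actual class; it assumes the usual blank-line "\r\n\r\n" end-of-header
delimiter): buffer received bytes, consume only up to the end of the
header, and leave any leading part of the body for the next Command.

    #include <cstddef>
    #include <string>

    class RecvBuffer {
    public:
      void append(const char* data, std::size_t len) {
        buf_.append(data, len);
      }

      // Returns true and extracts the header once the end-of-header
      // delimiter has been buffered; the remainder (head of the body)
      // stays in the buffer.
      bool tryTakeHeader(std::string& header) {
        const std::string::size_type pos = buf_.find("\r\n\r\n");
        if (pos == std::string::npos) return false;
        header = buf_.substr(0, pos + 4);
        buf_.erase(0, pos + 4);
        return true;
      }

      // Any buffered head of the response body, to hand to the next
      // Command.
      const std::string& pending() const { return buf_; }

    private:
      std::string buf_;
    };
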
Added .cc files for classes/structs that were previously provided only
by a header file. Defined non-POD classes' ctor and dtor in the .cc
file. Moved implementation code from header files to .cc files for
major classes/structs.
Removed SharedHandle::isNull(). Instead we added operator* and
operator unspecified_bool_type. Removed the use of WeakHandle and
replaced it with raw pointers.
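For reference, operator unspecified_bool_type is the classic pre-C++11
"safe bool" idiom; a simplified sketch (not aria2's SharedHandle) that
lets "if (handle)" replace handle.isNull() without allowing implicit
conversion to int:

    template <typename T>
    class Handle {
    public:
      explicit Handle(T* p = 0) : ptr_(p) {}
      T& operator*() const { return *ptr_; }
      T* operator->() const { return ptr_; }

      // "Safe bool": usable in boolean contexts such as if (handle),
      // but no accidental implicit conversion to int.
      typedef T* Handle::*unspecified_bool_type;
      operator unspecified_bool_type() const {
        return ptr_ ? &Handle::ptr_ : 0;
      }

    private:
      T* ptr_;
    };

With this, code like if (handle) { (*handle).doSomething(); } works,
which is what makes removing isNull() practical.
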
Fixed the bug that a file gets overwritten if -V is given and no hash
is provided. Fixed the bug that --dry-run leads to a download error.
Added RequestGroup::createCheckIntegrityEntry(), which correctly
creates CheckIntegrityEntry objects and opens files based on the -V
option and the existence of the control file.
* src/AbstractCommand.cc
* src/AbstractCommand.h
* src/ChecksumCheckIntegrityEntry.cc
* src/DownloadContext.cc
* src/DownloadContext.h
* src/FtpNegotiationCommand.cc
* src/HttpResponseCommand.cc
* src/PieceHashCheckIntegrityEntry.cc
* src/RequestGroup.cc
* src/RequestGroup.h
* src/RequestGroupEntry.cc
* src/RequestGroupEntry.h