2007-01-23 Tatsuhiro Tsujikawa <tujikawa at rednoah dot com>

To add chunk checksum validation:
	* src/MetalinkEntry.h
	(MetalinkChunkChecksum.h): New include.
	(chunkChecksum): New variable.
	* src/Request.h
	(method): New variable.
	(setMethod): New function.
	(getMethod): New function.
	(METHOD_GET): New static constant.
	(METHOD_HEAD): New static constant.
	* src/Xml2MetalinkProcessor.h
	(getPieceHash): New function.
	* src/PieceStorage.h
	(markAllPiecesDone): New function.
	(checkIntegrity): New function.
	* src/FileAllocator.h
	(NullFileAllocationMonitor.h): New include.
	(FileAllocator): Initialize fileAllocationMonitor with new
	NullFileAllocationMonitor().
	* src/MultiDiskAdaptor.h
	(messageDigest.h): Removed include.
	(ctx): Removed.
	(hashUpdate): Added ctx.
	(MultiDiskAdaptor): Removed ctx.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/UrlRequestInfo.h
	(HeadResult): New class.
	(digestAlgo): New variable.
	(chunkChecksumLength): New variable.
	(chunkChecksums): New variable.
	(getHeadResult): New function.
	(UrlRequestInfo): Added digestAlgo, chunkChecksumLength.
	(setDigestAlgo): New function.
	(setChunkChecksumLength): New function.
	(setChunkChecksums): New function.
	* src/DefaultPieceStorage.cc
	(DiskAdaptorWriter.h): New include.
	(ChunkChecksumValidator.h): New include.
	(markAllPiecesDone): New function.
	(checkIntegrity): New function.
	* src/DefaultBtContext.h
	(getPieceHashes): New function.
	* src/TorrentRequestInfo.cc
	(execute): Try to validate chunk checksums if the file already
	exists, the .aria2 control file is not there, and the user allows
	aria2 to overwrite it (a sketch of this flow follows this entry).
	* src/messageDigest.h
	(~MessageDigestContext): Added digestFree().
	* src/MetalinkRequestInfo.cc
	(execute): Set digestAlgo, chunkChecksum, chunkChecksums to
	reqInfo.
	* src/DiskAdaptor.h
	(messageDigest.h): New include.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/DownloadCommand.h
	(PeerStat.h): New include.
	(maxDownloadSpeedLimit): New variable.
	(startupIdleTime): New variable.
	(lowestDownloadSpeedLimit): New variable.
	(peerStat): New variable.
	(setMaxDownloadSpeedLimit): New function.
	(setStartupIdleTime): New function.
	(setLowestDownloadSpeedLimit): New function.
	* src/BtContext.h
	(getPieceHashes): New function.
	* src/main.cc
	(main): Set PREF_REALTIME_CHUNK_CHECKSUM and PREF_CHECK_INTEGRITY
	options to true for testing purposes.
	* src/BtPieceMessage.cc
	(checkPieceHash): Use messageDigest.
	* src/DownloadEngine.cc
	(SetDescriptor): Removed.
	(AccumulateActiveCommand): Removed.
	(waitData): Rewritten.
	(updateFdSet): Rewritten.
	* src/MultiDiskAdaptor.cc
	(hashUpdate): Added ctx.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/BitfieldMan.h
	(isBitRangeSet): New function.
	(unsetBitRange): New function.
	* src/ByteArrayDiskWriter.h
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/ConsoleDownloadEngine.cc
	(calculateStatistics): If nspeed < 0 then set nspeed to 0.
	* src/DiskWriter.h
	(messageDigest.h): New include.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/ChunkChecksumValidator.h: New class.
	* src/DiskAdaptorWriter.h: New class.
	* src/prefs.h
	(PREF_REALTIME_CHUNK_CHECKSUM): New definition.
	(PREF_CHECK_INTEGRITY): New definition.
	* src/HttpResponseCommand.cc
	(handleDefaultEncoding): Added method "HEAD" handling.
	Removed the call to e->segmentMan->shouldCancelDownloadForSafety().
	(handleOtherEncoding): Added the call to
	e->segmentMan->shouldCancelDownloadForSafety().
	(createHttpDownloadCommand): Set maxDownloadSpeedLimit,
	startupIdleTime, lowestDownloadSpeedLimit to command.
	* src/SegmentMan.h
	(getSegmentEntryByIndex): New function.
	(getSegmentEntryByCuid): New function.
	(getSegmentEntryIteratorByCuid): New function.
	(diskWriter): DiskWriter -> DiskWriterHandle
	(pieceHashes): New variable.
	(chunkHashLength): New variable.
	(digestAlgo): New variable.
	(FindPeerStat): Removed.
	(getPeerStat): Rewritten.
	(markAllPiecesDone): New function.
	(checkIntegrity): New function.
	(tryChunkChecksumValidation): New function.
	(isChunkChecksumValidationReady): New function.
	* src/BitfieldMan.cc
	(BitfieldMan): Initialized bitfieldLength, blocks to 0.
	(BitfieldMan): Initialized blockLength, totalLength, bitfieldLength,
	blocks to 0.
	(isBitRangeSet): New function.
	(unsetBitRange): New function.
	* src/FtpNegotiationCommand.cc
	(executeInternal): Set maxDownloadSpeedLimit,
	startupIdleTime, lowestDownloadSpeedLimit to command.
	(recvSize): Added method "HEAD" handling.
	Removed the call to e->segmentMan->shouldCancelDownloadForSafety().
	* src/AbstractSingleDiskAdaptor.cc
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/AbstractSingleDiskAdaptor.h
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/Util.h
	(indexRange): New function.
	* src/MetalinkEntry.cc
	(MetalinkEntry): Initialized chunkChecksum to 0.
	* src/ShaVisitor.cc
	(~ShaVisitor): Removed the call to ctx.digestFree().
	* src/SegmentMan.cc
	(ChunkChecksumValidator.h): New include.
	(SegmentMan): Initialized chunkHashLength to 0. Initialized
	digestAlgo to DIGEST_ALGO_SHA1.
	(~SegmentMan): Removed diskWriter.
	(FindSegmentEntryByIndex): Removed.
	(FindSegmentEntryByCuid): Removed.
	(checkoutSegment): Rewritten.
	(findSlowerSegmentEntry): Rewritten.
	(getSegment): Rewritten.
	(updateSegment): Rewritten.
	(completeSegment): Rewritten.
	(markAllPiecesDone): New function.
	(checkIntegrity): New function.
	(isChunkChecksumValidationReady): New function.
	(tryChunkChecksumValidation): New function.
	* src/Xml2MetalinkProcessor.cc
	(getEntry): Get size and set it to entry.
	Get chunk checksum and set it to entry.
	(getPieceHash): New function.
	* src/Util.cc
	(sha1Sum): Removed ctx.digestFree().
	(fileChecksum): Removed ctx.digestFree().
	(indexRange): New function.
	* src/Request.cc
	(METHOD_GET): New variable.
	(METHOD_HEAD): New variable.
	(Request): Added method.
	* src/UrlRequestInfo.cc
	(FatalException.h): New include.
	(message.h): New include.
	(operator<<): Added operator<< for class HeadResult.
	(getHeadResult): New function.
	(execute): Get filename and size in a separate download engine.
	* src/ChunkChecksumValidator.cc: New class.
	* src/DownloadCommand.cc
	(DownloadCommand): Added peerStat.
	(executeInternal): Use maxDownloadSpeedLimit member instead of
	getting the value from Option.
	The buffer size is now 16KB.
	Use peerStat member instead of getting it from SegmentMan.
	Use startupIdleTime member instead of getting it from Option.
	Added chunk checksum validation.
	* src/AbstractDiskWriter.cc
	(AbstractDiskWriter): Removed ctx.
	(~AbstractDiskWriter): Removed ctx.digestFree().
	(writeDataInternal): Returns the return value of write.
	(readDataInternal): Returns the return value of read.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/AbstractDiskWriter.h
	(messageDigest.h): Removed include.
	(ctx): Removed.
	(sha1Sum): Renamed as messageDigest.
	(messageDigest): New function.
	* src/DefaultPieceStorage.h
	(markAllPiecesDone): New function.
	(checkIntegrity): New function.
	* src/NullFileAllocationMonitor.h: New class.
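
The TorrentRequestInfo.cc, DefaultPieceStorage.cc and DownloadCommand.cc entries
above describe one feature: when the download target is already on disk, aria2
can optimistically mark every piece done and let the new chunk checksum
validator clear the pieces that fail verification; during an HTTP/FTP download,
DownloadCommand additionally calls SegmentMan::tryChunkChecksumValidation after
each completed segment when PREF_REALTIME_CHUNK_CHECKSUM is enabled. A minimal
illustrative sketch of the startup-time flow, with hypothetical locals standing
in for the actual checks performed in TorrentRequestInfo.cc:

// Sketch only. pieceStorage is the download's PieceStorageHandle; the three
// booleans are placeholders for the real option/file checks.
bool fileExists          = true;   // the target file is already on disk
bool controlFileExists   = false;  // no .aria2 control file next to it
bool userAllowsOverwrite = true;   // the user allowed overwriting
if(fileExists && !controlFileExists && userAllowsOverwrite) {
  pieceStorage->markAllPiecesDone();  // assume everything is downloaded...
  pieceStorage->checkIntegrity();     // ...then unset pieces whose chunk
                                      // checksum does not match
}
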
Tatsuhiro Tsujikawa 2007-01-24 15:55:34 +00:00
parent a1df7a762e
commit ea6d9493c8
63 changed files with 1352 additions and 302 deletions

4
TODO
View File

@ -24,5 +24,7 @@
* remove blockIndex
* Add an ability of seeding
* Add piece hash checking
* Stopping while piece hash checking and file allocation
* Stop download after selective download completes
* Remove -pg option in Makefile.am
* Continue file allocation with existing file

View File

@ -48,20 +48,11 @@ AbstractDiskWriter::AbstractDiskWriter():
fd(0),
fileAllocator(0),
logger(LogFactory::getInstance())
#ifdef ENABLE_MESSAGE_DIGEST
,ctx(DIGEST_ALGO_SHA1)
#endif // ENABLE_MESSAGE_DIGEST
{
#ifdef ENABLE_MESSAGE_DIGEST
ctx.digestInit();
#endif // ENABLE_MESSAGE_DIGEST
}
{}
AbstractDiskWriter::~AbstractDiskWriter() {
AbstractDiskWriter::~AbstractDiskWriter()
{
closeFile();
#ifdef ENABLE_MESSAGE_DIGEST
ctx.digestFree();
#endif // ENABLE_MESSAGE_DIGEST
}
void AbstractDiskWriter::openFile(const string& filename, uint64_t totalLength) {
@ -104,42 +95,42 @@ void AbstractDiskWriter::createFile(const string& filename, int32_t addFlags) {
}
}
void AbstractDiskWriter::writeDataInternal(const char* data, uint32_t len) {
if(write(fd, data, len) < 0) {
throw new DlAbortEx(EX_FILE_WRITE, filename.c_str(), strerror(errno));
}
int32_t AbstractDiskWriter::writeDataInternal(const char* data, uint32_t len) {
return write(fd, data, len);
}
int AbstractDiskWriter::readDataInternal(char* data, uint32_t len) {
int32_t ret;
if((ret = read(fd, data, len)) < 0) {
throw new DlAbortEx(EX_FILE_READ, filename.c_str(), strerror(errno));
}
return ret;
return read(fd, data, len);
}
string AbstractDiskWriter::sha1Sum(int64_t offset, uint64_t length) {
string AbstractDiskWriter::messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo)
{
#ifdef ENABLE_MESSAGE_DIGEST
ctx.digestReset();
MessageDigestContext ctx(algo);
ctx.digestInit();
uint32_t BUFSIZE = 16*1024;
int32_t BUFSIZE = 16*1024;
char buf[BUFSIZE];
for(uint64_t i = 0; i < length/BUFSIZE; i++) {
if((int32_t)BUFSIZE != readData(buf, BUFSIZE, offset)) {
int32_t rs = readData(buf, BUFSIZE, offset);
if(BUFSIZE != readData(buf, BUFSIZE, offset)) {
throw new DlAbortEx(EX_FILE_SHA1SUM, filename.c_str(), strerror(errno));
}
ctx.digestUpdate(buf, BUFSIZE);
offset += BUFSIZE;
}
uint32_t r = length%BUFSIZE;
int32_t r = length%BUFSIZE;
if(r > 0) {
if((int32_t)r != readData(buf, r, offset)) {
int32_t rs = readData(buf, r, offset);
if(r != readData(buf, r, offset)) {
throw new DlAbortEx(EX_FILE_SHA1SUM, filename.c_str(), strerror(errno));
}
ctx.digestUpdate(buf, r);
}
unsigned char hashValue[20];
ctx.digestFinal(hashValue);
return Util::toHex(hashValue, 20);
#else
return "";
@ -154,11 +145,17 @@ void AbstractDiskWriter::seek(int64_t offset) {
void AbstractDiskWriter::writeData(const char* data, uint32_t len, int64_t offset) {
seek(offset);
writeDataInternal(data, len);
if(writeDataInternal(data, len) < 0) {
throw new DlAbortEx(EX_FILE_WRITE, filename.c_str(), strerror(errno));
}
}
int AbstractDiskWriter::readData(char* data, uint32_t len, int64_t offset) {
int32_t ret;
seek(offset);
return readDataInternal(data, len);
if((ret = readDataInternal(data, len)) < 0) {
throw new DlAbortEx(EX_FILE_READ, filename.c_str(), strerror(errno));
}
return ret;
}
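
With sha1Sum renamed to messageDigest, the digest context is now created per
call instead of living in the shared ctx member, and the caller chooses the
algorithm. A minimal sketch of the new calling convention, assuming diskWriter
is an opened DiskWriterHandle and offset/chunkLength are caller-supplied values:

// Compute the SHA-1 of one chunk of the file through the renamed interface.
string chunkHash = diskWriter->messageDigest(offset, chunkLength, DIGEST_ALGO_SHA1);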

View File

@ -36,9 +36,6 @@
#define _D_ABSTRACT_DISK_WRITER_H_
#include "DiskWriter.h"
#ifdef ENABLE_MESSAGE_DIGEST
#include "messageDigest.h"
#endif // ENABLE_MESSAGE_DIGEST
#include "FileAllocator.h"
#include "Logger.h"
@ -48,13 +45,11 @@ protected:
int32_t fd;
FileAllocatorHandle fileAllocator;
const Logger* logger;
#ifdef ENABLE_MESSAGE_DIGEST
MessageDigestContext ctx;
#endif // ENABLE_MESSAGE_DIGEST
void createFile(const string& filename, int32_t addFlags = 0);
void writeDataInternal(const char* data, uint32_t len);
private:
int writeDataInternal(const char* data, uint32_t len);
int readDataInternal(char* data, uint32_t len);
void seek(int64_t offset);
@ -69,7 +64,8 @@ public:
virtual void openExistingFile(const string& filename);
virtual string sha1Sum(int64_t offset, uint64_t length);
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo);
virtual void writeData(const char* data, uint32_t len, int64_t offset);

View File

@ -59,8 +59,8 @@ int AbstractSingleDiskAdaptor::readData(unsigned char* data, uint32_t len, int64
return diskWriter->readData(data, len, offset);
}
string AbstractSingleDiskAdaptor::sha1Sum(int64_t offset, uint64_t length) {
return diskWriter->sha1Sum(offset, length);
string AbstractSingleDiskAdaptor::messageDigest(int64_t offset, uint64_t length, const MessageDigestContext::DigestAlgo& algo) {
return diskWriter->messageDigest(offset, length, algo);
}
bool AbstractSingleDiskAdaptor::fileExists()

View File

@ -60,7 +60,8 @@ public:
virtual int readData(unsigned char* data, uint32_t len, int64_t offset);
virtual string sha1Sum(int64_t offset, uint64_t length);
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo);
virtual bool fileExists();

View File

@ -42,6 +42,8 @@ BitfieldMan::BitfieldMan(uint32_t blockLength, uint64_t totalLength)
bitfield(0),
useBitfield(0),
filterBitfield(0),
bitfieldLength(0),
blocks(0),
filterEnabled(false),
randomizer(0),
cachedNumMissingBlock(0),
@ -62,9 +64,13 @@ BitfieldMan::BitfieldMan(uint32_t blockLength, uint64_t totalLength)
}
BitfieldMan::BitfieldMan(const BitfieldMan& bitfieldMan)
:bitfield(0),
:blockLength(0),
totalLength(0),
bitfield(0),
useBitfield(0),
filterBitfield(0),
bitfieldLength(0),
blocks(0),
filterEnabled(false),
randomizer(0),
cachedNumMissingBlock(0),
@ -616,3 +622,21 @@ void BitfieldMan::updateCache()
cachedCompletedLength = getCompletedLengthNow();
cachedFilteredComletedLength = getFilteredCompletedLengthNow();
}
bool BitfieldMan::isBitRangeSet(int32_t startIndex, int32_t endIndex) const
{
for(int32_t i = startIndex; i <= endIndex; ++i) {
if(!isBitSet(i)) {
return false;
}
}
return true;
}
void BitfieldMan::unsetBitRange(int32_t startIndex, int32_t endIndex)
{
for(int32_t i = startIndex; i <= endIndex; ++i) {
unsetBit(i);
}
updateCache();
}

View File

@ -248,6 +248,10 @@ public:
}
void updateCache();
bool isBitRangeSet(int32_t startIndex, int32_t endIndex) const;
void unsetBitRange(int32_t startIndex, int32_t endIndex);
};
#endif // _D_BITFIELD_MAN_H_

View File

@ -62,6 +62,8 @@ public:
virtual string getPieceHash(int index) const = 0;
virtual const Strings& getPieceHashes() const = 0;
virtual long long int getTotalLength() const = 0;
virtual FILE_MODE getFileMode() const = 0;

View File

@ -191,7 +191,7 @@ string BtPieceMessage::toString() const {
bool BtPieceMessage::checkPieceHash(const PieceHandle& piece) {
int64_t offset =
((int64_t)piece->getIndex())*btContext->getPieceLength();
return pieceStorage->getDiskAdaptor()->sha1Sum(offset, piece->getLength()) ==
return pieceStorage->getDiskAdaptor()->messageDigest(offset, piece->getLength(), DIGEST_ALGO_SHA1) ==
btContext->getPieceHash(piece->getIndex());
}

View File

@ -61,7 +61,8 @@ public:
virtual void writeData(const char* data, uint32_t len, int64_t position = 0);
virtual int readData(char* data, uint32_t len, int64_t position);
// not implemented yet
virtual string sha1Sum(int64_t offset, uint64_t length) { return ""; }
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo) { return ""; }
const char* getByteArray() const {
return buf;

View File

@ -0,0 +1,117 @@
/* <!-- copyright */
/*
* aria2 - The high speed download utility
*
* Copyright (C) 2006 Tatsuhiro Tsujikawa
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* In addition, as a special exception, the copyright holders give
* permission to link the code of portions of this program with the
* OpenSSL library under certain conditions as described in each
* individual source file, and distribute linked combinations
* including the two.
* You must obey the GNU General Public License in all respects
* for all of the code used other than OpenSSL. If you modify
* file(s) with this exception, you may extend this exception to your
* version of the file(s), but you are not obligated to do so. If you
* do not wish to do so, delete this exception statement from your
* version. If you delete this exception statement from all source
* files in the program, then also delete it here.
*/
/* copyright --> */
#include "ChunkChecksumValidator.h"
#include "Util.h"
#include "Exception.h"
#include "TimeA2.h"
void ChunkChecksumValidator::validateSameLengthChecksum(BitfieldMan* bitfieldMan,
int32_t index,
const string& expectedChecksum,
uint32_t dataLength,
uint32_t checksumLength)
{
int64_t offset = index*checksumLength;
string actualChecksum = diskWriter->messageDigest(offset, dataLength, algo);
if(actualChecksum != expectedChecksum) {
logger->error("Chunk checksum validation failed. checksumIndex=%d, offset=%lld, length=%u, expected=%s, actual=%s",
index, offset, dataLength, expectedChecksum.c_str(), actualChecksum.c_str());
bitfieldMan->unsetBit(index);
}
}
void ChunkChecksumValidator::validateDifferentLengthChecksum(BitfieldMan* bitfieldMan,
int32_t index,
const string& expectedChecksum,
uint32_t dataLength,
uint32_t checksumLength)
{
int64_t offset = index*checksumLength;
int32_t startIndex;
int32_t endIndex;
Util::indexRange(startIndex, endIndex, offset,
checksumLength, bitfieldMan->getBlockLength());
if(bitfieldMan->isBitRangeSet(startIndex, endIndex)) {
string actualChecksum = diskWriter->messageDigest(offset, dataLength, algo);
if(expectedChecksum != actualChecksum) {
// wrong checksum
logger->error("Chunk checksum validation failed. checksumIndex=%d, offset=%lld, length=%u, expected=%s, actual=%s",
index, offset, dataLength,
expectedChecksum.c_str(), actualChecksum.c_str());
bitfieldMan->unsetBitRange(startIndex, endIndex);
}
}
}
void ChunkChecksumValidator::validate(BitfieldMan* bitfieldMan,
const Strings& checksums,
uint32_t checksumLength)
{
// We assume file is already opened using DiskWriter::open or openExistingFile.
if(checksumLength*checksums.size() < bitfieldMan->getTotalLength()) {
// insufficient checksums.
logger->error("Insufficient checksums. checksumLength=%u, numChecksum=%u",
checksumLength, checksums.size());
return;
}
uint32_t x = bitfieldMan->getTotalLength()/checksumLength;
uint32_t r = bitfieldMan->getTotalLength()%checksumLength;
void (ChunkChecksumValidator::*f)(BitfieldMan*, int32_t, const string&, uint32_t, uint32_t);
if(checksumLength == bitfieldMan->getBlockLength()) {
f = &ChunkChecksumValidator::validateSameLengthChecksum;
} else {
f = &ChunkChecksumValidator::validateDifferentLengthChecksum;
}
fileAllocationMonitor->setMinValue(0);
fileAllocationMonitor->setMaxValue(bitfieldMan->getTotalLength());
fileAllocationMonitor->setCurrentValue(0);
fileAllocationMonitor->showProgress();
Time cp;
for(uint32_t i = 0; i < x; ++i) {
(this->*f)(bitfieldMan, i, checksums.at(i), checksumLength, checksumLength);
if(cp.elapsedInMillis(500)) {
fileAllocationMonitor->setCurrentValue(i*checksumLength);
fileAllocationMonitor->showProgress();
cp.reset();
}
}
if(r) {
(this->*f)(bitfieldMan, x, checksums.at(x), r, checksumLength);
}
fileAllocationMonitor->setCurrentValue(bitfieldMan->getTotalLength());
fileAllocationMonitor->showProgress();
}
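
validateDifferentLengthChecksum above relies on Util::indexRange to map a
chunk's byte range onto the bitfield block indices it covers; the
implementation in src/Util.cc is not shown in this excerpt. A standalone sketch
of the behaviour implied by the call site (signature and arithmetic are an
assumption, not the committed code):

#include <stdint.h>

// Hypothetical reconstruction: blocks [startIndex, endIndex] are the ones
// overlapped by the byte range [offset, offset + dataLength).
void indexRange(int32_t& startIndex, int32_t& endIndex,
                int64_t offset, uint32_t dataLength, int32_t blockLength)
{
  startIndex = static_cast<int32_t>(offset/blockLength);
  endIndex = static_cast<int32_t>((offset+dataLength-1)/blockLength);
}

// Example: a 1 MiB checksum chunk at offset 2 MiB over 256 KiB blocks covers
// blocks 8..11, so a mismatch leads to unsetBitRange(8, 11) and all four
// blocks are downloaded again.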

View File

@ -0,0 +1,94 @@
/* <!-- copyright */
/*
* aria2 - The high speed download utility
*
* Copyright (C) 2006 Tatsuhiro Tsujikawa
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* In addition, as a special exception, the copyright holders give
* permission to link the code of portions of this program with the
* OpenSSL library under certain conditions as described in each
* individual source file, and distribute linked combinations
* including the two.
* You must obey the GNU General Public License in all respects
* for all of the code used other than OpenSSL. If you modify
* file(s) with this exception, you may extend this exception to your
* version of the file(s), but you are not obligated to do so. If you
* do not wish to do so, delete this exception statement from your
* version. If you delete this exception statement from all source
* files in the program, then also delete it here.
*/
/* copyright --> */
#ifndef _D_CHUNK_CHECKSUM_VALIDATOR_H_
#define _D_CHUNK_CHECKSUM_VALIDATOR_H_
#include "common.h"
#include "DiskWriter.h"
#include "BitfieldMan.h"
#include "messageDigest.h"
#include "LogFactory.h"
#include "FileAllocationMonitor.h"
#include "NullFileAllocationMonitor.h"
class ChunkChecksumValidator {
private:
DiskWriterHandle diskWriter;
MessageDigestContext::DigestAlgo algo;
FileAllocationMonitorHandle fileAllocationMonitor;
const Logger* logger;
void validateSameLengthChecksum(BitfieldMan* bitfieldMan,
int32_t index,
const string& expectedChecksum,
uint32_t thisLength,
uint32_t checksumLength);
void validateDifferentLengthChecksum(BitfieldMan* bitfieldMan,
int32_t index,
const string& expectedChecksum,
uint32_t thisLength,
uint32_t checksumLength);
public:
ChunkChecksumValidator():
diskWriter(0),
algo(DIGEST_ALGO_SHA1),
fileAllocationMonitor(new NullFileAllocationMonitor()),
logger(LogFactory::getInstance())
{}
~ChunkChecksumValidator() {}
void validate(BitfieldMan* bitfieldMan,
const Strings& checksums,
uint32_t checksumLength);
void setDiskWriter(const DiskWriterHandle& diskWriter) {
this->diskWriter = diskWriter;
}
void setDigestAlgo(const MessageDigestContext::DigestAlgo& algo) {
this->algo = algo;
}
void setFileAllocationMonitor(const FileAllocationMonitorHandle& monitor) {
this->fileAllocationMonitor = monitor;
}
};
#endif // _D_CHUNK_CHECKSUM_VALIDATOR_H_

View File

@ -76,6 +76,9 @@ void ConsoleDownloadEngine::calculateStatistics() {
int elapsed = cp.difference();
if(elapsed >= 1) {
int nspeed = (int)((dlSize-psize)/elapsed);
if(nspeed < 0) {
nspeed = 0;
}
speed = (nspeed+speed)/2;
cp.reset();
psize = dlSize;

View File

@ -78,6 +78,11 @@ private:
virtual string getPieceHash(int index) const;
virtual const Strings& getPieceHashes() const
{
return pieceHashes;
}
virtual long long int getTotalLength() const;
virtual FILE_MODE getFileMode() const;

View File

@ -42,6 +42,8 @@
#include "DlAbortEx.h"
#include "BitfieldManFactory.h"
#include "FileAllocationMonitor.h"
#include "DiskAdaptorWriter.h"
#include "ChunkChecksumValidator.h"
DefaultPieceStorage::DefaultPieceStorage(BtContextHandle btContext, const Option* option):
btContext(btContext),
@ -428,3 +430,20 @@ void DefaultPieceStorage::removeAdvertisedPiece(int elapsed) {
haves.erase(itr, haves.end());
}
}
void DefaultPieceStorage::markAllPiecesDone()
{
bitfieldMan->setAllBit();
}
void DefaultPieceStorage::checkIntegrity()
{
logger->notice("Validating file %s",
diskAdaptor->getFilePath().c_str());
ChunkChecksumValidator v;
v.setDigestAlgo(DIGEST_ALGO_SHA1);
v.setDiskWriter(new DiskAdaptorWriter(diskAdaptor));
v.setFileAllocationMonitor(FileAllocationMonitorFactory::getFactory()->createNewMonitor());
v.validate(bitfieldMan, btContext->getPieceHashes(),
btContext->getPieceLength());
}

View File

@ -152,10 +152,15 @@ public:
virtual void removeAdvertisedPiece(int elapsed);
virtual void markAllPiecesDone();
virtual void checkIntegrity();
/**
* This method is made private for test purpose only.
*/
void addUsedPiece(const PieceHandle& piece);
};
#endif // _D_DEFAULT_PIECE_STORAGE_H_

View File

@ -38,6 +38,7 @@
#include "common.h"
#include "FileEntry.h"
#include "Logger.h"
#include "messageDigest.h"
class DiskAdaptor {
protected:
@ -60,7 +61,8 @@ public:
virtual int readData(unsigned char* data, uint32_t len, int64_t offset) = 0;
virtual string sha1Sum(int64_t offset, uint64_t length) = 0;
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo) = 0;
virtual void onDownloadComplete() = 0;

87
src/DiskAdaptorWriter.h Normal file
View File

@ -0,0 +1,87 @@
/* <!-- copyright */
/*
* aria2 - The high speed download utility
*
* Copyright (C) 2006 Tatsuhiro Tsujikawa
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* In addition, as a special exception, the copyright holders give
* permission to link the code of portions of this program with the
* OpenSSL library under certain conditions as described in each
* individual source file, and distribute linked combinations
* including the two.
* You must obey the GNU General Public License in all respects
* for all of the code used other than OpenSSL. If you modify
* file(s) with this exception, you may extend this exception to your
* version of the file(s), but you are not obligated to do so. If you
* do not wish to do so, delete this exception statement from your
* version. If you delete this exception statement from all source
* files in the program, then also delete it here.
*/
/* copyright --> */
#ifndef _D_DISK_ADAPTOR_WRITER_H_
#define _D_DISK_ADAPTOR_WRITER_H_
#include "DiskWriter.h"
#include "DiskAdaptor.h"
class DiskAdaptorWriter : public DiskWriter {
private:
DiskAdaptorHandle diskAdaptor;
public:
DiskAdaptorWriter(const DiskAdaptorHandle& diskAdaptor):
diskAdaptor(diskAdaptor) {}
virtual ~DiskAdaptorWriter() {}
virtual void initAndOpenFile(const string& filename, uint64_t totalLength = 0)
{
diskAdaptor->initAndOpenFile();
}
virtual void openFile(const string& filename, uint64_t totalLength = 0)
{
diskAdaptor->openFile();
}
virtual void closeFile()
{
diskAdaptor->closeFile();
}
virtual void openExistingFile(const string& filename)
{
diskAdaptor->openExistingFile();
}
virtual void writeData(const char* data, uint32_t len, int64_t position = 0)
{
diskAdaptor->writeData((const unsigned char*)data, len, position);
}
virtual int readData(char* data, uint32_t len, int64_t position)
{
return diskAdaptor->readData((unsigned char*)data, len, position);
}
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo)
{
return diskAdaptor->messageDigest(offset, length, algo);
}
};
#endif // _D_DISK_ADAPTOR_WRITER_H_
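
DiskAdaptorWriter is a thin adapter: ChunkChecksumValidator only speaks
DiskWriterHandle, so wrapping a DiskAdaptor lets the same validator serve both
the single-file HTTP/FTP path (SegmentMan already holds a DiskWriterHandle) and
the multi-file BitTorrent/Metalink path. A condensed sketch of how
DefaultPieceStorage::checkIntegrity, shown earlier in this commit, wires it up:

// Condensed from DefaultPieceStorage::checkIntegrity (monitor setup omitted).
ChunkChecksumValidator v;
v.setDigestAlgo(DIGEST_ALGO_SHA1);
v.setDiskWriter(new DiskAdaptorWriter(diskAdaptor));  // adapt DiskAdaptor -> DiskWriter
v.validate(bitfieldMan, btContext->getPieceHashes(), btContext->getPieceLength());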

View File

@ -37,6 +37,9 @@
#include <string>
#include "common.h"
#ifdef ENABLE_MESSAGE_DIGEST
#include "messageDigest.h"
#endif // ENABLE_MESSAGE_DIGEST
using namespace std;
@ -89,7 +92,8 @@ public:
return readData((char*)data, len, position);
}
virtual string sha1Sum(int64_t offset, uint64_t length) = 0;
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo) = 0;
};
typedef SharedHandle<DiskWriter> DiskWriterHandle;

View File

@ -46,25 +46,23 @@ DownloadCommand::DownloadCommand(int cuid,
const RequestHandle req,
DownloadEngine* e,
const SocketHandle& s):
AbstractCommand(cuid, req, e, s), lastSize(0) {
PeerStatHandle peerStat = this->e->segmentMan->getPeerStat(cuid);
AbstractCommand(cuid, req, e, s), lastSize(0), peerStat(0) {
peerStat = this->e->segmentMan->getPeerStat(cuid);
if(!peerStat.get()) {
peerStat = PeerStatHandle(new PeerStat(cuid));
peerStat = new PeerStat(cuid);
this->e->segmentMan->registerPeerStat(peerStat);
}
peerStat->downloadStart();
}
DownloadCommand::~DownloadCommand() {
PeerStatHandle peerStat = e->segmentMan->getPeerStat(cuid);
assert(peerStat.get());
peerStat->downloadStop();
}
bool DownloadCommand::executeInternal(Segment& segment) {
int maxSpeedLimit = e->option->getAsInt(PREF_MAX_DOWNLOAD_LIMIT);
if(maxSpeedLimit > 0 &&
maxSpeedLimit < e->segmentMan->calculateDownloadSpeed()) {
if(maxDownloadSpeedLimit > 0 &&
maxDownloadSpeedLimit < e->segmentMan->calculateDownloadSpeed()) {
usleep(1);
e->commands.push_back(this);
return false;
@ -74,13 +72,11 @@ bool DownloadCommand::executeInternal(Segment& segment) {
te = getTransferEncoding(transferEncoding);
assert(te != NULL);
}
int bufSize = 4096;
int bufSize = 16*1024;//4096;
char buf[bufSize];
socket->readData(buf, bufSize);
PeerStatHandle peerStat = e->segmentMan->getPeerStat(cuid);
assert(peerStat.get());
if(te != NULL) {
int infbufSize = 4096;
int infbufSize = 16*1024;//4096;
char infbuf[infbufSize];
te->inflate(infbuf, infbufSize, buf, bufSize);
e->segmentMan->diskWriter->writeData(infbuf, infbufSize,
@ -94,14 +90,13 @@ bool DownloadCommand::executeInternal(Segment& segment) {
peerStat->updateDownloadLength(bufSize);
}
// calculate downloading speed
if(peerStat->getDownloadStartTime().elapsed(e->option->getAsInt(PREF_STARTUP_IDLE_TIME))) {
int lowestLimit = e->option->getAsInt(PREF_LOWEST_SPEED_LIMIT);
int nowSpeed = peerStat->calculateDownloadSpeed();
if(lowestLimit > 0 && nowSpeed <= lowestLimit) {
if(peerStat->getDownloadStartTime().elapsed(startupIdleTime)) {
uint32_t nowSpeed = peerStat->calculateDownloadSpeed();
if(lowestDownloadSpeedLimit > 0 && nowSpeed <= lowestDownloadSpeedLimit) {
throw new DlAbortEx("CUID#%d - Too slow Downloading speed: %d <= %d(B/s)",
cuid,
nowSpeed,
lowestLimit);
lowestDownloadSpeedLimit);
}
}
if(e->segmentMan->totalSize != 0 && bufSize == 0) {
@ -113,6 +108,9 @@ bool DownloadCommand::executeInternal(Segment& segment) {
if(te != NULL) te->end();
logger->info(MSG_DOWNLOAD_COMPLETED, cuid);
e->segmentMan->completeSegment(cuid, segment);
if(e->option->get(PREF_REALTIME_CHUNK_CHECKSUM) == V_TRUE) {
e->segmentMan->tryChunkChecksumValidation(segment);
}
// this unit is going to download another segment.
return prepareForNextSegment(segment);
} else {

View File

@ -38,12 +38,17 @@
#include "AbstractCommand.h"
#include "TransferEncoding.h"
#include "TimeA2.h"
#include "PeerStat.h"
using namespace std;
class DownloadCommand : public AbstractCommand {
private:
long long int lastSize;
uint32_t maxDownloadSpeedLimit;
uint32_t startupIdleTime;
uint32_t lowestDownloadSpeedLimit;
PeerStatHandle peerStat;
protected:
bool executeInternal(Segment& segment);
@ -57,6 +62,17 @@ public:
string transferEncoding;
void setMaxDownloadSpeedLimit(uint32_t maxDownloadSpeedLimit) {
this->maxDownloadSpeedLimit = maxDownloadSpeedLimit;
}
void setStartupIdleTime(uint32_t startupIdleTime) {
this->startupIdleTime = startupIdleTime;
}
void setLowestDownloadSpeedLimit(uint32_t lowestDownloadSpeedLimit) {
this->lowestDownloadSpeedLimit = lowestDownloadSpeedLimit;
}
};
#endif // _D_DOWNLOAD_COMMAND_H_

View File

@ -108,89 +108,6 @@ void DownloadEngine::shortSleep() const {
select(0, &rfds, NULL, NULL, &tv);
}
class SetDescriptor {
private:
int* max_ptr;
fd_set* rfds_ptr;
fd_set* wfds_ptr;
public:
SetDescriptor(int* max_ptr, fd_set* rfds_ptr, fd_set* wfds_ptr):
max_ptr(max_ptr),
rfds_ptr(rfds_ptr),
wfds_ptr(wfds_ptr) {}
void operator()(const SocketEntry& entry) {
int fd = entry.socket->getSockfd();
switch(entry.type) {
case SocketEntry::TYPE_RD:
FD_SET(fd, rfds_ptr);
break;
case SocketEntry::TYPE_WR:
FD_SET(fd, wfds_ptr);
break;
}
if(*max_ptr < fd) {
*max_ptr = fd;
}
}
#ifdef ENABLE_ASYNC_DNS
void operator()(const NameResolverEntry& entry) {
int tempFd = entry.nameResolver->getFds(rfds_ptr, wfds_ptr);
if(*max_ptr < tempFd) {
*max_ptr = tempFd;
}
}
#endif // ENABLE_ASYNC_DNS
};
class AccumulateActiveCommand {
private:
Commands* activeCommands_ptr;
fd_set* rfds_ptr;
fd_set* wfds_ptr;
public:
AccumulateActiveCommand(Commands* activeCommands_ptr,
fd_set* rfds_ptr,
fd_set* wfds_ptr):
activeCommands_ptr(activeCommands_ptr),
rfds_ptr(rfds_ptr),
wfds_ptr(wfds_ptr) {}
void operator()(const SocketEntry& entry) {
if(FD_ISSET(entry.socket->getSockfd(), rfds_ptr) ||
FD_ISSET(entry.socket->getSockfd(), wfds_ptr)) {
activeCommands_ptr->push_back(entry.command);
}
/*
switch(entry.type) {
case SocketEntry::TYPE_RD:
if(FD_ISSET(entry.socket->getSockfd(), rfds_ptr)) {
activeCommands_ptr->push_back(entry.command);
}
break;
case SocketEntry::TYPE_WR:
if(FD_ISSET(entry.socket->getSockfd(), wfds_ptr)) {
activeCommands_ptr->push_back(entry.command);
}
break;
}
*/
}
#ifdef ENABLE_ASYNC_DNS
void operator()(const NameResolverEntry& entry) {
entry.nameResolver->process(rfds_ptr, wfds_ptr);
switch(entry.nameResolver->getStatus()) {
case NameResolver::STATUS_SUCCESS:
case NameResolver::STATUS_ERROR:
activeCommands_ptr->push_back(entry.command);
break;
default:
break;
}
}
#endif // ENABLE_ASYNC_DNS
};
void DownloadEngine::waitData(Commands& activeCommands) {
fd_set rfds;
fd_set wfds;
@ -204,16 +121,33 @@ void DownloadEngine::waitData(Commands& activeCommands) {
tv.tv_usec = 0;
retval = select(fdmax+1, &rfds, &wfds, NULL, &tv);
if(retval > 0) {
for_each(socketEntries.begin(), socketEntries.end(),
AccumulateActiveCommand(&activeCommands, &rfds, &wfds));
for(SocketEntries::iterator itr = socketEntries.begin();
itr != socketEntries.end(); ++itr) {
SocketEntry& entry = *itr;
if(FD_ISSET(entry.socket->getSockfd(), &rfds) ||
FD_ISSET(entry.socket->getSockfd(), &wfds)) {
if(find(activeCommands.begin(), activeCommands.end(), entry.command) == activeCommands.end()) {
activeCommands.push_back(entry.command);
}
}
}
#ifdef ENABLE_ASYNC_DNS
for_each(nameResolverEntries.begin(), nameResolverEntries.end(),
AccumulateActiveCommand(&activeCommands, &rfds, &wfds));
for(NameResolverEntries::iterator itr = nameResolverEntries.begin();
itr != nameResolverEntries.end(); ++itr) {
NameResolverEntry& entry = *itr;
entry.nameResolver->process(&rfds, &wfds);
switch(entry.nameResolver->getStatus()) {
case NameResolver::STATUS_SUCCESS:
case NameResolver::STATUS_ERROR:
if(find(activeCommands.begin(), activeCommands.end(), entry.command) == activeCommands.end()) {
activeCommands.push_back(entry.command);
}
break;
default:
break;
}
}
#endif // ENABLE_ASYNC_DNS
sort(activeCommands.begin(), activeCommands.end());
activeCommands.erase(unique(activeCommands.begin(),
activeCommands.end()),
activeCommands.end());
}
}
@ -222,11 +156,31 @@ void DownloadEngine::updateFdSet() {
FD_ZERO(&rfdset);
FD_ZERO(&wfdset);
#ifdef ENABLE_ASYNC_DNS
for_each(nameResolverEntries.begin(), nameResolverEntries.end(),
SetDescriptor(&fdmax, &rfdset, &wfdset));
for(NameResolverEntries::iterator itr = nameResolverEntries.begin();
itr != nameResolverEntries.end(); ++itr) {
NameResolverEntry& entry = *itr;
int fd = entry.nameResolver->getFds(&rfdset, &wfdset);
if(fdmax < fd) {
fdmax = fd;
}
}
#endif // ENABLE_ASYNC_DNS
for_each(socketEntries.begin(), socketEntries.end(),
SetDescriptor(&fdmax, &rfdset, &wfdset));
for(SocketEntries::iterator itr = socketEntries.begin();
itr != socketEntries.end(); ++itr) {
SocketEntry& entry = *itr;
int fd = entry.socket->getSockfd();
switch(entry.type) {
case SocketEntry::TYPE_RD:
FD_SET(fd, &rfdset);
break;
case SocketEntry::TYPE_WR:
FD_SET(fd, &wfdset);
break;
}
if(fdmax < fd) {
fdmax = fd;
}
}
}
bool DownloadEngine::addSocket(const SocketEntry& entry) {

View File

@ -37,12 +37,13 @@
#include "common.h"
#include "FileAllocationMonitor.h"
#include "NullFileAllocationMonitor.h"
class FileAllocator {
private:
FileAllocationMonitorHandle fileAllocationMonitor;
public:
FileAllocator():fileAllocationMonitor(0) {}
FileAllocator():fileAllocationMonitor(new NullFileAllocationMonitor()) {}
~FileAllocator() {}

View File

@ -62,6 +62,9 @@ bool FtpNegotiationCommand::executeInternal(Segment& segment) {
} else if(sequence == SEQ_NEGOTIATION_COMPLETED) {
FtpDownloadCommand* command =
new FtpDownloadCommand(cuid, req, e, dataSocket, socket);
command->setMaxDownloadSpeedLimit(e->option->getAsInt(PREF_MAX_DOWNLOAD_LIMIT));
command->setStartupIdleTime(e->option->getAsInt(PREF_STARTUP_IDLE_TIME));
command->setLowestDownloadSpeedLimit(e->option->getAsInt(PREF_LOWEST_SPEED_LIMIT));
e->commands.push_back(command);
return true;
} else {
@ -183,17 +186,21 @@ bool FtpNegotiationCommand::recvSize() {
throw new DlAbortEx(EX_TOO_LARGE_FILE, size);
}
if(!e->segmentMan->downloadStarted) {
if(req->getMethod() == Request::METHOD_HEAD) {
e->segmentMan->downloadStarted = true;
e->segmentMan->totalSize = size;
e->segmentMan->initBitfield(e->option->getAsInt(PREF_SEGMENT_SIZE),
e->segmentMan->totalSize);
e->segmentMan->markAllPiecesDone();
e->segmentMan->isSplittable = false; // TODO because we don't want segment file to be saved.
return true;
}
e->segmentMan->downloadStarted = true;
e->segmentMan->totalSize = size;
e->segmentMan->initBitfield(e->option->getAsInt(PREF_SEGMENT_SIZE),
e->segmentMan->totalSize);
e->segmentMan->filename = Util::urldecode(req->getFile());
if(e->segmentMan->shouldCancelDownloadForSafety()) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
e->segmentMan->getFilePath().c_str(),
e->segmentMan->getSegmentFilePath().c_str());
}
bool segFileExists = e->segmentMan->segmentFileExists();
if(segFileExists) {
e->segmentMan->load();

View File

@ -153,6 +153,17 @@ bool HttpResponseCommand::handleDefaultEncoding(const HttpHeader& headers) {
}
e->segmentMan->isSplittable = !(size == 0);
e->segmentMan->filename = determinFilename(headers);
// quick hack for method 'head'
if(req->getMethod() == Request::METHOD_HEAD) {
e->segmentMan->downloadStarted = true;
e->segmentMan->totalSize = size;
e->segmentMan->initBitfield(e->option->getAsInt(PREF_SEGMENT_SIZE),
e->segmentMan->totalSize);
e->segmentMan->markAllPiecesDone();
e->segmentMan->isSplittable = false; // TODO because we don't want segment file to be saved.
return true;
}
bool segFileExists = e->segmentMan->segmentFileExists();
e->segmentMan->downloadStarted = true;
if(segFileExists) {
@ -161,11 +172,6 @@ bool HttpResponseCommand::handleDefaultEncoding(const HttpHeader& headers) {
// send request again to the server with Range header
return prepareForRetry(0);
} else {
if(e->segmentMan->shouldCancelDownloadForSafety()) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
e->segmentMan->getFilePath().c_str(),
e->segmentMan->getSegmentFilePath().c_str());
}
e->segmentMan->totalSize = size;
e->segmentMan->initBitfield(e->option->getAsInt(PREF_SEGMENT_SIZE),
e->segmentMan->totalSize);
@ -176,6 +182,11 @@ bool HttpResponseCommand::handleDefaultEncoding(const HttpHeader& headers) {
}
bool HttpResponseCommand::handleOtherEncoding(const string& transferEncoding, const HttpHeader& headers) {
if(e->segmentMan->shouldCancelDownloadForSafety()) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
e->segmentMan->getFilePath().c_str(),
e->segmentMan->getSegmentFilePath().c_str());
}
// we ignore content-length when transfer-encoding is set
e->segmentMan->downloadStarted = true;
e->segmentMan->isSplittable = false;
@ -192,6 +203,9 @@ bool HttpResponseCommand::handleOtherEncoding(const string& transferEncoding, co
void HttpResponseCommand::createHttpDownloadCommand(const string& transferEncoding) {
HttpDownloadCommand* command = new HttpDownloadCommand(cuid, req, e, socket);
command->setMaxDownloadSpeedLimit(e->option->getAsInt(PREF_MAX_DOWNLOAD_LIMIT));
command->setStartupIdleTime(e->option->getAsInt(PREF_STARTUP_IDLE_TIME));
command->setLowestDownloadSpeedLimit(e->option->getAsInt(PREF_LOWEST_SPEED_LIMIT));
TransferEncoding* enc = NULL;
if(transferEncoding.size() && (enc = command->getTransferEncoding(transferEncoding)) == NULL) {
delete(command);
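
The HEAD handling added here pairs with UrlRequestInfo::getHeadResult
(described in the ChangeLog but outside this excerpt): the URL is first fetched
with method "head" in a short-lived engine just to learn the filename and total
size, the handler above marks all pieces done so that engine exits immediately,
and the real GET download is then set up with the chunk checksum data. A rough
sketch of the probing side, using only the Request API added in this commit
(engine setup and error handling omitted; url is a caller-supplied string):

// Sketch of the HEAD probe.
RequestHandle req = new Request();
req->setUrl(url);
req->setMethod(Request::METHOD_HEAD);   // constant added in src/Request.h
// ...run req in its own DownloadEngine, then read the discovered
// e->segmentMan->filename and e->segmentMan->totalSize for the real GET...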

View File

@ -63,7 +63,8 @@ SRCS = Socket.h\
SimpleRandomizer.cc SimpleRandomizer.h\
FileAllocator.cc FileAllocator.h\
FileAllocationMonitor.cc FileAllocationMonitor.h\
ConsoleFileAllocationMonitor.cc ConsoleFileAllocationMonitor.h
ConsoleFileAllocationMonitor.cc ConsoleFileAllocationMonitor.h\
ChunkChecksumValidator.cc ChunkChecksumValidator.h
# debug_new.cpp
if ENABLE_ASYNC_DNS

View File

@ -212,6 +212,7 @@ am__libaria2c_a_SOURCES_DIST = Socket.h SocketCore.cc SocketCore.h \
SimpleRandomizer.h FileAllocator.cc FileAllocator.h \
FileAllocationMonitor.cc FileAllocationMonitor.h \
ConsoleFileAllocationMonitor.cc ConsoleFileAllocationMonitor.h \
ChunkChecksumValidator.cc ChunkChecksumValidator.h \
NameResolver.cc NameResolver.h MetaEntry.h Data.cc Data.h \
Dictionary.cc Dictionary.h List.cc List.h MetaFileUtil.cc \
MetaFileUtil.h MetaEntryVisitor.h ShaVisitor.cc ShaVisitor.h \
@ -375,7 +376,8 @@ am__objects_4 = SocketCore.$(OBJEXT) Command.$(OBJEXT) \
SpeedCalc.$(OBJEXT) BitfieldMan.$(OBJEXT) \
BitfieldManFactory.$(OBJEXT) SimpleRandomizer.$(OBJEXT) \
FileAllocator.$(OBJEXT) FileAllocationMonitor.$(OBJEXT) \
ConsoleFileAllocationMonitor.$(OBJEXT) $(am__objects_1) \
ConsoleFileAllocationMonitor.$(OBJEXT) \
ChunkChecksumValidator.$(OBJEXT) $(am__objects_1) \
$(am__objects_2) $(am__objects_3)
am_libaria2c_a_OBJECTS = $(am__objects_4)
libaria2c_a_OBJECTS = $(am_libaria2c_a_OBJECTS)
@ -582,6 +584,7 @@ SRCS = Socket.h SocketCore.cc SocketCore.h Command.cc Command.h \
SimpleRandomizer.h FileAllocator.cc FileAllocator.h \
FileAllocationMonitor.cc FileAllocationMonitor.h \
ConsoleFileAllocationMonitor.cc ConsoleFileAllocationMonitor.h \
ChunkChecksumValidator.cc ChunkChecksumValidator.h \
$(am__append_1) $(am__append_2) $(am__append_3)
noinst_LIBRARIES = libaria2c.a
libaria2c_a_SOURCES = $(SRCS)
@ -697,6 +700,7 @@ distclean-compile:
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BtSuggestPieceMessage.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BtUnchokeMessage.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ByteArrayDiskWriter.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ChunkChecksumValidator.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ChunkedEncoding.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Command.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CompactPeerListProcessor.Po@am__quote@

View File

@ -36,7 +36,7 @@
#include "Util.h"
#include <algorithm>
MetalinkEntry::MetalinkEntry() {}
MetalinkEntry::MetalinkEntry():chunkChecksum(0) {}
MetalinkEntry::~MetalinkEntry() {}

View File

@ -38,6 +38,7 @@
#include "common.h"
#include "MetalinkResource.h"
#include "Checksum.h"
#include "MetalinkChunkChecksum.h"
#include <deque>
class MetalinkEntry {
@ -46,10 +47,11 @@ public:
string version;
string language;
string os;
long long int size;
uint64_t size;
Checksum checksum;
public:
MetalinkResources resources;
MetalinkChunkChecksumHandle chunkChecksum;
public:
MetalinkEntry();
~MetalinkEntry();

View File

@ -121,8 +121,11 @@ RequestInfos MetalinkRequestInfo::execute() {
// BitTorrent downloading
urls.push_back((*itr)->url);
}
RequestInfoHandle reqInfo(new UrlRequestInfo(urls, maxConnection, op));
UrlRequestInfoHandle reqInfo = new UrlRequestInfo(urls, maxConnection, op);
reqInfo->setChecksum(checksum);
reqInfo->setDigestAlgo(entry->chunkChecksum->digestAlgo);
reqInfo->setChunkChecksumLength(entry->chunkChecksum->pieceLength);
reqInfo->setChunkChecksums(entry->chunkChecksum->pieceHashes);
nextReqInfos.push_front(reqInfo);
}
} catch(RecoverableException* ex) {

View File

@ -169,7 +169,8 @@ int MultiDiskAdaptor::readData(unsigned char* data, uint32_t len, int64_t offset
return totalReadLength;
}
void MultiDiskAdaptor::hashUpdate(const DiskWriterEntryHandle entry,
void MultiDiskAdaptor::hashUpdate(MessageDigestContext& ctx,
const DiskWriterEntryHandle& entry,
int64_t offset, uint64_t length)
{
uint32_t BUFSIZE = 16*1024;
@ -190,17 +191,19 @@ void MultiDiskAdaptor::hashUpdate(const DiskWriterEntryHandle entry,
}
}
string MultiDiskAdaptor::sha1Sum(int64_t offset, uint64_t length) {
string MultiDiskAdaptor::messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo) {
int64_t fileOffset = offset;
bool reading = false;
uint32_t rem = length;
ctx.digestReset();
MessageDigestContext ctx(algo);
ctx.digestInit();
for(DiskWriterEntries::iterator itr = diskWriterEntries.begin();
itr != diskWriterEntries.end() && rem != 0; itr++) {
if(isInRange(*itr, offset) || reading) {
uint32_t readLength = calculateLength((*itr), fileOffset, rem);
hashUpdate(*itr, fileOffset, readLength);
hashUpdate(ctx, *itr, fileOffset, readLength);
rem -= readLength;
reading = true;
fileOffset = 0;

View File

@ -38,7 +38,6 @@
#include "DiskAdaptor.h"
#include "Option.h"
#include "DiskWriter.h"
#include "messageDigest.h"
#include "File.h"
class DiskWriterEntry {
@ -103,7 +102,6 @@ class MultiDiskAdaptor : public DiskAdaptor {
private:
string topDir;
uint32_t pieceLength;
MessageDigestContext ctx;
DiskWriterEntries diskWriterEntries;
const Option* option;
@ -117,17 +115,15 @@ private:
int64_t fileOffset,
uint32_t rem) const;
void hashUpdate(const DiskWriterEntryHandle entry,
void hashUpdate(MessageDigestContext& ctx,
const DiskWriterEntryHandle& entry,
int64_t offset, uint64_t length);
string getTopDirPath() const;
public:
MultiDiskAdaptor():pieceLength(0),
ctx(DIGEST_ALGO_SHA1),
option(0)
{
ctx.digestInit();
}
{}
virtual ~MultiDiskAdaptor() {}
@ -146,7 +142,8 @@ public:
virtual int readData(unsigned char* data, uint32_t len, int64_t offset);
virtual string sha1Sum(int64_t offset, uint64_t length);
virtual string messageDigest(int64_t offset, uint64_t length,
const MessageDigestContext::DigestAlgo& algo);
virtual bool fileExists();

View File

@ -47,7 +47,6 @@ MultiDiskWriter::MultiDiskWriter(int pieceLength):
MultiDiskWriter::~MultiDiskWriter() {
clearEntries();
ctx.digestFree();
}
void MultiDiskWriter::clearEntries() {

View File

@ -0,0 +1,58 @@
/* <!-- copyright */
/*
* aria2 - The high speed download utility
*
* Copyright (C) 2006 Tatsuhiro Tsujikawa
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* In addition, as a special exception, the copyright holders give
* permission to link the code of portions of this program with the
* OpenSSL library under certain conditions as described in each
* individual source file, and distribute linked combinations
* including the two.
* You must obey the GNU General Public License in all respects
* for all of the code used other than OpenSSL. If you modify
* file(s) with this exception, you may extend this exception to your
* version of the file(s), but you are not obligated to do so. If you
* do not wish to do so, delete this exception statement from your
* version. If you delete this exception statement from all source
* files in the program, then also delete it here.
*/
/* copyright --> */
#ifndef _D_NULL_FILE_ALLOCATION_MONITOR_H_
#define _D_NULL_FILE_ALLOCATION_MONITOR_H_
#include "FileAllocationMonitor.h"
class NullFileAllocationMonitor : public FileAllocationMonitor {
public:
NullFileAllocationMonitor() {}
virtual ~NullFileAllocationMonitor() {}
virtual void setFilename(const string& filename) {}
virtual void setMinValue(const uint64_t& min) {}
virtual void setMaxValue(const uint64_t& max) {}
virtual void setCurrentValue(const uint64_t& current) {}
virtual void showProgress() {}
};
#endif // _D_NULL_FILE_ALLOCATION_MONITOR_H_
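
NullFileAllocationMonitor is a Null Object: FileAllocator and
ChunkChecksumValidator now default-construct with it, so progress-reporting
calls are always safe no-ops until a real monitor such as
ConsoleFileAllocationMonitor is injected. A small illustrative sketch (not part
of the commit):

// With the null monitor installed by default, callers never need a null
// check before reporting progress.
FileAllocationMonitorHandle monitor(new NullFileAllocationMonitor());
monitor->setMaxValue(1024*1024);  // silently ignored
monitor->showProgress();          // prints nothing, never dereferences a null handle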

View File

@ -146,6 +146,15 @@ public:
*/
virtual void removeAdvertisedPiece(int elapsed) = 0;
/**
* Sets all bits in bitfield to 1.
*/
virtual void markAllPiecesDone() = 0;
/**
* Validates file integrity by comparing checksums.
*/
virtual void checkIntegrity() = 0;
};
typedef SharedHandle<PieceStorage> PieceStorageHandle;

View File

@ -36,7 +36,11 @@
#include "Util.h"
#include "FeatureConfig.h"
Request::Request():port(0), tryCount(0), keepAlive(true), isTorrent(false) {
const string Request::METHOD_GET = "get";
const string Request::METHOD_HEAD = "head";
Request::Request():port(0), tryCount(0), keepAlive(true), method(METHOD_GET), isTorrent(false) {
cookieBox = new CookieBox();
}

View File

@ -74,6 +74,7 @@ private:
int tryCount;
int trackerEvent;
bool keepAlive;
string method;
bool parseUrl(const string& url);
public:
Segment segment;
@ -111,6 +112,17 @@ public:
void setTrackerEvent(int event) { trackerEvent = event; }
int getTrackerEvent() const { return trackerEvent; }
void setMethod(const string& method) {
this->method = method;
}
const string& getMethod() const {
return method;
}
static const string METHOD_GET;
static const string METHOD_HEAD;
enum TRACKER_EVENT {
AUTO,
STARTED,

View File

@ -40,6 +40,7 @@
#include "prefs.h"
#include "LogFactory.h"
#include "BitfieldManFactory.h"
#include "ChunkChecksumValidator.h"
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
@ -51,13 +52,15 @@ SegmentMan::SegmentMan():bitfield(0),
downloadStarted(false),
dir("."),
errors(0),
diskWriter(0) {
diskWriter(0),
chunkHashLength(0),
digestAlgo(DIGEST_ALGO_SHA1)
{
logger = LogFactory::getInstance();
}
SegmentMan::~SegmentMan() {
delete bitfield;
delete diskWriter;
}
bool SegmentMan::segmentFileExists() const {
@ -233,47 +236,20 @@ void SegmentMan::initBitfield(int segmentLength, long long int totalLength) {
this->bitfield = BitfieldManFactory::getNewFactory()->createBitfieldMan(segmentLength, totalLength);
}
class FindSegmentEntryByIndex {
private:
int index;
public:
FindSegmentEntryByIndex(int index):index(index) {}
bool operator()(const SegmentEntryHandle& entry) {
return entry->segment.index == index;
}
};
class FindSegmentEntryByCuid {
private:
int cuid;
public:
FindSegmentEntryByCuid(int cuid):cuid(cuid) {}
bool operator()(const SegmentEntryHandle& entry) {
return entry->cuid == cuid;
}
};
Segment SegmentMan::checkoutSegment(int cuid, int index) {
logger->debug("Attach segment#%d to CUID#%d.", index, cuid);
bitfield->setUseBit(index);
SegmentEntries::iterator itr = find_if(usedSegmentEntries.begin(),
usedSegmentEntries.end(),
FindSegmentEntryByIndex(index));
SegmentEntryHandle segmentEntry = getSegmentEntryByIndex(index);
Segment segment;
if(itr == usedSegmentEntries.end()) {
if(segmentEntry.isNull()) {
segment = Segment(index, bitfield->getBlockLength(index),
bitfield->getBlockLength());
SegmentEntryHandle entry =
SegmentEntryHandle(new SegmentEntry(cuid, segment));
SegmentEntryHandle entry = new SegmentEntry(cuid, segment);
usedSegmentEntries.push_back(entry);
} else {
(*itr)->cuid = cuid;
segment = (*itr)->segment;
segmentEntry->cuid = cuid;
segment = segmentEntry->segment;
}
logger->debug("index=%d, length=%d, segmentLength=%d, writtenLength=%d",
segment.index, segment.length, segment.segmentLength,
segment.writtenLength);
@ -286,13 +262,11 @@ bool SegmentMan::onNullBitfield(Segment& segment, int cuid) {
usedSegmentEntries.push_back(SegmentEntryHandle(new SegmentEntry(cuid, segment)));
return true;
} else {
SegmentEntries::iterator itr = find_if(usedSegmentEntries.begin(),
usedSegmentEntries.end(),
FindSegmentEntryByCuid(cuid));
if(itr == usedSegmentEntries.end()) {
SegmentEntryHandle segmentEntry = getSegmentEntryByCuid(cuid);
if(segmentEntry.isNull()) {
return false;
} else {
segment = (*itr)->segment;
segment = segmentEntry->segment;
return true;
}
}
@ -302,7 +276,7 @@ SegmentEntryHandle SegmentMan::findSlowerSegmentEntry(const PeerStatHandle& peer
int speed = (int)(peerStat->getAvgDownloadSpeed()*0.8);
SegmentEntryHandle slowSegmentEntry(0);
for(SegmentEntries::const_iterator itr = usedSegmentEntries.begin();
itr != usedSegmentEntries.end(); itr++) {
itr != usedSegmentEntries.end(); ++itr) {
const SegmentEntryHandle& segmentEntry = *itr;
if(segmentEntry->cuid == 0) {
continue;
@ -326,11 +300,10 @@ bool SegmentMan::getSegment(Segment& segment, int cuid) {
if(!bitfield) {
return onNullBitfield(segment, cuid);
}
SegmentEntries::iterator itr = find_if(usedSegmentEntries.begin(),
usedSegmentEntries.end(),
FindSegmentEntryByCuid(cuid));
if(itr != usedSegmentEntries.end()) {
segment = (*itr)->segment;
SegmentEntryHandle segmentEntry = getSegmentEntryByCuid(cuid);
if(!segmentEntry.isNull()) {
segment = segmentEntry->segment;
return true;
}
int index = bitfield->getSparseMissingUnusedIndex();
@ -378,13 +351,11 @@ bool SegmentMan::updateSegment(int cuid, const Segment& segment) {
if(segment.isNull()) {
return false;
}
SegmentEntries::iterator itr = find_if(usedSegmentEntries.begin(),
usedSegmentEntries.end(),
FindSegmentEntryByCuid(cuid));
if(itr == usedSegmentEntries.end()) {
SegmentEntryHandle segmentEntry = getSegmentEntryByCuid(cuid);
if(segmentEntry.isNull()) {
return false;
} else {
(*itr)->segment = segment;
segmentEntry->segment = segment;
return true;
}
}
@ -425,9 +396,7 @@ bool SegmentMan::completeSegment(int cuid, const Segment& segment) {
initBitfield(option->getAsInt(PREF_SEGMENT_SIZE), segment.writtenLength);
bitfield->setAllBit();
}
SegmentEntries::iterator itr = find_if(usedSegmentEntries.begin(),
usedSegmentEntries.end(),
FindSegmentEntryByCuid(cuid));
SegmentEntries::iterator itr = getSegmentEntryIteratorByCuid(cuid);
if(itr == usedSegmentEntries.end()) {
return false;
} else {
@ -463,7 +432,7 @@ void SegmentMan::registerPeerStat(const PeerStatHandle& peerStat) {
}
}
int SegmentMan::calculateDownloadSpeed() const {
uint32_t SegmentMan::calculateDownloadSpeed() const {
int speed = 0;
for(PeerStats::const_iterator itr = peerStats.begin();
itr != peerStats.end(); itr++) {
@ -483,3 +452,78 @@ bool SegmentMan::shouldCancelDownloadForSafety() {
return fileExists() && !segmentFileExists() &&
option->get(PREF_FORCE_TRUNCATE) != V_TRUE;
}
void SegmentMan::markAllPiecesDone()
{
if(bitfield) {
bitfield->setAllBit();
}
}
void SegmentMan::checkIntegrity()
{
logger->notice("Validating file %s",
getFilePath().c_str());
ChunkChecksumValidator v;
v.setDigestAlgo(digestAlgo);
v.setDiskWriter(diskWriter);
v.setFileAllocationMonitor(FileAllocationMonitorFactory::getFactory()->createNewMonitor());
v.validate(bitfield, pieceHashes, chunkHashLength);
}
bool SegmentMan::isChunkChecksumValidationReady() const {
return bitfield &&
pieceHashes.size()*chunkHashLength == bitfield->getBlockLength()*(bitfield->getMaxIndex()+1);
}
void SegmentMan::tryChunkChecksumValidation(const Segment& segment)
{
if(!isChunkChecksumValidationReady()) {
return;
}
int32_t hashStartIndex;
int32_t hashEndIndex;
Util::indexRange(hashStartIndex, hashEndIndex,
segment.getPosition(),
segment.writtenLength,
chunkHashLength);
if(hashStartIndex*chunkHashLength < segment.getPosition() && !bitfield->isBitSet(segment.index-1)) {
++hashStartIndex;
}
if((hashEndIndex+1)*chunkHashLength > segment.getPosition()+segment.segmentLength && !bitfield->isBitSet(segment.index+1)) {
--hashEndIndex;
}
logger->debug("hashStartIndex=%d, hashEndIndex=%d",
hashStartIndex, hashEndIndex);
if(hashStartIndex > hashEndIndex) {
logger->debug("No chunk to verify.");
return;
}
int64_t hashOffset = hashStartIndex*chunkHashLength;
int32_t startIndex;
int32_t endIndex;
Util::indexRange(startIndex, endIndex,
hashOffset,
(hashEndIndex-hashStartIndex+1)*chunkHashLength,
segment.segmentLength);
logger->debug("startIndex=%d, endIndex=%d", startIndex, endIndex);
if(bitfield->isBitRangeSet(startIndex, endIndex)) {
for(int32_t index = hashStartIndex; index <= hashEndIndex; ++index) {
int64_t offset = index*chunkHashLength;
uint32_t dataLength =
offset+chunkHashLength <= totalSize ? chunkHashLength : totalSize-offset;
string actualChecksum = diskWriter->messageDigest(offset, dataLength, digestAlgo);
string expectedChecksum = pieceHashes.at(index);
if(expectedChecksum == actualChecksum) {
logger->info("Chunk checksum validation succeeded.");
} else {
logger->error("Chunk checksum validation failed. checksumIndex=%d, offset=%lld, length=%u, expected=%s, actual=%s",
index, offset, dataLength,
expectedChecksum.c_str(), actualChecksum.c_str());
logger->info("Unset bit from %d to %d(inclusive)", startIndex, endIndex);
bitfield->unsetBitRange(startIndex, endIndex);
break;
}
}
}
}
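
Taken together, tryChunkChecksumValidation only verifies chunk hashes that are fully
covered by already-downloaded data: a hash chunk that starts before or ends after the
just-finished segment is kept only when the neighbouring segment is also complete, and
a mismatch clears every segment bit the bad chunk overlaps. A comment-only
walk-through with illustrative numbers (chunkHashLength = 150, segmentLength = 100;
these values are not from the patch):

// Segment index 1 just finished: bytes 100-199.
// Util::indexRange(hashStart, hashEnd, 100, 100, 150) -> hashStart = 0, hashEnd = 1.
//   chunk 0 covers bytes   0-149: it starts before byte 100, so it is skipped
//     unless segment 0 (bytes 0-99) is already downloaded.
//   chunk 1 covers bytes 150-299: it ends past byte 199, so it is skipped
//     unless segment 2 (bytes 200-299) is already downloaded.
// If both neighbours are done, the surviving hash bytes 0-299 are mapped back to
// segment indexes with Util::indexRange(start, end, 0, 300, 100) -> start = 0, end = 2.
// Only when bitfield->isBitRangeSet(0, 2) holds is each chunk hashed with
// diskWriter->messageDigest() and compared against pieceHashes; on a mismatch
// bitfield->unsetBitRange(0, 2) schedules those segments for re-download.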

View File

@ -77,6 +77,39 @@ private:
bool onNullBitfield(Segment& segment, int cuid);
Segment checkoutSegment(int cuid, int index);
SegmentEntryHandle findSlowerSegmentEntry(const PeerStatHandle& peerStat) const;
SegmentEntryHandle getSegmentEntryByIndex(int index) {
for(SegmentEntries::const_iterator itr = usedSegmentEntries.begin();
itr != usedSegmentEntries.end(); ++itr) {
const SegmentEntryHandle& segmentEntry = *itr;
if(segmentEntry->segment.index == index) {
return segmentEntry;
}
}
return 0;
}
SegmentEntryHandle getSegmentEntryByCuid(int cuid) {
for(SegmentEntries::const_iterator itr = usedSegmentEntries.begin();
itr != usedSegmentEntries.end(); ++itr) {
const SegmentEntryHandle& segmentEntry = *itr;
if(segmentEntry->cuid == cuid) {
return segmentEntry;
}
}
return 0;
}
SegmentEntries::iterator getSegmentEntryIteratorByCuid(int cuid) {
for(SegmentEntries::iterator itr = usedSegmentEntries.begin();
itr != usedSegmentEntries.end(); ++itr) {
const SegmentEntryHandle& segmentEntry = *itr;
if(segmentEntry->cuid == cuid) {
return itr;
}
}
return usedSegmentEntries.end();
}
public:
/**
* The total number of bytes to download.
@ -121,9 +154,13 @@ public:
int errors;
const Option* option;
DiskWriter* diskWriter;
DiskWriterHandle diskWriter;
Requests reserved;
Strings pieceHashes;
uint32_t chunkHashLength;
MessageDigestContext::DigestAlgo digestAlgo;
SegmentMan();
~SegmentMan();
@ -221,45 +258,37 @@ public:
*/
void registerPeerStat(const PeerStatHandle& peerStat);
class FindPeerStat {
private:
int cuid;
public:
FindPeerStat(int cuid):cuid(cuid) {}
bool operator()(const PeerStatHandle& peerStat) {
if(peerStat->getCuid() == cuid) {
return true;
} else {
return false;
}
}
};
/**
* Returns peerStat whose cuid is given cuid. If it is not found, returns
* PeerStatHandle(0).
* 0.
*/
PeerStatHandle getPeerStat(int cuid) const {
PeerStats::const_iterator itr = find_if(peerStats.begin(), peerStats.end(),
FindPeerStat(cuid));
if(itr == peerStats.end()) {
// TODO
return PeerStatHandle(0);
} else {
return *itr;
for(PeerStats::const_iterator itr = peerStats.begin(); itr != peerStats.end(); ++itr) {
const PeerStatHandle& peerStat = *itr;
if(peerStat->getCuid() == cuid) {
return peerStat;
}
}
return 0;
}
/**
* Returns current download speed in bytes per sec.
*/
int calculateDownloadSpeed() const;
uint32_t calculateDownloadSpeed() const;
bool fileExists();
bool shouldCancelDownloadForSafety();
void markAllPiecesDone();
void checkIntegrity();
void tryChunkChecksumValidation(const Segment& segment);
bool isChunkChecksumValidationReady() const;
};
#endif // _D_SEGMENT_MAN_H_

View File

@ -40,9 +40,8 @@ ShaVisitor::ShaVisitor():
ctx.digestInit();
}
ShaVisitor::~ShaVisitor() {
ctx.digestFree();
}
ShaVisitor::~ShaVisitor() {}
void ShaVisitor::visit(const Data* d) {
if(d->isNumber()) {

View File

@ -67,12 +67,22 @@ RequestInfos TorrentRequestInfo::execute() {
// load .aria2 file if it exists.
BT_PROGRESS_INFO_FILE(btContext)->load();
PIECE_STORAGE(btContext)->getDiskAdaptor()->openExistingFile();
if(op->get(PREF_CHECK_INTEGRITY) == V_TRUE) {
PIECE_STORAGE(btContext)->checkIntegrity();
}
} else {
if(PIECE_STORAGE(btContext)->getDiskAdaptor()->fileExists() &&
op->get(PREF_FORCE_TRUNCATE) != V_TRUE) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
PIECE_STORAGE(btContext)->getDiskAdaptor()->getFilePath().c_str(),
BT_PROGRESS_INFO_FILE(btContext)->getFilename().c_str());
if(PIECE_STORAGE(btContext)->getDiskAdaptor()->fileExists()) {
if(op->get(PREF_FORCE_TRUNCATE) != V_TRUE) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
PIECE_STORAGE(btContext)->getDiskAdaptor()->getFilePath().c_str(),
BT_PROGRESS_INFO_FILE(btContext)->getFilename().c_str());
} else {
PIECE_STORAGE(btContext)->getDiskAdaptor()->openExistingFile();
if(op->get(PREF_CHECK_INTEGRITY) == V_TRUE) {
PIECE_STORAGE(btContext)->markAllPiecesDone();
PIECE_STORAGE(btContext)->checkIntegrity();
}
}
} else {
PIECE_STORAGE(btContext)->getDiskAdaptor()->initAndOpenFile();
}
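
The net effect of the reworked branch, summarized as a sketch (comments only, not
literal code):

//   .aria2 progress file exists      -> load it, openExistingFile(),
//                                       checkIntegrity() if PREF_CHECK_INTEGRITY
//   file exists, no .aria2,
//     PREF_FORCE_TRUNCATE not set    -> throw FatalException(EX_FILE_ALREADY_EXISTS)
//   file exists, no .aria2,
//     PREF_FORCE_TRUNCATE set        -> openExistingFile(); with PREF_CHECK_INTEGRITY,
//                                       markAllPiecesDone() then checkIntegrity()
//                                       rebuilds the bitfield from the data on disk
//   file does not exist              -> initAndOpenFile()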

View File

@ -38,6 +38,13 @@
#include "prefs.h"
#include "DownloadEngineFactory.h"
#include "RecoverableException.h"
#include "FatalException.h"
#include "message.h"
std::ostream& operator<<(std::ostream& o, const HeadResult& hr) {
o << "filename = " << hr.filename << ", " << "totalLength = " << hr.totalLength;
return o;
}
extern volatile sig_atomic_t haltRequested;
@ -81,18 +88,22 @@ private:
Requests* requestsPtr;
string referer;
int split;
string method;
public:
CreateRequest(Requests* requestsPtr,
const string& referer,
int split)
int split,
const string& method = Request::METHOD_GET)
:requestsPtr(requestsPtr),
referer(referer),
split(split) {}
split(split),
method(method) {}
void operator()(const string& url) {
for(int s = 1; s <= split; s++) {
RequestHandle req;
req->setReferer(referer);
req->setMethod(method);
if(req->setUrl(url)) {
requestsPtr->push_back(req);
} else {
@ -109,6 +120,32 @@ void UrlRequestInfo::printUrls(const Strings& urls) const {
}
}
HeadResult UrlRequestInfo::getHeadResult() {
Requests requests;
for_each(urls.begin(), urls.end(),
CreateRequest(&requests,
op->get(PREF_REFERER),
1,
Request::METHOD_HEAD));
Requests reserved(requests.begin()+1, requests.end());
requests.erase(requests.begin()+1, requests.end());
SharedHandle<ConsoleDownloadEngine> e(DownloadEngineFactory::newConsoleEngine(op, requests, reserved));
HeadResult hr;
try {
e->run();
hr.filename = e->segmentMan->filename;
hr.totalLength = e->segmentMan->totalSize;
} catch(RecoverableException *ex) {
logger->error("Exception caught", ex);
delete ex;
fail = true;
}
return hr;
}
RequestInfos UrlRequestInfo::execute() {
Requests requests;
Requests reserved;
@ -117,11 +154,49 @@ RequestInfos UrlRequestInfo::execute() {
CreateRequest(&requests,
op->get(PREF_REFERER),
op->getAsInt(PREF_SPLIT)));
HeadResult hr = getHeadResult();
if(fail) {
return RequestInfos();
}
logger->info("Head result: filename=%s, total length=%s",
hr.filename.c_str(), Util::ullitos(hr.totalLength, true).c_str());
adjustRequestSize(requests, reserved, maxConnections);
SharedHandle<ConsoleDownloadEngine> e(DownloadEngineFactory::newConsoleEngine(op, requests, reserved));
e->segmentMan->filename = hr.filename;
e->segmentMan->totalSize = hr.totalLength;
e->segmentMan->downloadStarted = true;
e->segmentMan->digestAlgo = digestAlgo;
e->segmentMan->chunkHashLength = chunkChecksumLength;
e->segmentMan->pieceHashes = chunkChecksums;
if(e->segmentMan->segmentFileExists()) {
e->segmentMan->load();
e->segmentMan->diskWriter->openExistingFile(e->segmentMan->getFilePath());
if(e->option->get(PREF_CHECK_INTEGRITY) == V_TRUE) {
e->segmentMan->checkIntegrity();
}
} else {
if(e->segmentMan->shouldCancelDownloadForSafety()) {
throw new FatalException(EX_FILE_ALREADY_EXISTS,
e->segmentMan->getFilePath().c_str(),
e->segmentMan->getSegmentFilePath().c_str());
}
e->segmentMan->initBitfield(e->option->getAsInt(PREF_SEGMENT_SIZE),
e->segmentMan->totalSize);
if(e->segmentMan->fileExists() && e->option->get(PREF_CHECK_INTEGRITY) == V_TRUE) {
e->segmentMan->diskWriter->openExistingFile(e->segmentMan->getFilePath());
e->segmentMan->markAllPiecesDone();
e->segmentMan->checkIntegrity();
} else {
e->segmentMan->diskWriter->initAndOpenFile(e->segmentMan->getFilePath(),
e->segmentMan->totalSize);
}
}
Util::setGlobalSignalHandler(SIGINT, handler, 0);
Util::setGlobalSignalHandler(SIGTERM, handler, 0);

View File

@ -37,24 +37,53 @@
#include "RequestInfo.h"
class HeadResult {
public:
HeadResult():totalLength(0) {}
string filename;
uint64_t totalLength;
};
std::ostream& operator<<(std::ostream& o, const HeadResult& hr);
class UrlRequestInfo : public RequestInfo {
private:
Strings urls;
int maxConnections;
MessageDigestContext::DigestAlgo digestAlgo;
uint32_t chunkChecksumLength;
Strings chunkChecksums;
RequestInfo* createNextRequestInfo() const;
void adjustRequestSize(Requests& requests,
Requests& reserved,
int maxConnections) const;
void printUrls(const Strings& urls) const;
HeadResult getHeadResult();
public:
UrlRequestInfo(const Strings& urls, int maxConnections, Option* op):
RequestInfo(op),
urls(urls),
maxConnections(maxConnections) {}
maxConnections(maxConnections),
digestAlgo(DIGEST_ALGO_SHA1),
chunkChecksumLength(0) {}
virtual ~UrlRequestInfo() {}
virtual RequestInfos execute();
void setDigestAlgo(const MessageDigestContext::DigestAlgo& algo) {
this->digestAlgo = algo;
}
void setChunkChecksumLength(uint32_t chunkChecksumLength) {
this->chunkChecksumLength = chunkChecksumLength;
}
void setChunkChecksums(const Strings& chunkChecksums) {
this->chunkChecksums = chunkChecksums;
}
};
typedef SharedHandle<UrlRequestInfo> UrlRequestInfoHandle;
#endif // _D_URL_REQUEST_INFO_H_
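
A hedged sketch of how a caller that has parsed a Metalink <pieces> block might feed
the new setters (chunkChecksum is a hypothetical MetalinkChunkChecksumHandle; names
outside this header are assumptions):

// Illustrative wiring only; not part of the patch.
UrlRequestInfoHandle reqInfo = new UrlRequestInfo(urls, maxConnections, op);
if(!chunkChecksum.isNull()) {
  reqInfo->setDigestAlgo(chunkChecksum->digestAlgo);            // e.g. DIGEST_ALGO_SHA1
  reqInfo->setChunkChecksumLength(chunkChecksum->pieceLength);  // length in bytes of each piece
  reqInfo->setChunkChecksums(chunkChecksum->pieceHashes);       // hex digests, in order
}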

View File

@ -460,7 +460,6 @@ void Util::sha1Sum(unsigned char* digest, const void* data, int dataLength) {
ctx.digestInit();
ctx.digestUpdate(data, dataLength);
ctx.digestFinal(digest);
ctx.digestFree();
}
string Util::simpleMessageDigest(const string& data) {
@ -501,7 +500,6 @@ void Util::fileChecksum(const string& filename, unsigned char* digest,
}
}
ctx.digestFinal(digest);
ctx.digestFree();
}
#endif // ENABLE_MESSAGE_DIGEST
@ -638,3 +636,11 @@ void Util::setGlobalSignalHandler(int signal, void (*handler)(int), int flags) {
sigemptyset(&sigact.sa_mask);
sigaction(signal, &sigact, NULL);
}
void Util::indexRange(int32_t& startIndex, int32_t& endIndex,
int64_t offset, uint32_t srcLength, uint32_t destLength)
{
startIndex = offset/destLength;
endIndex = (offset+srcLength-1)/destLength;
}
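
A self-contained sanity check of the new helper, using illustrative values chosen to
match the 250-byte test fixture added later in this patch:

#include <cassert>
#include <stdint.h>
#include "Util.h"

int main() {
  int32_t start, end;
  // 250 bytes split into 100-byte chunks touch chunk indexes 0..2.
  Util::indexRange(start, end, 0, 250, 100);
  assert(start == 0 && end == 2);
  // Bytes 150-249 (offset 150, length 100) touch chunks 1 and 2.
  Util::indexRange(start, end, 150, 100, 100);
  assert(start == 1 && end == 2);
  return 0;
}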

View File

@ -134,6 +134,10 @@ public:
static bool isNumbersAndDotsNotation(const string& name);
static void setGlobalSignalHandler(int signal, void (*handler)(int), int flags);
static void indexRange(int32_t& startIndex, int32_t& endIndex,
int64_t offset,
uint32_t srcLength, uint32_t destLength);
};
#endif // _D_UTIL_H_

View File

@ -99,7 +99,7 @@ MetalinkEntryHandle Xml2MetalinkProcessor::getEntry(const string& xpath) {
MetalinkEntryHandle entry(new MetalinkEntry());
entry->filename = filename;
entry->size = STRTOLL(xpathContent(xpath+"/m:size").c_str());
entry->version = Util::trim(xpathContent(xpath+"/m:version"));
entry->language = Util::trim(xpathContent(xpath+"/m:language"));
entry->os = Util::trim(xpathContent(xpath+"/m:os"));
@ -116,6 +116,8 @@ MetalinkEntryHandle Xml2MetalinkProcessor::getEntry(const string& xpath) {
entry->checksum.setDigestAlgo(DIGEST_ALGO_MD5);
}
}
entry->chunkChecksum = getPieceHash(xpath+"/m:verification/m:pieces[@type=\"sha1\"]", entry->size);
#endif // ENABLE_MESSAGE_DIGEST
for(int index = 1; 1; index++) {
MetalinkResourceHandle resource(getResource(xpath+"/m:resources/m:url["+Util::itos(index)+"]"));
@ -128,6 +130,33 @@ MetalinkEntryHandle Xml2MetalinkProcessor::getEntry(const string& xpath) {
return entry;
}
MetalinkChunkChecksumHandle Xml2MetalinkProcessor::getPieceHash(const string& xpath,
uint64_t totalSize)
{
MetalinkChunkChecksumHandle chunkChecksum = new MetalinkChunkChecksum();
chunkChecksum->digestAlgo = DIGEST_ALGO_SHA1;
xmlXPathObjectPtr result = xpathEvaluation(xpath);
if(!result) {
return 0;
}
xmlNodeSetPtr nodeSet = result->nodesetval;
xmlNodePtr node = nodeSet->nodeTab[0];
chunkChecksum->pieceLength = STRTOLL(Util::trim(xmlAttribute(node, "length")).c_str());
xmlXPathFreeObject(result);
uint64_t numPiece =
(totalSize+chunkChecksum->pieceLength-1)/chunkChecksum->pieceLength;
for(uint64_t i = 0; i < numPiece; i++) {
string pieceHash = Util::trim(xpathContent(xpath+"/m:hash[@piece=\""+Util::ullitos(i)+"\"]"));
if(pieceHash == "") {
throw new DlAbortEx("Piece hash missing. index=%u", i);
}
chunkChecksum->pieceHashes.push_back(pieceHash);
}
return chunkChecksum;
}
MetalinkResourceHandle Xml2MetalinkProcessor::getResource(const string& xpath) {
xmlXPathObjectPtr result = xpathEvaluation(xpath);
if(!result) {

View File

@ -46,6 +46,8 @@ private:
MetalinkEntryHandle getEntry(const string& xpath);
MetalinkResourceHandle getResource(const string& xpath);
MetalinkChunkChecksumHandle getPieceHash(const string& xpath,
uint64_t totalSize);
xmlXPathObjectPtr xpathEvaluation(const string& xpath);
string xpathContent(const string& xpath);

View File

@ -344,6 +344,8 @@ int main(int argc, char* argv[]) {
op->put(PREF_TRACKER_MAX_TRIES, "10");
op->put(PREF_FILE_ALLOCATION, V_NONE);
op->put(PREF_FORCE_TRUNCATE, V_FALSE);
op->put(PREF_REALTIME_CHUNK_CHECKSUM, V_TRUE);
op->put(PREF_CHECK_INTEGRITY, V_TRUE);
while(1) {
int optIndex = 0;
int lopt;

View File

@ -75,6 +75,10 @@ public:
MessageDigestContext(DigestAlgo algo):
algo(algo) {}
~MessageDigestContext()
{
digestFree();
}
#ifdef HAVE_LIBSSL
void digestInit() {
EVP_MD_CTX_init(&ctx);

View File

@ -86,6 +86,10 @@
# define V_PREALLOC "prealloc"
// value: true | false
#define PREF_FORCE_TRUNCATE "force_truncate"
// value: true | false
#define PREF_REALTIME_CHUNK_CHECKSUM "realtime_chunk_checksum"
// value: true | false
#define PREF_CHECK_INTEGRITY "check_integrity"
/**
* FTP related preferences

View File

@ -0,0 +1,135 @@
#include "ChunkChecksumValidator.h"
#include "DefaultDiskWriter.h"
#include <cppunit/extensions/HelperMacros.h>
using namespace std;
class ChunkChecksumValidatorTest:public CppUnit::TestFixture {
CPPUNIT_TEST_SUITE(ChunkChecksumValidatorTest);
CPPUNIT_TEST(testValidate);
CPPUNIT_TEST(testValidate2);
CPPUNIT_TEST(testValidate3);
CPPUNIT_TEST(testValidate4);
CPPUNIT_TEST_SUITE_END();
private:
static const char* csArray[];// = { "29b0e7878271645fffb7eec7db4a7473a1c00bc1",
// "4df75a661cb7eb2733d9cdaa7f772eae3a4e2976",
// "0a4ea2f7dd7c52ddf2099a444ab2184b4d341bdb" };
public:
void setUp() {
}
void testValidate();
void testValidate2();
void testValidate3();
void testValidate4();
};
CPPUNIT_TEST_SUITE_REGISTRATION( ChunkChecksumValidatorTest );
const char* ChunkChecksumValidatorTest::csArray[] = { "29b0e7878271645fffb7eec7db4a7473a1c00bc1",
"4df75a661cb7eb2733d9cdaa7f772eae3a4e2976",
"0a4ea2f7dd7c52ddf2099a444ab2184b4d341bdb" };
void ChunkChecksumValidatorTest::testValidate() {
BitfieldMan bitfieldMan(100, 250);
bitfieldMan.setAllBit();
Strings checksums(&csArray[0], &csArray[3]);
DefaultDiskWriterHandle diskWriter = new DefaultDiskWriter();
diskWriter->openExistingFile("chunkChecksumTestFile250.txt");
ChunkChecksumValidator validator;
validator.setDiskWriter(diskWriter);
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isAllBitSet());
checksums[1] = "ffffffffffffffffffffffffffffffffffffffff";
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isBitSet(0));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(1));
CPPUNIT_ASSERT(bitfieldMan.isBitSet(2));
}
void ChunkChecksumValidatorTest::testValidate2() {
BitfieldMan bitfieldMan(50, 250);
bitfieldMan.setAllBit();
Strings checksums(&csArray[0], &csArray[3]);
DefaultDiskWriterHandle diskWriter = new DefaultDiskWriter();
diskWriter->openExistingFile("chunkChecksumTestFile250.txt");
ChunkChecksumValidator validator;
validator.setDiskWriter(diskWriter);
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isAllBitSet());
checksums[1] = "ffffffffffffffffffffffffffffffffffffffff";
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isBitSet(0));
CPPUNIT_ASSERT(bitfieldMan.isBitSet(1));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(2));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(3));
CPPUNIT_ASSERT(bitfieldMan.isBitSet(4));
}
void ChunkChecksumValidatorTest::testValidate3() {
BitfieldMan bitfieldMan(50, 250);
bitfieldMan.setAllBit();
Strings checksums;
checksums.push_back("898a81b8e0181280ae2ee1b81e269196d91e869a");
DefaultDiskWriterHandle diskWriter = new DefaultDiskWriter();
diskWriter->openExistingFile("chunkChecksumTestFile250.txt");
ChunkChecksumValidator validator;
validator.setDiskWriter(diskWriter);
validator.validate(&bitfieldMan, checksums, 250);
CPPUNIT_ASSERT(bitfieldMan.isAllBitSet());
checksums[0] = "ffffffffffffffffffffffffffffffffffffffff";
validator.validate(&bitfieldMan, checksums, 250);
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(0));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(1));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(2));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(3));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(4));
}
void ChunkChecksumValidatorTest::testValidate4() {
BitfieldMan bitfieldMan(70, 250);
bitfieldMan.setAllBit();
Strings checksums(&csArray[0], &csArray[3]);
DefaultDiskWriterHandle diskWriter = new DefaultDiskWriter();
diskWriter->openExistingFile("chunkChecksumTestFile250.txt");
ChunkChecksumValidator validator;
validator.setDiskWriter(diskWriter);
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isAllBitSet());
checksums[1] = "ffffffffffffffffffffffffffffffffffffffff";
validator.validate(&bitfieldMan, checksums, 100);
CPPUNIT_ASSERT(bitfieldMan.isBitSet(0));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(1));
CPPUNIT_ASSERT(!bitfieldMan.isBitSet(2));
CPPUNIT_ASSERT(bitfieldMan.isBitSet(3));
}

View File

@ -31,7 +31,6 @@ void ConsoleFileAllocationMonitorTest::testShowProgress() {
for(uint64_t i = monitor.getMinValue(); i <= monitor.getMaxValue(); i += 1234343) {
monitor.setCurrentValue(i);
monitor.showProgress();
usleep(5);
}
monitor.setCurrentValue(monitor.getMaxValue());
monitor.showProgress();

View File

@ -7,7 +7,7 @@ using namespace std;
class DefaultDiskWriterTest:public CppUnit::TestFixture {
CPPUNIT_TEST_SUITE(DefaultDiskWriterTest);
CPPUNIT_TEST(testSha1Sum);
CPPUNIT_TEST(testMessageDigest);
CPPUNIT_TEST_SUITE_END();
private:
@ -15,21 +15,21 @@ public:
void setUp() {
}
void testSha1Sum();
void testMessageDigest();
};
CPPUNIT_TEST_SUITE_REGISTRATION( DefaultDiskWriterTest );
void DefaultDiskWriterTest::testSha1Sum() {
void DefaultDiskWriterTest::testMessageDigest() {
DefaultDiskWriter dw;
dw.openExistingFile("4096chunk.txt");
CPPUNIT_ASSERT_EQUAL(string("608cabc0f2fa18c260cafd974516865c772363d5"),
dw.sha1Sum(0, 4096));
dw.messageDigest(0, 4096, DIGEST_ALGO_SHA1));
CPPUNIT_ASSERT_EQUAL(string("7a4a9ae537ebbbb826b1060e704490ad0f365ead"),
dw.sha1Sum(5, 100));
dw.messageDigest(5, 100, DIGEST_ALGO_SHA1));
dw.closeFile();
}

View File

@ -58,7 +58,8 @@ aria2c_SOURCES = AllTest.cc\
FixedNumberRandomizer.h\
MockBtMessageFactory.h\
MockBtMessage.h\
ConsoleFileAllocationMonitorTest.cc
ConsoleFileAllocationMonitorTest.cc\
ChunkChecksumValidatorTest.cc
#aria2c_CXXFLAGS = ${CPPUNIT_CFLAGS} -I../src -I../lib -Wall -D_FILE_OFFSET_BITS=64
#aria2c_LDFLAGS = ${CPPUNIT_LIBS}

View File

@ -89,7 +89,8 @@ am_aria2c_OBJECTS = AllTest.$(OBJEXT) RequestTest.$(OBJEXT) \
BtSuggestPieceMessageTest.$(OBJEXT) \
BtUnchokeMessageTest.$(OBJEXT) \
BtHandshakeMessageTest.$(OBJEXT) \
ConsoleFileAllocationMonitorTest.$(OBJEXT)
ConsoleFileAllocationMonitorTest.$(OBJEXT) \
ChunkChecksumValidatorTest.$(OBJEXT)
aria2c_OBJECTS = $(am_aria2c_OBJECTS)
am__DEPENDENCIES_1 =
aria2c_DEPENDENCIES = ../src/libaria2c.a $(am__DEPENDENCIES_1)
@ -311,7 +312,8 @@ aria2c_SOURCES = AllTest.cc\
FixedNumberRandomizer.h\
MockBtMessageFactory.h\
MockBtMessage.h\
ConsoleFileAllocationMonitorTest.cc
ConsoleFileAllocationMonitorTest.cc\
ChunkChecksumValidatorTest.cc
#aria2c_CXXFLAGS = ${CPPUNIT_CFLAGS} -I../src -I../lib -Wall -D_FILE_OFFSET_BITS=64
#aria2c_LDFLAGS = ${CPPUNIT_LIBS}
@ -396,6 +398,7 @@ distclean-compile:
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BtRequestMessageTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BtSuggestPieceMessageTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/BtUnchokeMessageTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ChunkChecksumValidatorTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ChunkedEncodingTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ConsoleFileAllocationMonitorTest.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CookieBoxTest.Po@am__quote@

View File

@ -43,6 +43,10 @@ public:
return pieceHashes.at(index);
}
virtual const Strings& getPieceHashes() const {
return pieceHashes;
}
void addPieceHash(const string pieceHash) {
pieceHashes.push_back(pieceHash);
}

View File

@ -146,6 +146,10 @@ public:
}
virtual void removeAdvertisedPiece(int elapsed) {}
virtual void markAllPiecesDone() {}
virtual void checkIntegrity() {}
};
typedef SharedHandle<MockPieceStorage> MockPieceStorageHandle;

View File

@ -9,7 +9,7 @@ class MultiDiskAdaptorTest:public CppUnit::TestFixture {
CPPUNIT_TEST_SUITE(MultiDiskAdaptorTest);
CPPUNIT_TEST(testWriteData);
CPPUNIT_TEST(testReadData);
CPPUNIT_TEST(testSha1Sum);
CPPUNIT_TEST(testMessageDigest);
CPPUNIT_TEST_SUITE_END();
private:
Option* option;
@ -30,7 +30,7 @@ public:
void testWriteData();
void testReadData();
void testSha1Sum();
void testMessageDigest();
};
@ -134,7 +134,7 @@ void MultiDiskAdaptorTest::testReadData() {
CPPUNIT_ASSERT_EQUAL(string("1234567890ABCDEFGHIJKLMNO"), string((char*)buf));
}
void MultiDiskAdaptorTest::testSha1Sum() {
void MultiDiskAdaptorTest::testMessageDigest() {
FileEntryHandle entry1(new FileEntry("file1r.txt", 15, 0));
FileEntryHandle entry2(new FileEntry("file2r.txt", 7, 15));
FileEntryHandle entry3(new FileEntry("file3r.txt", 3, 22));
@ -146,11 +146,11 @@ void MultiDiskAdaptorTest::testSha1Sum() {
adaptor->setFileEntries(entries);
adaptor->openFile();
string sha1sum = adaptor->sha1Sum(0, 25);
string sha1sum = adaptor->messageDigest(0, 25, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("76495faf71ca63df66dce99547d2c58da7266d9e"), sha1sum);
sha1sum = adaptor->sha1Sum(15, 7);
sha1sum = adaptor->messageDigest(15, 7, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("737660d816fb23c2d5bc74f62d9b01b852b2aaca"), sha1sum);
sha1sum = adaptor->sha1Sum(10, 14);
sha1sum = adaptor->messageDigest(10, 14, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("6238bf61dd8df8f77156b2378e9e39cd3939680c"), sha1sum);
adaptor->closeFile();
}

View File

@ -9,7 +9,7 @@ class MultiDiskWriterTest:public CppUnit::TestFixture {
CPPUNIT_TEST_SUITE(MultiDiskWriterTest);
CPPUNIT_TEST(testWriteData);
CPPUNIT_TEST(testReadData);
CPPUNIT_TEST(testSha1Sum);
CPPUNIT_TEST(testMessageDigest);
CPPUNIT_TEST_SUITE_END();
private:
@ -19,7 +19,7 @@ public:
void testWriteData();
void testReadData();
void testSha1Sum();
void testMessageDigest();
};
@ -120,7 +120,7 @@ void MultiDiskWriterTest::testReadData() {
CPPUNIT_ASSERT_EQUAL(string("1234567890ABCDEFGHIJKLMNO"), string(buf));
}
void MultiDiskWriterTest::testSha1Sum() {
void MultiDiskWriterTest::testMessageDigest() {
FileEntryHandle entry1(new FileEntry("file1r.txt", 15, 0));
FileEntryHandle entry2(new FileEntry("file2r.txt", 7, 15));
FileEntryHandle entry3(new FileEntry("file3r.txt", 3, 22));
@ -132,11 +132,11 @@ void MultiDiskWriterTest::testSha1Sum() {
dw.setFileEntries(entries);
dw.openFile(".");
string sha1sum = dw.sha1Sum(0, 25);
string sha1sum = dw.messageDigest(0, 25, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("76495faf71ca63df66dce99547d2c58da7266d9e"), sha1sum);
sha1sum = dw.sha1Sum(15, 7);
sha1sum = dw.messageDigest(15, 7, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("737660d816fb23c2d5bc74f62d9b01b852b2aaca"), sha1sum);
sha1sum = dw.sha1Sum(10, 14);
sha1sum = dw.messageDigest(10, 14, DIGEST_ALGO_SHA1);
CPPUNIT_ASSERT_EQUAL(string("6238bf61dd8df8f77156b2378e9e39cd3939680c"), sha1sum);
dw.closeFile();
}

View File

@ -1,4 +1,5 @@
#include "Xml2MetalinkProcessor.h"
#include "Exception.h"
#include <cppunit/extensions/HelperMacros.h>
using namespace std;
@ -26,6 +27,7 @@ CPPUNIT_TEST_SUITE_REGISTRATION( Xml2MetalinkProcessorTest );
void Xml2MetalinkProcessorTest::testParseFile() {
Xml2MetalinkProcessor proc;
try {
MetalinkerHandle metalinker = proc.parseFile("test.xml");
MetalinkEntries::iterator entryItr = metalinker->entries.begin();
@ -55,10 +57,22 @@ void Xml2MetalinkProcessorTest::testParseFile() {
entryItr++;
MetalinkEntryHandle entry2 = *entryItr;
CPPUNIT_ASSERT_EQUAL((uint64_t)345689, entry2->size);
CPPUNIT_ASSERT_EQUAL(string("0.5.1"), entry2->version);
CPPUNIT_ASSERT_EQUAL(string("ja-JP"), entry2->language);
CPPUNIT_ASSERT_EQUAL(string("Linux-m68k"), entry2->os);
CPPUNIT_ASSERT_EQUAL(string("4c255b0ed130f5ea880f0aa061c3da0487e251cc"),
entry2->checksum.getMessageDigest());
CPPUNIT_ASSERT_EQUAL((size_t)2, entry2->chunkChecksum->pieceHashes.size());
CPPUNIT_ASSERT_EQUAL((uint32_t)262144, entry2->chunkChecksum->pieceLength);
CPPUNIT_ASSERT_EQUAL(string("179463a88d79cbf0b1923991708aead914f26142"),
entry2->chunkChecksum->pieceHashes.at(0));
CPPUNIT_ASSERT_EQUAL(string("fecf8bc9a1647505fe16746f94e97a477597dbf3"),
entry2->chunkChecksum->pieceHashes.at(1));
CPPUNIT_ASSERT(DIGEST_ALGO_SHA1 == entry2->checksum.getDigestAlgo());
} catch(Exception* e) {
cerr << e->getMsg() << endl;
delete e;
}
}

View File

@ -0,0 +1 @@
0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789ABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJabcdefghijabcdefghijabcdefghijabcdefghijabcdefghij

View File

@ -20,12 +20,17 @@
</resources>
</file>
<file name="aria2-0.5.1.tar.bz2">
<size>345689</size>
<version>0.5.1</version>
<language>ja-JP</language>
<os>Linux-m68k</os>
<verification>
<hash type="md5">92296e19c406d77d21bda0bb944eac46</hash>
<hash type="sha1">4c255b0ed130f5ea880f0aa061c3da0487e251cc</hash>
<pieces length="262144" type="sha1">
<hash pieces="0">179463a88d79cbf0b1923991708aead914f26142</hash>
<hash pieces="1">fecf8bc9a1647505fe16746f94e97a477597dbf3</hash>
</pieces>
</verification>
<resources>
<url type="ftp" preference="100">ftp://ftphost/aria2-0.5.1.tar.bz2</url>