There are several advantages to this addition.
'url-list' = either a single URL string or a list of normal URLs.
For example (on multiple lines for readability):
d
8:announce27:http://tracker.com/announce
13:creation datei1128487910e
8:url-list26:http://mirror.com/file.exe
4:info...
If the "url-list" URL ends in a slash, "/" the client should add the "name" from the torrent to make the full URL. This should make it easier on Torrent generators and let them treat this field same for single file and multi-file torrents.
I have seen .torrent files from bittorrent.com that included an HTTP seed in this way. And I've read that they added WebSeeding, apparently using this method!
For Multi-File torrents, this gets a bit more interesting.
Normally, BitTorrent clients use the "name" from the .torrent info section to make a folder, then use the "path/file" items from the info section within that folder. For a Multi-File torrent, the 'url-list' should point to a root folder to which a client can add the same "name" and "path/file" items to create the URL for the request.
...
8:url-list22:http://mirror.com/pub/
4:infod5:filesld6:lengthi949e4:pathl10:Readme.txte
ee4:name7:michael
So a client would use all of that to build the URL: http://mirror.com/pub/michael/Readme.txt
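A rough sketch of the same idea for a Multi-File torrent (again, the function name and buffers are only illustrative):

#include <stdio.h>

/* Build the URL for one file of a Multi-File torrent.
   'base' is a "url-list" entry ending in "/", 'name' is the torrent's "name",
   and 'path'/'pathcount' are that file's "path" strings from the info section. */
void build_multi_file_url(const char *base, const char *name,
                          const char *path[], int pathcount,
                          char *out, size_t outlen)
{
    size_t used = (size_t)snprintf(out, outlen, "%s%s", base, name);
    for (int i = 0; i < pathcount; i++) {
        if (used >= outlen)
            break;                 /* out of buffer space; result is truncated */
        used += (size_t)snprintf(out + used, outlen - used, "/%s", path[i]);
    }
}

Called with "http://mirror.com/pub/", "michael", and a path list of just "Readme.txt", it produces the URL above.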
HTTP and FTP are streaming protocols, and don't have BitTorrent's concept of blocks. With HTTP you can use byte-ranges to resume anywhere or download specific ranges you specify, but with FTP you can only say where to start the download. So I wanted big "gaps" in the data downloaded from BitTorrent peers, so an HTTP/FTP connection would have big spaces to fill in. You could use HTTP byte-ranges to request individual blocks, but each request shows up in the server's logs, and somebody is going to think you're DoSing them if they see hundreds of requests from you. So I made a couple of changes to the usual "rarest first" piece-selection method to better allow "gaps" to develop between pieces. That way there are longer spaces in the file for HTTP and FTP threads to fill; they can start at the beginning of a gap and download until they get to the end.
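For example, to fill a gap covering bytes 131072 through 262143 of the single-file example above (made-up offsets, just for illustration), an HTTP request would include a Range header like:

GET /file.exe HTTP/1.1
Host: mirror.com
Range: bytes=131072-262143

The FTP equivalent is a "REST 131072" command before the RETR, which only sets where the download starts; the client has to stop reading (or close the connection) itself when it reaches the end of the gap.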
This actually could be implemented differently: you could use HTTP byte-ranges to request specific pieces and not worry about any server's logs. This method just fit in very well with all the code I'd already written for accelerating file downloads in GetRight, and it does minimize some connections and restarts.
X = sqrt(Peers) - 1;
for (i = 0; i < maxpieces; i++) {
    if (*IDon'tHavePiece*) {
        Gap++;
        if (*PeerHasPiece*) {
            PieceRareness = *Number of peers with the piece*;
            if (PieceRareness < (CurRareness - X)
                || (PieceRareness <= (CurRareness + X) && Gap > CurGap)) {
                CurRareness = PieceRareness;
                CurGap = Gap;
                NextPiece = i;
            }
        }
    } else {
        Gap = 0;
    }
}
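Filled out as a self-contained C function (the function name, the have/peer_count arrays, and the initial values are my assumptions, not part of the original), it could look something like:

#include <math.h>

/* Pick the next piece to request from BitTorrent peers using the
   gap-aware "rarest first" idea above.
     have[]       - one flag per piece, nonzero if we already have it
     peer_count[] - for each piece, how many connected peers have it
   Returns the piece index, or -1 if no peer has anything we need. */
int pick_next_piece(const int *have, const int *peer_count,
                    int num_pieces, int num_peers)
{
    double x = sqrt((double)num_peers) - 1.0;  /* allowed rareness "slack" */
    int gap = 0;                /* length of the current run of missing pieces */
    int cur_gap = 0;            /* gap length of the best candidate so far */
    double cur_rareness = 1e9;  /* peer count of the best candidate so far */
    int next_piece = -1;

    for (int i = 0; i < num_pieces; i++) {
        if (!have[i]) {
            gap++;
            if (peer_count[i] > 0) {   /* at least one peer can send it */
                double rareness = (double)peer_count[i];
                /* Prefer clearly rarer pieces; among similarly rare pieces,
                   prefer the one at the end of the longest gap so far. */
                if (rareness < cur_rareness - x ||
                    (rareness <= cur_rareness + x && gap > cur_gap)) {
                    cur_rareness = rareness;
                    cur_gap = gap;
                    next_piece = i;
                }
            }
        } else {
            gap = 0;   /* a piece we already have ends the current gap */
        }
    }
    return next_piece;
}

Pieces we already have reset the gap counter, so among pieces of similar rareness the one ending the longest run of missing pieces wins, which is what leaves the long open stretches for the HTTP and FTP threads to fill.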