diff -Nru tribler-6.2.0/Tribler/Lang/lang.py tribler-6.2.0/Tribler/Lang/lang.py
--- tribler-6.2.0/Tribler/Lang/lang.py	2013-07-31 10:45:22.000000000 +0000
+++ tribler-6.2.0/Tribler/Lang/lang.py	2013-07-31 12:17:59.000000000 +0000
@@ -85,7 +85,7 @@
     if (label == 'version'):
         return version_id
     if (label == 'build'):
-        return "Build 31061"
+        return "Build "
     if (label == 'build_date'):
         return "Jan 23, 2013"
     # see if it exists in 'user.lang'
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/BUGS tribler-6.2.0/Tribler/SwiftEngine/BUGS
--- tribler-6.2.0/Tribler/SwiftEngine/BUGS	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/BUGS	2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,65 @@
+ * min_owd TINT_NEVER is logged
+
+ v hints, data for non-existing ranges
+ v opens multiple channels to the same address
+ v hints do not expire
+ v RTT calculations need improvement (test)
+ v google-log is unnecessary
+ * reduce template use (peer queue)
+ v hints do not expire
+ v survive 10% loss
+ v unlimited ping pong
+ v git sha-1
+ v check hints against ack_out?_
+ v check data against ack_in
+ v channel suspend/wake. 3 cong modes state machine - ???
+ * release hints for a dormant channel
+ * minimize the number of template instantiations
+ v Channel thinks how much it HINTs a second,
+   picker thinks which HINTs are snubbed
+ * files <1sec download : how HINTs are sent?
+ v dead Channels are not killed => cannot open a new one
+   (have a channel already)
+ v peers don't cooperate
+ * RecoverProgress fails sometimes
+ v leecher can't see file is done already
+ v why leecher waits 1sec?
+ * hint queue buildup
+ * file operations are not 64-bit ready
+   http://mail.python.org/pipermail/patches/2000-June/000848.html
+ * recovery: last packet
+ v no-HINT sending to a dead peer
+ * what if rtt>1sec
+ v unHINTed repeated sending
+ v 1259859412.out#8,9 connection breaks, #8 rtt 1000, #9 hint -
+   idiocy, cwnd => send int 0.5sec
+   0_11_10_075_698 #9 sendctrl may send 0 < 0.000000 & 1732919509_-49_-45_-200_-111 (rtt 59661)
+   0_11_10_075_698 #9 +data (0,194)
+   0_11_10_575_703 #9 sendctrl loss detected
+   0_11_10_575_703 #9 Tdata (0,194)
+   0_11_10_575_703 #9 sendctrl may send 0 < 0.000000 & 1732919509_-49_-44_-700_-110 (rtt 59661)
+ v complete peer reconnects 1259967418.out.gz
+ * underhinting causes repetition causes interarrival underestimation causes underhinting
+ * mysterious initiating handshake bursts
+ v whether sending is limited by cwnd or app
+ * actually: whether packets are ACKed faster than sent
+ * uproot DATA NONE: complicates and deceives
+ v r735 goes to github; r741
+ * receiver is swapping => strange behavior
+ v on high losses cwnd goes to silly fractions => slows down recovery
+ v code the pingpong<->keepalive<->slowstart transition
+ v empty datagram hammering (see at linode)
+ * make a testkit!!!
+ * never back from keepalive syndrome (because of underhashing)
+ * HTTP daemon, combined select() loop
+ * range requests, priorities
+ v LEDBAT
+ * CUBIC
+ v mysterious mass packet losses (!data)
+
+ // Ric:
+ * check why the last HAVE msgs are not sent
+ * data is sent even if the client has stopped
+ * when initialized the piece picker might select hints from hint_out without knowing if the peer actually has it!
+ * IMPORTANT: trace(bin, range) reports bins out of range! Check if bug!!
+
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/ChangeLog tribler-6.2.0/Tribler/SwiftEngine/ChangeLog
--- tribler-6.2.0/Tribler/SwiftEngine/ChangeLog	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/ChangeLog	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,7 @@
+0.003 - This is not a release either - 18 Oct 2009
+
+ - but at least, it compiles now
+
+0.002 - This is not a release - 7 Oct 2009
+
+ - it does not even compile, committed for reading purposes only
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/LICENSE tribler-6.2.0/Tribler/SwiftEngine/LICENSE
--- tribler-6.2.0/Tribler/SwiftEngine/LICENSE	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/LICENSE	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,694 @@
+------------------------------------------------------------------------------
+
+  swift content-delivery library.
+
+  The research leading to this library has received funding from the European
+  Community's Seventh Framework Programme in the P2P-Next project under grant
+  agreement no 216217.
+
+  All library modules are free software, unless stated otherwise; you can
+  redistribute them and/or modify them under the terms of the GNU Lesser
+  General Public License as published by the Free Software Foundation; in
+  particular, version 2.1 of the License.
+
+  This library is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  Lesser General Public License for more details.
+
+  The following library modules are Copyright (c) 2008-2012, VTT Technical Research Centre of Finland; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, Norut AS; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, DACC Systems AB; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, Lancaster University; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, Jožef Stefan Institute; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, First Oversi Ltd.; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, TECHNISCHE UNIVERSITEIT DELFT; All rights reserved:
+  All files and directories found in the directory containing this LICENSE file.
+
+  The following library modules are Copyright (c) 2008-2012, STMicroelectronics S.r.l.; All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, Kungliga Tekniska Högskolan (The Royal Institute of Technology); All rights reserved:
+
+  The following library modules are Copyright (c) 2008-2012, Markenfilm GmbH & Co.
KG; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Radiotelevizija Slovenija Javni Zavvod Ljubljana; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Kendra Foundation; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Universitaet Klagenfurt; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, AG Projects; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, The British Broadcasting Corporation; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Pioneer Digital Design Centre Limited; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, INSTITUT FUER RUNDFUNKTECHNIK GMBH; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Fabchannel BV; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, University Politehnica Bucharest; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, EBU-UER; All rights reserved: + + The following library modules are Copyright (c) 2008-2012, Università di Roma Sapienza; All rights reserved: + + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + + VTT Technical Research Centre of Finland, + Tekniikankatu 1, + FIN-33710 Tampere, + Finland + + Norut AS, + Postboks 6434 + Forskningsparken, + 9294 Tromsø, + Norway + + DACC Systems AB + Glimmervägen 4, + SE18734, Täby, + Sweden + + Lancaster University, + University House, + Bailrigg, Lancaster, LA1 4YW + United Kingdom + + Jožef Stefan Institute, + Jamova cesta 39, + 1000 Ljubljana, + Slovenia + + First Oversi Ltd., + Rishon Lezion 1, + Petah Tikva 49723, + Israel + + TECHNISCHE UNIVERSITEIT DELFT, + Faculty of Electrical Engineering, Mathematics and Computer Science, + Mekelweg 4, + 2628 CD Delft, + The Netherlands + + STMicroelectronics S.r.l., + via C.Olivetti 2, + I-20041 Agrate Brianza, + Italy + + Kungliga Tekniska Högskolan (The Royal Institute of Technology), + KTH/ICT/ECS/TSLab + Electrum 229 + 164 40 Kista + Sweden + + Markenfilm GmbH & Co. KG, + Schulauer Moorweg 25, + 22880 Wedel, + Germany + + Radiotelevizija Slovenija Javni Zavvod Ljubljana, + Kolodvorska 2, + SI-1000 Ljubljana, + Slovenia + + + Kendra Foundation, + Meadow Barn, Holne, + Newton Abbot, Devon, TQ13 7SP, + United Kingdom + + + Universitaet Klagenfurt, + Universitaetstrasse 65-67, + 9020 Klagenfurt, + Austria + + AG Projects, + Dr. 
Leijdsstraat 92, + 2021RK Haarlem, + The Netherlands + + The British Broadcasting Corporation, + Broadcasting House, Portland Place, + London, W1A 1AA + United Kingdom + + Pioneer Digital Design Centre Limited, + Pioneer House, Hollybush Hill, Stoke Poges, + Slough, SL2 4QP + United Kingdom + + INSTITUT FUER RUNDFUNKTECHNIK GMBH + Floriansmuehlstrasse 60, + 80939 München, + Germany + + Fabchannel BV, + Kleine-Gartmanplantsoen 21, + 1017 RP Amsterdam, + The Netherlands + + University Politehnica Bucharest, + 313 Splaiul Independentei, + District 6, cod 060042, Bucharest, + Romania + + EBU-UER, + L'Ancienne Route 17A, 1218 + Grand Saconnex - Geneva, + Switzerland + + Università di Roma Sapienza + Dipartimento di Informatica e Sistemistica (DIS), + Via Ariosto 25, + 00185 Rome, + Italy + + +------------------------------------------------------------------------------ + + GNU LESSER GENERAL PUBLIC LICENSE + Version 2.1, February 1999 + + Copyright (C) 1991, 1999 Free Software Foundation, Inc. + 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + +[This is the first released version of the Lesser GPL. It also counts + as the successor of the GNU Library Public License, version 2, hence + the version number 2.1.] + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +Licenses are intended to guarantee your freedom to share and change +free software--to make sure the software is free for all its users. + + This license, the Lesser General Public License, applies to some +specially designated software packages--typically libraries--of the +Free Software Foundation and other authors who decide to use it. You +can use it too, but we suggest you first think carefully about whether +this license or the ordinary General Public License is the better +strategy to use in any particular case, based on the explanations below. + + When we speak of free software, we are referring to freedom of use, +not price. Our General Public Licenses are designed to make sure that +you have the freedom to distribute copies of free software (and charge +for this service if you wish); that you receive source code or can get +it if you want it; that you can change the software and use pieces of +it in new free programs; and that you are informed that you can do +these things. + + To protect your rights, we need to make restrictions that forbid +distributors to deny you these rights or to ask you to surrender these +rights. These restrictions translate to certain responsibilities for +you if you distribute copies of the library or if you modify it. + + For example, if you distribute copies of the library, whether gratis +or for a fee, you must give the recipients all the rights that we gave +you. You must make sure that they, too, receive or can get the source +code. If you link other code with the library, you must provide +complete object files to the recipients, so that they can relink them +with the library after making changes to the library and recompiling +it. And you must show them these terms so they know their rights. + + We protect your rights with a two-step method: (1) we copyright the +library, and (2) we offer you this license, which gives you legal +permission to copy, distribute and/or modify the library. 
+ + To protect each distributor, we want to make it very clear that +there is no warranty for the free library. Also, if the library is +modified by someone else and passed on, the recipients should know +that what they have is not the original version, so that the original +author's reputation will not be affected by problems that might be +introduced by others. + + + Finally, software patents pose a constant threat to the existence of +any free program. We wish to make sure that a company cannot +effectively restrict the users of a free program by obtaining a +restrictive license from a patent holder. Therefore, we insist that +any patent license obtained for a version of the library must be +consistent with the full freedom of use specified in this license. + + Most GNU software, including some libraries, is covered by the +ordinary GNU General Public License. This license, the GNU Lesser +General Public License, applies to certain designated libraries, and +is quite different from the ordinary General Public License. We use +this license for certain libraries in order to permit linking those +libraries into non-free programs. + + When a program is linked with a library, whether statically or using +a shared library, the combination of the two is legally speaking a +combined work, a derivative of the original library. The ordinary +General Public License therefore permits such linking only if the +entire combination fits its criteria of freedom. The Lesser General +Public License permits more lax criteria for linking other code with +the library. + + We call this license the "Lesser" General Public License because it +does Less to protect the user's freedom than the ordinary General +Public License. It also provides other free software developers Less +of an advantage over competing non-free programs. These disadvantages +are the reason we use the ordinary General Public License for many +libraries. However, the Lesser license provides advantages in certain +special circumstances. + + For example, on rare occasions, there may be a special need to +encourage the widest possible use of a certain library, so that it becomes +a de-facto standard. To achieve this, non-free programs must be +allowed to use the library. A more frequent case is that a free +library does the same job as widely used non-free libraries. In this +case, there is little to gain by limiting the free library to free +software only, so we use the Lesser General Public License. + + In other cases, permission to use a particular library in non-free +programs enables a greater number of people to use a large body of +free software. For example, permission to use the GNU C Library in +non-free programs enables many more people to use the whole GNU +operating system, as well as its variant, the GNU/Linux operating +system. + + Although the Lesser General Public License is Less protective of the +users' freedom, it does ensure that the user of a program that is +linked with the Library has the freedom and the wherewithal to run +that program using a modified version of the Library. + + The precise terms and conditions for copying, distribution and +modification follow. Pay close attention to the difference between a +"work based on the library" and a "work that uses the library". The +former contains code derived from the library, whereas the latter must +be combined with the library in order to run. + + + GNU LESSER GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. 
This License Agreement applies to any software library or other +program which contains a notice placed by the copyright holder or +other authorized party saying it may be distributed under the terms of +this Lesser General Public License (also called "this License"). +Each licensee is addressed as "you". + + A "library" means a collection of software functions and/or data +prepared so as to be conveniently linked with application programs +(which use some of those functions and data) to form executables. + + The "Library", below, refers to any such software library or work +which has been distributed under these terms. A "work based on the +Library" means either the Library or any derivative work under +copyright law: that is to say, a work containing the Library or a +portion of it, either verbatim or with modifications and/or translated +straightforwardly into another language. (Hereinafter, translation is +included without limitation in the term "modification".) + + "Source code" for a work means the preferred form of the work for +making modifications to it. For a library, complete source code means +all the source code for all modules it contains, plus any associated +interface definition files, plus the scripts used to control compilation +and installation of the library. + + Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running a program using the Library is not restricted, and output from +such a program is covered only if its contents constitute a work based +on the Library (independent of the use of the Library in a tool for +writing it). Whether that is true depends on what the Library does +and what the program that uses the Library does. + + 1. You may copy and distribute verbatim copies of the Library's +complete source code as you receive it, in any medium, provided that +you conspicuously and appropriately publish on each copy an +appropriate copyright notice and disclaimer of warranty; keep intact +all the notices that refer to this License and to the absence of any +warranty; and distribute a copy of this License along with the +Library. + + You may charge a fee for the physical act of transferring a copy, +and you may at your option offer warranty protection in exchange for a +fee. + + + 2. You may modify your copy or copies of the Library or any portion +of it, thus forming a work based on the Library, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) The modified work must itself be a software library. + + b) You must cause the files modified to carry prominent notices + stating that you changed the files and the date of any change. + + c) You must cause the whole of the work to be licensed at no + charge to all third parties under the terms of this License. + + d) If a facility in the modified Library refers to a function or a + table of data to be supplied by an application program that uses + the facility, other than as an argument passed when the facility + is invoked, then you must make a good faith effort to ensure that, + in the event an application does not supply such function or + table, the facility still operates, and performs whatever part of + its purpose remains meaningful. + + (For example, a function in a library to compute square roots has + a purpose that is entirely well-defined independent of the + application. 
Therefore, Subsection 2d requires that any + application-supplied function or table used by this function must + be optional: if the application does not supply it, the square + root function must still compute square roots.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Library, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Library, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote +it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Library. + +In addition, mere aggregation of another work not based on the Library +with the Library (or with a work based on the Library) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may opt to apply the terms of the ordinary GNU General Public +License instead of this License to a given copy of the Library. To do +this, you must alter all the notices that refer to this License, so +that they refer to the ordinary GNU General Public License, version 2, +instead of to this License. (If a newer version than version 2 of the +ordinary GNU General Public License has appeared, then you can specify +that version instead if you wish.) Do not make any other change in +these notices. + + + Once this change is made in a given copy, it is irreversible for +that copy, so the ordinary GNU General Public License applies to all +subsequent copies and derivative works made from that copy. + + This option is useful when you wish to copy part of the code of +the Library into a program that is not a library. + + 4. You may copy and distribute the Library (or a portion or +derivative of it, under Section 2) in object code or executable form +under the terms of Sections 1 and 2 above provided that you accompany +it with the complete corresponding machine-readable source code, which +must be distributed under the terms of Sections 1 and 2 above on a +medium customarily used for software interchange. + + If distribution of object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the +source code from the same place satisfies the requirement to +distribute the source code, even though third parties are not +compelled to copy the source along with the object code. + + 5. A program that contains no derivative of any portion of the +Library, but is designed to work with the Library by being compiled or +linked with it, is called a "work that uses the Library". Such a +work, in isolation, is not a derivative work of the Library, and +therefore falls outside the scope of this License. + + However, linking a "work that uses the Library" with the Library +creates an executable that is a derivative of the Library (because it +contains portions of the Library), rather than a "work that uses the +library". The executable is therefore covered by this License. +Section 6 states terms for distribution of such executables. 
+ + When a "work that uses the Library" uses material from a header file +that is part of the Library, the object code for the work may be a +derivative work of the Library even though the source code is not. +Whether this is true is especially significant if the work can be +linked without the Library, or if the work is itself a library. The +threshold for this to be true is not precisely defined by law. + + If such an object file uses only numerical parameters, data +structure layouts and accessors, and small macros and small inline +functions (ten lines or less in length), then the use of the object +file is unrestricted, regardless of whether it is legally a derivative +work. (Executables containing this object code plus portions of the +Library will still fall under Section 6.) + + Otherwise, if the work is a derivative of the Library, you may +distribute the object code for the work under the terms of Section 6. +Any executables containing that work also fall under Section 6, +whether or not they are linked directly with the Library itself. + + + 6. As an exception to the Sections above, you may also combine or +link a "work that uses the Library" with the Library to produce a +work containing portions of the Library, and distribute that work +under terms of your choice, provided that the terms permit +modification of the work for the customer's own use and reverse +engineering for debugging such modifications. + + You must give prominent notice with each copy of the work that the +Library is used in it and that the Library and its use are covered by +this License. You must supply a copy of this License. If the work +during execution displays copyright notices, you must include the +copyright notice for the Library among them, as well as a reference +directing the user to the copy of this License. Also, you must do one +of these things: + + a) Accompany the work with the complete corresponding + machine-readable source code for the Library including whatever + changes were used in the work (which must be distributed under + Sections 1 and 2 above); and, if the work is an executable linked + with the Library, with the complete machine-readable "work that + uses the Library", as object code and/or source code, so that the + user can modify the Library and then relink to produce a modified + executable containing the modified Library. (It is understood + that the user who changes the contents of definitions files in the + Library will not necessarily be able to recompile the application + to use the modified definitions.) + + b) Use a suitable shared library mechanism for linking with the + Library. A suitable mechanism is one that (1) uses at run time a + copy of the library already present on the user's computer system, + rather than copying library functions into the executable, and (2) + will operate properly with a modified version of the library, if + the user installs one, as long as the modified version is + interface-compatible with the version that the work was made with. + + c) Accompany the work with a written offer, valid for at + least three years, to give the same user the materials + specified in Subsection 6a, above, for a charge no more + than the cost of performing this distribution. + + d) If distribution of the work is made by offering access to copy + from a designated place, offer equivalent access to copy the above + specified materials from the same place. 
+ + e) Verify that the user has already received a copy of these + materials or that you have already sent this user a copy. + + For an executable, the required form of the "work that uses the +Library" must include any data and utility programs needed for +reproducing the executable from it. However, as a special exception, +the materials to be distributed need not include anything that is +normally distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies +the executable. + + It may happen that this requirement contradicts the license +restrictions of other proprietary libraries that do not normally +accompany the operating system. Such a contradiction means you cannot +use both them and the Library together in an executable that you +distribute. + + + 7. You may place library facilities that are a work based on the +Library side-by-side in a single library together with other library +facilities not covered by this License, and distribute such a combined +library, provided that the separate distribution of the work based on +the Library and of the other library facilities is otherwise +permitted, and provided that you do these two things: + + a) Accompany the combined library with a copy of the same work + based on the Library, uncombined with any other library + facilities. This must be distributed under the terms of the + Sections above. + + b) Give prominent notice with the combined library of the fact + that part of it is a work based on the Library, and explaining + where to find the accompanying uncombined form of the same work. + + 8. You may not copy, modify, sublicense, link with, or distribute +the Library except as expressly provided under this License. Any +attempt otherwise to copy, modify, sublicense, link with, or +distribute the Library is void, and will automatically terminate your +rights under this License. However, parties who have received copies, +or rights, from you under this License will not have their licenses +terminated so long as such parties remain in full compliance. + + 9. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Library or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Library (or any work based on the +Library), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Library or works based on it. + + 10. Each time you redistribute the Library (or any work based on the +Library), the recipient automatically receives a license from the +original licensor to copy, distribute, link with or modify the Library +subject to these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties with +this License. + + + 11. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. 
If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Library at all. For example, if a patent +license would not permit royalty-free redistribution of the Library by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Library. + +If any portion of this section is held invalid or unenforceable under any +particular circumstance, the balance of the section is intended to apply, +and the section as a whole is intended to apply in other circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 12. If the distribution and/or use of the Library is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Library under this License may add +an explicit geographical distribution limitation excluding those countries, +so that distribution is permitted only in or among countries not thus +excluded. In such case, this License incorporates the limitation as if +written in the body of this License. + + 13. The Free Software Foundation may publish revised and/or new +versions of the Lesser General Public License from time to time. +Such new versions will be similar in spirit to the present version, +but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Library +specifies a version number of this License which applies to it and +"any later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Library does not specify a +license version number, you may choose any version ever published by +the Free Software Foundation. + + + 14. If you wish to incorporate parts of the Library into other free +programs whose distribution conditions are incompatible with these, +write to the author to ask for permission. For software which is +copyrighted by the Free Software Foundation, write to the Free +Software Foundation; we sometimes make exceptions for this. Our +decision will be guided by the two goals of preserving the free status +of all derivatives of our free software and of promoting the sharing +and reuse of software generally. + + NO WARRANTY + + 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY +KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE +LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME +THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU +FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR +CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE +LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING +RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF +SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH +DAMAGES. + + END OF TERMS AND CONDITIONS + + + How to Apply These Terms to Your New Libraries + + If you develop a new library, and you want it to be of the greatest +possible use to the public, we recommend making it free software that +everyone can redistribute and change. You can do so by permitting +redistribution under these terms (or, alternatively, under the terms of the +ordinary General Public License). + + To apply these terms, attach the following notices to the library. It is +safest to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least the +"copyright" line and a pointer to where the full notice is found. + + + Copyright (C) + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + +Also add information on how to contact you by electronic and paper mail. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the library, if +necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the + library `Frob' (a library for tweaking knobs) written by James Random Hacker. + + , 1 April 1990 + Ty Coon, President of Vice + +That's all there is to it! + +------------------------------------------------------------------------------- + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/Makefile tribler-6.2.0/Tribler/SwiftEngine/Makefile --- tribler-6.2.0/Tribler/SwiftEngine/Makefile 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/Makefile 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,24 @@ +LIBEVENT_HOME=/prod/pkgs/libevent-2.0.17 + +# Remove NDEBUG define to trigger asserts +CPPFLAGS+=-O2 -I. 
-DNDEBUG -Wall -Wno-sign-compare -Wno-unused -g -I${LIBEVENT_HOME}/include -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
+LDFLAGS+=-levent -lstdc++
+
+all: swift-dynamic
+
+swift: swift.o sha1.o compat.o sendrecv.o send_control.o hashtree.o bin.o binmap.o channel.o transfer.o httpgw.o statsgw.o cmdgw.o avgspeed.o avail.o storage.o zerostate.o zerohashtree.o
+	#nat_test.o
+
+swift-static: swift
+	g++ ${CPPFLAGS} -o swift *.o ${LDFLAGS} -static -lrt
+	strip swift
+	touch swift-static
+
+swift-dynamic: swift
+	g++ ${CPPFLAGS} -o swift *.o ${LDFLAGS} -L${LIBEVENT_HOME}/lib -Wl,-rpath,${LIBEVENT_HOME}/lib
+	touch swift-dynamic
+
+clean:
+	rm *.o swift swift-static swift-dynamic 2>/dev/null
+
+.PHONY: all clean swift swift-static swift-dynamic
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/Makefile.mac tribler-6.2.0/Tribler/SwiftEngine/Makefile.mac
--- tribler-6.2.0/Tribler/SwiftEngine/Makefile.mac	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/Makefile.mac	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,16 @@
+LIBEVENT_HOME=/Users/tribler/pkgs/libevent-2.0.19-stable-10.5
+
+# Remove NDEBUG define to trigger asserts
+CPPFLAGS+=-O2 -I. -DNDEBUG -Wall -Wno-sign-compare -Wno-unused -g -I${LIBEVENT_HOME}/include -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -arch i386 -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -DMAC_OS_X_VERSION_MIN_REQUIRED=1050
+LDFLAGS+=${LIBEVENT_HOME}/lib/libevent.a -lstdc++ -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5 -DMAC_OS_X_VERSION_MIN_REQUIRED=1050 -no_compact_linkedit
+
+all: swift
+
+swift: swift.o sha1.o compat.o sendrecv.o send_control.o hashtree.o bin.o binmap.o channel.o transfer.o httpgw.o statsgw.o cmdgw.o avgspeed.o avail.o storage.o zerostate.o zerohashtree.o
+#nat_test.o
+	g++ ${CPPFLAGS} -o swift *.o ${LDFLAGS}
+
+clean:
+	rm *.o swift 2>/dev/null
+
+.PHONY: all clean
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/README tribler-6.2.0/Tribler/SwiftEngine/README
--- tribler-6.2.0/Tribler/SwiftEngine/README	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/README	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,35 @@
+swift: the multiparty transport protocol
+    (aka BitTorrent at the transport layer)
+    Unlike TCP, the protocol does not use the ordered data stream
+    abstraction. Effectively, it splits a file into 1KB packets and sends
+    them around. The secret sauce is Merkle hash trees and binmaps.
+
+    Requires libevent-2.0.17 or higher.
+
+see doc/index.html for marketing stuff, ideas and rants
+    doc/draft-ietf-ppsp-grishchenko-swift.txt for the protocol draft spec
+    *.cpp for the actual code
+    swift.cpp is the main exec file; may run as e.g.
+
+    ./swift -t node300.das2.ewi.tudelft.nl:20000 -h \
+        d1502706c46779d361a1d562a10da0a45c4c40e5 -f \
+        trailer.ogg
+
+    ...to retrieve the video and save it to a file.
+
+    Alternatively, you might play with the HTTP gateway, the preliminary
+    version. First, run the seeder-tracker:
+
+    $ ./swift -f ~/Downloads/big_buck_bunny_480p_stereo.ogg -l 0.0.0.0:20000
+    Root hash: 7c462ad1d980ba44ab4b819e29004eb0bf6e6d5f
+
+    ...then you may try running the swift-HTTP gateway...
+
+    $ ./swift -t 127.0.0.1:20000 -g 0.0.0.0:8080 -w
+
+    ...and finally you may point your browser at the gateway...
+
+    http://127.0.0.1:8080/7c462ad1d980ba44ab4b819e29004eb0bf6e6d5f
+
+    If you use an HTML5 browser (Chrome preferred), you are likely to see
+    the bunny trailer at this point...
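The root hash shown in the README above is the content identifier: swift names content by the root of a Merkle hash tree computed over the file's 1KB chunks. Below is a minimal illustrative sketch of that idea, using std::hash as a stand-in for the SHA-1 digests the engine actually uses and a zero value as padding; it is not the engine's hashtree.cpp logic, just the shape of the computation:

    // Illustrative Merkle-root sketch. std::hash stands in for SHA-1 and a
    // zero value pads odd layers; swift itself hashes 1KB chunks with SHA-1
    // and combines sibling hashes pairwise up a binary tree.
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    static uint64_t leaf_hash(const std::string& chunk) {
        return std::hash<std::string>()(chunk);
    }

    static uint64_t pair_hash(uint64_t left, uint64_t right) {
        return std::hash<std::string>()(std::to_string(left) + "|" + std::to_string(right));
    }

    static uint64_t merkle_root(std::vector<uint64_t> level) {
        if (level.empty()) return 0;
        while (level.size() > 1) {
            if (level.size() % 2) level.push_back(0);   // pad the odd node out
            std::vector<uint64_t> next;
            for (size_t i = 0; i < level.size(); i += 2)
                next.push_back(pair_hash(level[i], level[i + 1]));
            level.swap(next);
        }
        return level[0];
    }

    int main() {
        const size_t kChunk = 1024;               // swift's 1KB chunk size
        std::string data(3000, 'x');              // toy payload: three chunks
        std::vector<uint64_t> leaves;
        for (size_t off = 0; off < data.size(); off += kChunk)
            leaves.push_back(leaf_hash(data.substr(off, kChunk)));
        std::cout << "toy root: " << merkle_root(leaves) << std::endl;
        return 0;
    }

The real tree is laid out over the bin numbering defined in bin.h (with peak hashes for files that are not a power-of-two number of chunks), which is what lets a peer verify each 1KB chunk as it arrives.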
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/SConstruct tribler-6.2.0/Tribler/SwiftEngine/SConstruct --- tribler-6.2.0/Tribler/SwiftEngine/SConstruct 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/SConstruct 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,144 @@ +# Written by Victor Grishchenko, Arno Bakker +# see LICENSE.txt for license information +# +# Requirements: +# - scons: Cross-platform build system http://www.scons.org/ +# - libevent2: Event driven network I/O http://www.libevent.org/ +# * Install in \build\libevent-2.0.14-stable +# For debugging: +# - googletest: Google C++ Test Framework http://code.google.com/p/googletest/ +# * Install in \build\gtest-1.4.0 +# + + +import os +import re +import sys + +DEBUG = True + +TestDir='tests' + +target = 'swift' +source = [ 'bin.cpp', 'binmap.cpp', 'sha1.cpp','hashtree.cpp', + 'transfer.cpp', 'channel.cpp', 'sendrecv.cpp', 'send_control.cpp', + 'compat.cpp','avgspeed.cpp', 'avail.cpp', 'cmdgw.cpp', + 'storage.cpp', 'zerostate.cpp', 'zerohashtree.cpp'] +# cmdgw.cpp now in there for SOCKTUNNEL + +env = Environment() +if sys.platform == "win32": + libevent2path = '\\build\\libevent-2.0.19-stable' + #libevent2path = '\\build\\ttuki\\libevent-2.0.15-arno-http' + + # "MSVC works out of the box". Sure. + # Make sure scons finds cl.exe, etc. + env.Append ( ENV = { 'PATH' : os.environ['PATH'] } ) + + # Make sure scons finds std MSVC include files + if not 'INCLUDE' in os.environ: + print "swift: Please run scons in a Visual Studio Command Prompt" + sys.exit(-1) + + include = os.environ['INCLUDE'] + include += libevent2path+'\\include;' + include += libevent2path+'\\WIN32-Code;' + if DEBUG: + include += '\\build\\gtest-1.4.0\\include;' + + env.Append ( ENV = { 'INCLUDE' : include } ) + + if 'CXXPATH' in os.environ: + cxxpath = os.environ['CXXPATH'] + else: + cxxpath = "" + cxxpath += include + if DEBUG: + env.Append(CXXFLAGS="/Zi /MTd") + env.Append(LINKFLAGS="/DEBUG") + else: + env.Append(CXXFLAGS="/DNDEBUG") # disable asserts + env.Append(CXXPATH=cxxpath) + env.Append(CPPPATH=cxxpath) + + # getopt for win32 + source += ['getopt.c','getopt_long.c'] + + # Set libs to link to + # Advapi32.lib for CryptGenRandom in evutil_rand.obj + libs = ['ws2_32','libevent','Advapi32'] + if DEBUG: + libs += ['gtestd'] + + # Update lib search path + libpath = os.environ['LIBPATH'] + libpath += libevent2path+';' + if DEBUG: + libpath += '\\build\\gtest-1.4.0\\msvc\\gtest\\Debug;' + + # Somehow linker can't find uuid.lib + libpath += 'C:\\Program Files\\Microsoft SDKs\\Windows\\v6.0A\\Lib;' + + # TODO: Make the swift.exe a Windows program not a Console program + if not DEBUG: + env.Append(LINKFLAGS="/SUBSYSTEM:WINDOWS") + + APPSOURCE=['swift.cpp','httpgw.cpp','statsgw.cpp','getopt.c','getopt_long.c'] + +else: + libevent2path = '/arno/pkgs/libevent-2.0.15-arno-http' + + # Enable the user defining external includes + if 'CPPPATH' in os.environ: + cpppath = os.environ['CPPPATH'] + else: + cpppath = "" + print "To use external libs, set CPPPATH environment variable to list of colon-separated include dirs" + cpppath += libevent2path+'/include:' + env.Append(CPPPATH=".:"+cpppath) + #env.Append(LINKFLAGS="--static") + + #if DEBUG: + # env.Append(CXXFLAGS="-g") + + # Large-file support always + env.Append(CXXFLAGS="-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE") + + # Set libs to link to + libs = ['stdc++','libevent','pthread'] + if 'LIBPATH' in os.environ: + libpath = os.environ['LIBPATH'] + else: + libpath = "" + print "To use external libs, set 
LIBPATH environment variable to list of colon-separated lib dirs"
+    libpath += libevent2path+'/lib:'
+
+    linkflags = '-Wl,-rpath,'+libevent2path+'/lib'
+    env.Append(LINKFLAGS=linkflags)
+
+
+    APPSOURCE=['swift.cpp','httpgw.cpp','statsgw.cpp']
+
+if DEBUG:
+    env.Append(CXXFLAGS="-DDEBUG")
+
+env.StaticLibrary (
+    target='libswift',
+    source = source,
+    LIBS=libs,
+    LIBPATH=libpath )
+
+env.Program(
+    target='swift',
+    source=APPSOURCE,
+    #CPPPATH=cpppath,
+    LIBS=[libs,'libswift'],
+    LIBPATH=libpath+':.')
+
+
+Export("env")
+Export("libs")
+Export("libpath")
+Export("DEBUG")
+# Arno: uncomment to build tests
+#SConscript('tests/SConscript')
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/TODO tribler-6.2.0/Tribler/SwiftEngine/TODO
--- tribler-6.2.0/Tribler/SwiftEngine/TODO	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/TODO	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,76 @@
+            TRIAL TODO
+
+STATE MACHINE
+* imposed HINTs are terribly broken, resent for the data in flight
+* check ACK/HAVE redundancy
+* HAVE overuses find_filtered
+* set priorities on ranges
+* small-progress update problem (aka peer nap)
+  guarantee size of updates < x% of data, on both ends
+* pex is affected by peer nap
+* how will tracker aggregate pexes?
+* SWIFT_MSGTYPE_RCVD
+* HAVE ALL / HAVE NONE
+* aggregate ACKS (schedule for +x ms)
+* channel close msg (hs 0)  # Arno: indeed, there appears to be no Channel garbage collection
+* connection rotation / pex / pex_del
+* mysterious bug: Rdata (NONE)
+* ?amend MAX_REORDER depending on rtt_dev
+* Tdata repetitions bug
+
+PERFORMANCE
+* move to the.zett's binmaps
+* optimize redundant HASH messages
+* move to rolling HAVE queue
+* 32 bit time field
+* ?empty/full binmaps
+* initiate RTT with prev RTT to host:port
+* fractional cwnd
+
+CACHING/FILES
+* connection rotation
+* file rotation
+* real LRU/LFU
+* file/hash-file re-open in read-only mode
+* no cache recheck, failure-resistant
+* completion mark
+* unified events/callbacks
+* move to 64-bit IO
+* Transfer(fd) constructor
+* think of sliding window(s)
+* the ability to sniff a file without downloading it
+
+MANIFOLD
+* all-swarm performance stats
+* run chained setups (cmd line protocol subsetting)
+* implement: multiple swift instances per server
+* run thousand-daemon caching tests (use httpgw)
+* use a dedicated tracker
+* add NATs to the setup
+* recover mfold.libswift.org
+* integrate Windowses
+
+API
+* pluggable storage
+
+NAT
+* NAT type detection => need peer identifiers (x100 amplification)
+
+MFOLD
+* integrate multi-peer changes by Jori
+* do global swarm stats
+
+OTHER
+* httpgw or nginx?
+* Sha1Hash constructor ambiguity
+* don't #include .cpp
+* think of using HTTP (?) as a fallback
+* add header/footer, better abstract to the draft
+* Gertjan: separate peer from channel? cng ctrl per peer?
+* packing hashes into a single datagram (tracking 1000s)
+* partial channels / lightweight channels
+
+THOUGHTS
+* 6 degrees of sep = 3-hop TorrentSmell
+* 60% immediately not connectable
+* support traffic
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/avail.cpp tribler-6.2.0/Tribler/SwiftEngine/avail.cpp
--- tribler-6.2.0/Tribler/SwiftEngine/avail.cpp	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/avail.cpp	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,242 @@
+/*
+ *  avail.cpp
+ *  Tree keeping track of the availability of each bin in a swarm
+ *
+ *  Created by Riccardo Petrocco
+ *  Copyright 2009-2012 Delft University of Technology. All rights reserved.
+ *
+ */
+#include "swift.h"
+
+using namespace swift;
+
+#define DEBUGAVAILABILITY 0
+
+
+uint8_t Availability::get(const bin_t bin)
+{
+    if (bin.is_none())
+        return 255;
+    else if (size_)
+        return avail_[bin.toUInt()];
+
+    return 0;
+}
+
+
+void Availability::setBin(bin_t bin)
+{
+    if (bin != bin_t::NONE)
+    {
+        bin_t beg = bin.base_left();
+        bin_t end = bin.base_right();
+
+        for (int i = beg.toUInt(); i<=end.toUInt(); i++)
+        {
+            // for the moment keep a counter
+            // TODO make it percentage
+            avail_[i]++;
+        }
+    }
+
+}
+
+
+void Availability::removeBin(bin_t bin)
+{
+    bin_t beg = bin.base_left();
+    bin_t end = bin.base_right();
+
+    for (int i = beg.toUInt(); i<=end.toUInt(); i++)
+    {
+        avail_[i]--;
+    }
+}
+
+
+void Availability::setBinmap(binmap_t * binmap)
+{
+
+    if (binmap->is_filled())
+        for (int i=0; i<size_; i++)
+            avail_[i]++;
+    else if (!binmap->is_empty())
+    {
+        //status();
+        bin_t tmp_b;
+        binmap_t tmp_bm;
+        tmp_b = binmap_t::find_complement(tmp_bm, *binmap, 0);
+
+        while (tmp_b != bin_t::NONE)
+        {
+            setBin(tmp_b);
+            //binmap_t::copy(tmp_bm, *binmap, tmp_b);
+            tmp_bm.set(tmp_b);
+            tmp_b = binmap_t::find_complement(tmp_bm, *binmap, 0);
+        }
+        //status();
+
+    }
+
+    return;
+}
+
+void Availability::removeBinmap(binmap_t &binmap)
+{
+    if (binmap.is_filled())
+        for (int i=0; i<size_; i++)
+            avail_[i]--;
+    else if (!binmap.is_empty())
+    {
+        bin_t tmp_b;
+        binmap_t tmp_bm;
+        tmp_b = binmap_t::find_complement(tmp_bm, binmap, 0);
+
+        while (tmp_b != bin_t::NONE)
+        {
+            removeBin(tmp_b);
+            tmp_bm.set(tmp_b);
+            tmp_b = binmap_t::find_complement(tmp_bm, binmap, 0);
+        }
+    }
+
+    return;
+}
+
+
+void Availability::set(uint32_t channel_id, binmap_t& binmap, bin_t target)
+{
+    if (DEBUGAVAILABILITY)
+    {
+        char bin_name_buf[32];
+        dprintf("%s #%u Availability -> setting %s (%llu)\n",tintstr(),channel_id,target.str(bin_name_buf),target.toUInt());
+    }
+
+    if (size_>0 && !binmap.is_filled(target))
+    {
+        bin_t beg = target.base_left();
+        bin_t end = target.base_right();
+
+        for (int i = beg.toUInt(); i<=end.toUInt(); i++)
+        {
+            // for the moment keep a counter
+            // TODO make it percentage
+            if (!binmap.is_filled(bin_t(i)))
+                avail_[i]++;
+            //TODO avoid running into sub-trees that are filled
+        }
+    }
+    // keep track of the incoming have msgs
+    else
+    {
+
+        for (WaitingPeers::iterator vpci = waiting_peers_.begin(); vpci != waiting_peers_.end(); ++vpci)
+        {
+            if (vpci->first == channel_id)
+            {
+                waiting_peers_.erase(vpci);
+                break;
+            }
+        }
+
+        waiting_peers_.push_back(std::make_pair(channel_id, &binmap));
+    }
+}
+
+
+void Availability::remove(uint32_t channel_id, binmap_t& binmap)
+{
+    if (DEBUGAVAILABILITY)
+    {
+        dprintf("%s #%u Availability -> removing peer\n",tintstr(),channel_id);
+    }
+    if (size_<=0)
+    {
+        WaitingPeers::iterator vpci = waiting_peers_.begin();
+        for(; vpci != waiting_peers_.end(); ++vpci)
+        {
+            if (vpci->first == channel_id)
+            {
+                waiting_peers_.erase(vpci);
+                break;
+            }
+        }
+    }
+
+    else
+        removeBinmap(binmap);
+    // remove the binmap from the availability
+
+    return;
+}
+
+
+void Availability::setSize(uint64_t size)
+{
+    if (size && !size_)
+    {
+        // TODO can be optimized (bithacks)
+        uint64_t r = 0;
+        uint64_t s = size;
+
+        // check if the binmap is not complete
+        if (s & (s-1))
+        {
+            while (s >>= 1)
+            {
+                r++;
+            }
+            s = 1<<(r+1);
+        }
+        // consider higher layers
+        s += s-1;
+        size_ = s;
+        avail_ = new uint8_t[s]();
+
+        // Initialize with the binmaps we already received
+        for(WaitingPeers::iterator vpci = waiting_peers_.begin(); vpci != waiting_peers_.end(); ++vpci)
+        {
+            setBinmap(vpci->second);
+        }
+
+
+        if (DEBUGAVAILABILITY)
+        {
+            char bin_name_buf[32];
+            dprintf("%s #1 Availability -> setting size in chunk %lu \t avail size %u\n",tintstr(), size, s);
+        }
+    }
+}
+
+bin_t Availability::getRarest(const bin_t range, int width)
+{
+    assert(range.toUInt() < size_);
+    bin_t curr = range;
+    bin_t::uint_t idx = 0;
+
+    while (curr.base_length() > width)
+    {
+        idx = curr.toUInt();
+        if ( avail_[curr.left().toUInt()] <= avail_[curr.right().toUInt()] )
+            curr.to_left();
+        else
+            curr.to_right();
+    }
+    return curr;
+}
+
+void Availability::status() const
+{
+    printf("availability:\n");
+
+    if (size_ > 0)
+    {
+        for (int i = 0; i < size_; i++)
+            printf("%d", avail_[i]);
+    }
+
+    printf("\n");
+}
+
+
+
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/avail.h tribler-6.2.0/Tribler/SwiftEngine/avail.h
--- tribler-6.2.0/Tribler/SwiftEngine/avail.h	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/avail.h	2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,98 @@
+/*
+ *  avail.h
+ *  Tree keeping track of the availability of each bin in a swarm
+ *
+ *  Created by Riccardo Petrocco
+ *  Copyright 2009-2012 Delft University of Technology. All rights reserved.
+ *
+ */
+#include "bin.h"
+#include "binmap.h"
+#include "compat.h"
+#include <cassert>
+#include <vector>
+
+#ifndef AVAILABILITY_H
+#define AVAILABILITY_H
+
+namespace swift {
+
+typedef std::vector< std::pair<uint32_t, binmap_t*> > WaitingPeers;
+
+class Availability
+{
+  public:
+
+    /**
+     * Constructor
+     */
+    Availability(void) { size_ = 0; }
+
+
+    /**
+     * Constructor
+     */
+    explicit Availability(int size)
+    {
+        assert(size > 0);
+        size_ = size;
+        avail_ = new uint8_t[size];
+    }
+
+    ~Availability(void)
+    {
+        if (size_)
+            delete [] avail_;
+    }
+
+    /** return the availability array */
+    uint8_t* get() { return avail_; }
+
+    /** returns the availability of a single bin */
+    uint8_t get(const bin_t bin);
+
+    /** set/update the availability */
+    void set(uint32_t channel_id, binmap_t& binmap, bin_t target);
+
+    /** removes the binmap of leaving peers */
+    void remove(uint32_t channel_id, binmap_t& binmap);
+
+    /** returns the size of the availability tree */
+    int size() { return size_; }
+
+    /** sets the size of the availability tree once we know the size of the file */
+    void setSize(uint64_t size);
+
+    /** sets a binmap */
+    void setBinmap(binmap_t *binmap);
+
+    /** get rarest bin, of specified width, within a range */
+    bin_t getRarest(const bin_t range, int width);
+
+    /** Echo the availability status to stdout */
+    void status() const;
+
+  protected:
+    uint8_t *avail_;
+    uint64_t size_;
+    // a list of incoming have msgs; those are saved only if the file size is still unknown
+    // TODO fix... set it depending on the # of channels * something
+    WaitingPeers waiting_peers_;
+    //binmap_t *waiting_[20];
+
+
+
+    /** removes the binmap */
+    void removeBinmap(binmap_t& binmap);
+
+    /** removes the bin */
+    void removeBin(bin_t bin);
+
+    /** sets a bin */
+    void setBin(bin_t bin);
+
+};
+
+}
+
+#endif
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/avgspeed.cpp tribler-6.2.0/Tribler/SwiftEngine/avgspeed.cpp
--- tribler-6.2.0/Tribler/SwiftEngine/avgspeed.cpp	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/avgspeed.cpp	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,64 @@
+/*
+ *  avgspeed.cpp
+ *  Class to compute moving average speed
+ *
+ *  Created by Arno Bakker
+ *  Copyright 2009 Delft University of Technology. All rights reserved.
+ *
+ */
+#include "avgspeed.h"
+
+using namespace swift;
+
+MovingAverageSpeed::MovingAverageSpeed(tint speed_interval, tint fudge)
+{
+    speed_interval_ = speed_interval;
+    fudge_ = fudge;
+    t_start_ = usec_time() - fudge_;
+    t_end_ = t_start_;
+    speed_ = 0.0;
+    resetstate_ = false;
+}
+
+
+void MovingAverageSpeed::AddPoint(uint64_t amount)
+{
+    // Arno, 2012-01-04: Resetting this measurement includes not adding
+    // points for a few seconds after the reset, to accommodate the case
+    // of going from high speed to low speed and content still coming in.
+ // + if (resetstate_) { + if ((t_start_ + speed_interval_/2) > usec_time()) { + return; + } + resetstate_ = false; + } + + tint t = usec_time(); + speed_ = (speed_ * ((double)(t_end_ - t_start_)/((double)TINT_SEC)) + (double)amount) / ((t - t_start_)/((double)TINT_SEC) + 0.0001); + t_end_ = t; + if (t_start_ < t - speed_interval_) + t_start_ = t - speed_interval_; +} + + +double MovingAverageSpeed::GetSpeed() +{ + AddPoint(0); + return speed_; +} + + +double MovingAverageSpeed::GetSpeedNeutral() +{ + return speed_; +} + + +void MovingAverageSpeed::Reset() +{ + resetstate_ = true; + t_start_ = usec_time() - fudge_; + t_end_ = t_start_; + speed_ = 0.0; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/avgspeed.h tribler-6.2.0/Tribler/SwiftEngine/avgspeed.h --- tribler-6.2.0/Tribler/SwiftEngine/avgspeed.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/avgspeed.h 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,36 @@ +/* + * avgspeed.h + * Class to compute moving average speed + * + * Created by Arno Bakker + * Copyright 2009 Delft University of Technology. All rights reserved. + * + */ +#include "compat.h" + +#ifndef AVGSPEED_H +#define AVGSPEED_H + +namespace swift { + + +class MovingAverageSpeed +{ + public: + MovingAverageSpeed( tint speed_interval = 5 * TINT_SEC, tint fudge = TINT_SEC ); + void AddPoint( uint64_t amount ); + double GetSpeed(); + double GetSpeedNeutral(); + void Reset(); + protected: + tint speed_interval_; + tint t_start_; + tint t_end_; + double speed_; + tint fudge_; + bool resetstate_; +}; + +} + +#endif diff -Nru tribler-6.2.0/Tribler/SwiftEngine/bin.cpp tribler-6.2.0/Tribler/SwiftEngine/bin.cpp --- tribler-6.2.0/Tribler/SwiftEngine/bin.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/bin.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,169 @@ +/* + * bin.cpp + * swift + * + * Created by Victor Grishchenko on 10/10/09. + * Reimplemented by Alexander G. Pronchenkov on 05/05/10 + * + * Copyright 2010 Delft University of Technology. All rights reserved. 
+ *
+ */
+
+#include "bin.h"
+#include <iostream>
+
+
+const bin_t bin_t::NONE(8 * sizeof(bin_t::uint_t), 0);
+const bin_t bin_t::ALL(8 * sizeof(bin_t::uint_t) - 1, 0);
+
+
+/* Methods */
+
+/**
+ * Gets the layer value of a bin
+ */
+int bin_t::layer(void) const
+{
+    if (is_none()) {
+        return -1;
+    }
+
+    int r = 0;
+
+#ifdef _MSC_VER
+#  pragma warning (push)
+#  pragma warning (disable:4146)
+#endif
+    register uint_t tail;
+    tail = v_ + 1;
+    tail = tail & (-tail);
+#ifdef _MSC_VER
+#  pragma warning (pop)
+#endif
+
+    if (tail > 0x80000000U) {
+        r = 32;
+        tail >>= 16;    // FIXME: hide warning
+        tail >>= 16;
+    }
+
+    // courtesy of Sean Eron Anderson
+    // http://graphics.stanford.edu/~seander/bithacks.html
+    static const char DeBRUIJN[32] = { 0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8, 31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9 };
+
+    return r + DeBRUIJN[ 0x1f & ((tail * 0x077CB531U) >> 27) ];
+}
+
+/* String operations */
+
+namespace {
+
+char* append(char* buf, int x)
+{
+    char* l = buf;
+    char* r = buf;
+
+    if (x < 0) {
+        *r++ = '-';
+        x = -x;
+    }
+
+    do {
+        *r++ = '0' + x % 10;
+        x /= 10;
+    } while (x);
+
+    char* e = r--;
+
+    while (l < r) {
+        const char t = *l;
+        *l++ = *r;
+        *r-- = t;
+    }
+
+    *e = '\0';
+
+    return e;
+}
+
+char* append(char* buf, bin_t::uint_t x)
+{
+    char* l = buf;
+    char* r = buf;
+
+    do {
+        *r++ = '0' + x % 10;
+        x /= 10;
+    } while (x);
+
+    char* e = r--;
+
+    while (l < r) {
+        const char t = *l;
+        *l++ = *r;
+        *r-- = t;
+    }
+
+    *e = '\0';
+
+    return e;
+}
+
+char* append(char* buf, const char* s)
+{
+    char* e = buf;
+
+    while (*s) {
+        *e++ = *s++;
+    }
+
+    *e = '\0';
+
+    return e;
+}
+
+char* append(char* buf, char c)
+{
+    char* e = buf;
+
+    *e++ = c;
+    *e = '\0';
+
+    return e;
+}
+
+} /* namespace */
+
+
+/**
+ * Get the standard-form of this bin, e.g. "(2,1)".
+ * (buffer should have enough space)
+ */
+const char* bin_t::str(char* buf) const
+{
+    char* e = buf;
+
+    if (is_all()) {
+        e = append(e, "(ALL)");
+    } else if (is_none()) {
+        e = append(e, "(NONE)");
+    } else {
+        e = append(e, '(');
+        e = append(e, layer());
+        e = append(e, ',');
+        e = append(e, layer_offset());
+        e = append(e, ')');
+    }
+
+    return buf;
+}
+
+
+/**
+ * Output operator
+ */
+std::ostream & operator << (std::ostream & ostream, const bin_t & bin)
+{
+    char bin_name_buf[64];
+    return ostream << bin.str(bin_name_buf);
+}
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/bin.h tribler-6.2.0/Tribler/SwiftEngine/bin.h
--- tribler-6.2.0/Tribler/SwiftEngine/bin.h	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/bin.h	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,782 @@
+/*
+ *  bin.h
+ *  swift
+ *
+ *  Created by Victor Grishchenko on 10/10/09.
+ *  Reimplemented by Alexander G. Pronchenkov on 05/05/10
+ *
+ *  Copyright 2010 Delft University of Technology. All rights reserved.
+ *
+ */
+
+#ifndef __bin_h__
+#define __bin_h__
+
+#include <iosfwd>
+
+
+/**
+ * Numbering for (aligned) logarithmic bins.
+ *
+ * Each number stands for an interval
+ *   [layer_offset * 2^layer, (layer_offset + 1) * 2^layer).
+ *
+ * The value layer_offset * 2^layer is called the base_offset.
+ *
+ * Bin numbers in the tail111 encoding: meaningless bits in
+ * the tail are set to 0111...11, while the head denotes the offset.
+ *   bin = 2 ^ (layer + 1) * layer_offset + 2 ^ layer - 1
+ *
+ * Thus, 1101 is the bin at layer 1, offset 3 (i.e. fourth).
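+ *
+ * For example: layer 2, offset 1 gives bin = 2^3 * 1 + 2^2 - 1 = 11
+ * (binary 01011). Its base_offset is 1 * 2^2 = 4, so it stands for the
+ * interval [4, 8) -- the 01011 subtree in the pictures below.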
+ */ + +/** + * + * +-----------------00111-----------------+ + * | | + * +-------00011-------+ +-------01011-------+ + * | | | | + * +--00001--+ +--00101--+ +--01001--+ +--01101--+ + * | | | | | | | | + * 00000 00010 00100 00110 01000 01010 01100 1110 + * + * + * + * 7 + * / \ + * 3 11 + * / \ / \ + * 1 5 9 13 + * / \ / \ / \ / \ + * 0 2 4 6 8 10 12 14 + * + * Once we have peak hashes, this structure is more natural than bin-v1 + * + */ + +class bin_t { +public: + /** + * Basic integer type + */ + typedef unsigned long long uint_t; + + + /** + * Constants + */ + static const bin_t NONE; + static const bin_t ALL; + + + /** + * Constructor + */ + bin_t(void); + + + /** + * Constructor + */ + explicit bin_t(uint_t val); + + + /** + * Constructor + */ + bin_t(int layer, uint_t layer_offset); + + + /** + * Gets the bin value + */ + uint_t toUInt(void) const; + + + /** + * Operator equal + */ + bool operator == (const bin_t& bin) const; + + + /** + * Operator non-equal + */ + bool operator != (const bin_t& bin) const; + + + /** + * Operator less than + */ + bool operator < (const bin_t& bin) const; + + + /** + * Operator greater than + */ + bool operator > (const bin_t& bin) const; + + + /** + * Operator less than or equal + */ + bool operator <= (const bin_t& bin) const; + + + /** + * Operator greater than or equal + */ + bool operator >= (const bin_t& bin) const; + + /** + * Decompose the bin + */ + void decompose(int* layer, uint_t* layer_offset) const; + + + /** + * Gets the beginning of the bin(ary interval) + */ + uint_t base_offset(void) const; + + + /** + * Gets the length of the bin interval + */ + uint_t base_length(void) const; + + + /** + * Gets the bin's layer, i.e. log2(base_length) + */ + int layer(void) const; + + + /** + * Gets the bin layer bits + */ + uint_t layer_bits(void) const; + + + /** + * Gets the bin layer offset + */ + uint_t layer_offset(void) const; + + + /** + * Whether the bin is none + */ + bool is_none(void) const; + + + /** + * Whether the bin is all + */ + bool is_all(void) const; + + + /** + * Whether the bin is base (layer == 0) + */ + bool is_base(void) const; + + + /** + * Checks whether is bin is a left child + */ + bool is_left(void) const; + + + /** + * Checks whether is bin is a left child + */ + bool is_right(void) const; + + + /** + * Sets this object to the parent + */ + bin_t& to_parent(void); + + + /** + * Sets this object to the left child + */ + bin_t& to_left(void); + + + /** + * Sets this object to the right child + */ + bin_t& to_right(void); + + + /** + * Sets this object to the sibling + */ + bin_t& to_sibling(void); + + + /** + * Sets this object to the leftmost base sub-bin + */ + bin_t& to_base_left(void); + + + /** + * Sets this object to the rightmost base sub-bin + */ + bin_t& to_base_right(void); + + + /** + * Sets this object to the permutated state + */ + bin_t& to_twisted(uint_t mask); + + + /** + * Sets this object to a layer shifted state + */ + bin_t& to_layer_shifted(int zlayer); + + + /** + * Gets the parent bin + */ + bin_t parent(void) const; + + + /** + * Gets the left child + */ + bin_t left(void) const; + + + /** + * Gets the right child + */ + bin_t right(void) const; + + + /** + * Gets the sibling bin + */ + bin_t sibling(void) const; + + + /** + * Gets the leftmost base sub-bin + */ + bin_t base_left(void) const; + + + /** + * Gets the rightmost base sub-bin + */ + bin_t base_right(void) const; + + + /** + * Performs a permutation + */ + bin_t twisted(uint_t mask) const; + + + /** + * Gets the bin after a layer 
+class bin_t {
+public:
+    /** Basic integer type */
+    typedef unsigned long long uint_t;
+
+    /** Constants */
+    static const bin_t NONE;
+    static const bin_t ALL;
+
+    /** Constructor */
+    bin_t(void);
+
+    /** Constructor */
+    explicit bin_t(uint_t val);
+
+    /** Constructor */
+    bin_t(int layer, uint_t layer_offset);
+
+    /** Gets the bin value */
+    uint_t toUInt(void) const;
+
+    /** Operator equal */
+    bool operator == (const bin_t& bin) const;
+
+    /** Operator non-equal */
+    bool operator != (const bin_t& bin) const;
+
+    /** Operator less than */
+    bool operator < (const bin_t& bin) const;
+
+    /** Operator greater than */
+    bool operator > (const bin_t& bin) const;
+
+    /** Operator less than or equal */
+    bool operator <= (const bin_t& bin) const;
+
+    /** Operator greater than or equal */
+    bool operator >= (const bin_t& bin) const;
+
+    /** Decompose the bin */
+    void decompose(int* layer, uint_t* layer_offset) const;
+
+    /** Gets the beginning of the bin(ary interval) */
+    uint_t base_offset(void) const;
+
+    /** Gets the length of the bin interval */
+    uint_t base_length(void) const;
+
+    /** Gets the bin's layer, i.e. log2(base_length) */
+    int layer(void) const;
+
+    /** Gets the bin layer bits */
+    uint_t layer_bits(void) const;
+
+    /** Gets the bin layer offset */
+    uint_t layer_offset(void) const;
+
+    /** Whether the bin is none */
+    bool is_none(void) const;
+
+    /** Whether the bin is all */
+    bool is_all(void) const;
+
+    /** Whether the bin is base (layer == 0) */
+    bool is_base(void) const;
+
+    /** Checks whether the bin is a left child */
+    bool is_left(void) const;
+
+    /** Checks whether the bin is a right child */
+    bool is_right(void) const;
+
+    /** Sets this object to the parent */
+    bin_t& to_parent(void);
+
+    /** Sets this object to the left child */
+    bin_t& to_left(void);
+
+    /** Sets this object to the right child */
+    bin_t& to_right(void);
+
+    /** Sets this object to the sibling */
+    bin_t& to_sibling(void);
+
+    /** Sets this object to the leftmost base sub-bin */
+    bin_t& to_base_left(void);
+
+    /** Sets this object to the rightmost base sub-bin */
+    bin_t& to_base_right(void);
+
+    /** Sets this object to the permuted state */
+    bin_t& to_twisted(uint_t mask);
+
+    /** Sets this object to a layer-shifted state */
+    bin_t& to_layer_shifted(int zlayer);
+
+    /** Gets the parent bin */
+    bin_t parent(void) const;
+
+    /** Gets the left child */
+    bin_t left(void) const;
+
+    /** Gets the right child */
+    bin_t right(void) const;
+
+    /** Gets the sibling bin */
+    bin_t sibling(void) const;
+
+    /** Gets the leftmost base sub-bin */
+    bin_t base_left(void) const;
+
+    /** Gets the rightmost base sub-bin */
+    bin_t base_right(void) const;
+
+    /** Performs a permutation */
+    bin_t twisted(uint_t mask) const;
+
+    /** Gets the bin after layer shifting */
+    bin_t layer_shifted(int zlayer) const;
+
+    /** Checks whether this bin contains the given bin */
+    bool contains(const bin_t& bin) const;
+
+    /**
+     * Get the standard form of this bin, e.g. "(2,1)".
+     * (The buffer must be large enough.)
+     */
+    const char* str(char* buf) const;
+
+
+private:
+
+    /** Bin value */
+    uint_t v_;
+};
+
+
+/** Output operator */
+std::ostream & operator << (std::ostream & ostream, const bin_t & bin);
+
+
+/** Constructor */
+inline bin_t::bin_t(void)
+{ }
+
+
+/** Constructor */
+inline bin_t::bin_t(uint_t val)
+    : v_(val)
+{ }
+
+
+/** Constructor */
+inline bin_t::bin_t(int layer, uint_t offset)
+{
+    if (static_cast<unsigned int>(layer) < 8 * sizeof(uint_t)) {
+        v_ = ((2 * offset + 1) << layer) - 1;
+    } else {
+        v_ = static_cast<uint_t>(-1);    // Definition of the NONE bin
+    }
+}
+
+
+/** Gets the bin value */
+inline bin_t::uint_t bin_t::toUInt(void) const
+{
+    return v_;
+}
+
+
+/** Operator equal */
+inline bool bin_t::operator == (const bin_t& bin) const
+{
+    return v_ == bin.v_;
+}
+
+
+/** Operator non-equal */
+inline bool bin_t::operator != (const bin_t& bin) const
+{
+    return v_ != bin.v_;
+}
+
+
+/** Operator less than */
+inline bool bin_t::operator < (const bin_t& bin) const
+{
+    return v_ < bin.v_;
+}
+
+
+/** Operator greater than */
+inline bool bin_t::operator > (const bin_t& bin) const
+{
+    return v_ > bin.v_;
+}
+
+
+/** Operator less than or equal */
+inline bool bin_t::operator <= (const bin_t& bin) const
+{
+    return v_ <= bin.v_;
+}
+
+
+/** Operator greater than or equal */
+inline bool bin_t::operator >= (const bin_t& bin) const
+{
+    return v_ >= bin.v_;
+}
+
+
+/** Decompose the bin */
+inline void bin_t::decompose(int* layer, uint_t* layer_offset) const
+{
+    const int l = this->layer();
+    if (layer) {
+        *layer = l;
+    }
+    if (layer_offset) {
+        *layer_offset = v_ >> (l + 1);
+    }
+}
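A worked instance of the constructor and decompose(), for orientation
(illustrative only):

    bin_t b(2, 1);                   // v_ = ((2*1 + 1) << 2) - 1 = 11
    int layer;
    bin_t::uint_t offset;
    b.decompose(&layer, &offset);    // layer == 2, offset == 1
    // b.base_offset() == 4, b.base_length() == 4, i.e. chunks [4, 8)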
+/** Gets the beginning of the bin interval */
+inline bin_t::uint_t bin_t::base_offset(void) const
+{
+    return (v_ & (v_ + 1)) >> 1;
+}
+
+
+/** Gets the length of the bin interval */
+inline bin_t::uint_t bin_t::base_length(void) const
+{
+#ifdef _MSC_VER
+#pragma warning (push)
+#pragma warning (disable:4146)
+#endif
+    const uint_t t = v_ + 1;
+    return t & -t;
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+}
+
+
+/** Gets the layer bits */
+inline bin_t::uint_t bin_t::layer_bits(void) const
+{
+    return v_ ^ (v_ + 1);
+}
+
+
+/** Gets the offset value of a bin */
+inline bin_t::uint_t bin_t::layer_offset(void) const
+{
+    return v_ >> (layer() + 1);
+}
+
+
+/** Whether the bin is none */
+inline bool bin_t::is_none(void) const
+{
+    return *this == NONE;
+}
+
+
+/** Whether the bin is all */
+inline bool bin_t::is_all(void) const
+{
+    return *this == ALL;
+}
+
+
+/** Checks whether the bin is base (layer == 0) */
+inline bool bin_t::is_base(void) const
+{
+    return !(v_ & 1);
+}
+
+
+/** Checks whether the bin is a left child */
+inline bool bin_t::is_left(void) const
+{
+    return !(v_ & (layer_bits() + 1));
+}
+
+
+/** Checks whether the bin is a right child */
+inline bool bin_t::is_right(void) const
+{
+    return !is_left();
+}
+
+
+/** Sets this object to the parent */
+inline bin_t& bin_t::to_parent(void)
+{
+    const uint_t lbs = layer_bits();
+    const uint_t nlbs = -2 - lbs;    /* ~(lbs + 1) */
+
+    v_ = (v_ | lbs) & nlbs;
+
+    return *this;
+}
+
+
+/** Sets this object to the left child */
+inline bin_t& bin_t::to_left(void)
+{
+    register uint_t t;
+
+#ifdef _MSC_VER
+#pragma warning (push)
+#pragma warning (disable:4146)
+#endif
+    t = v_ + 1;
+    t &= -t;
+    t >>= 1;
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+
+//    if (t == 0) {
+//        return NONE;
+//    }
+
+    v_ ^= t;
+
+    return *this;
+}
+
+
+/** Sets this object to the right child */
+inline bin_t& bin_t::to_right(void)
+{
+    register uint_t t;
+
+#ifdef _MSC_VER
+#pragma warning (push)
+#pragma warning (disable:4146)
+#endif
+    t = v_ + 1;
+    t &= -t;
+    t >>= 1;
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+
+//    if (t == 0) {
+//        return NONE;
+//    }
+
+    v_ += t;
+
+    return *this;
+}
+
+
+/** Sets this object to the sibling */
+inline bin_t& bin_t::to_sibling(void)
+{
+    v_ ^= (v_ ^ (v_ + 1)) + 1;
+
+    return *this;
+}
+
+
+/** Sets this object to the leftmost base sub-bin */
+inline bin_t& bin_t::to_base_left(void)
+{
+    if (!is_none()) {
+        v_ &= (v_ + 1);
+    }
+
+    return *this;
+}
+
+
+/** Sets this object to the rightmost base sub-bin */
+inline bin_t& bin_t::to_base_right(void)
+{
+    if (!is_none()) {
+        v_ = (v_ | (v_ + 1)) - 1;
+    }
+
+    return *this;
+}
+
+
+/** Performs a permutation */
+inline bin_t& bin_t::to_twisted(uint_t mask)
+{
+    v_ ^= ((mask << 1) & ~layer_bits());
+
+    return *this;
+}
+
+
+/** Sets this object to a layer-shifted state */
+inline bin_t& bin_t::to_layer_shifted(int zlayer)
+{
+    if (layer_bits() >> zlayer) {
+        v_ >>= zlayer;
+    } else {
+        v_ = (v_ >> zlayer) & ~static_cast<uint_t>(1);
+    }
+
+    return *this;
+}
+
+
+/** Gets the parent bin */
+inline bin_t bin_t::parent(void) const
+{
+    const uint_t lbs = layer_bits();
+    const uint_t nlbs = -2 - lbs;    /* ~(lbs + 1) */
+
+    return bin_t((v_ | lbs) & nlbs);
+}
+
+
+/** Gets the left child */
+inline bin_t bin_t::left(void) const
+{
+    register uint_t t;
+
+#ifdef _MSC_VER
+#pragma warning (push)
+#pragma warning (disable:4146)
+#endif
+    t = v_ + 1;
+    t &= -t;
+    t >>= 1;
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+
+//    if (t == 0) {
+//        return NONE;
+//    }
+
+    return bin_t(v_ ^ t);
+}
+
+
+/** Gets the right child */
+inline bin_t bin_t::right(void) const
+{
+    register uint_t t;
+
+#ifdef _MSC_VER
+#pragma warning (push)
+#pragma warning (disable:4146)
+#endif
+    t = v_ + 1;
+    t &= -t;
+    t >>= 1;
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+
+//    if (t == 0) {
+//        return NONE;
+//    }
+
+    return bin_t(v_ + t);
+}
+
+
+/** Gets the sibling bin */
+inline bin_t bin_t::sibling(void) const
+{
+    return bin_t(v_ ^ (layer_bits() + 1));
+}
+
+
+/** Gets the leftmost base sub-bin */
+inline bin_t bin_t::base_left(void) const
+{
+    if (is_none()) {
+        return NONE;
+    }
+
+    return bin_t(v_ & (v_ + 1));
+}
+
+
+/** Gets the rightmost base sub-bin */
+inline bin_t bin_t::base_right(void) const
+{
+    if (is_none()) {
+        return NONE;
+    }
+
+    return bin_t((v_ | (v_ + 1)) - 1);
+}
+
+
+/** Performs a permutation */
+inline bin_t bin_t::twisted(uint_t mask) const
+{
+    return bin_t( v_ ^ ((mask << 1) & ~layer_bits()) );
+}
+
+
+/** Gets the bin after layer shifting */
+inline bin_t bin_t::layer_shifted(int zlayer) const
+{
+    if (layer_bits() >> zlayer) {
+        return bin_t( v_ >> zlayer );
+    } else {
+        return bin_t( (v_ >> zlayer) & ~static_cast<uint_t>(1) );
+    }
+}
+
+
+/** Checks whether this bin contains the given bin */
+inline bool bin_t::contains(const bin_t& bin) const
+{
+    if (is_none()) {
+        return false;
+    }
+
+    return (v_ & (v_ + 1)) <= bin.v_ && bin.v_ < (v_ | (v_ + 1));
+}
+
+
+#endif /*__bin_h__*/
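The navigation methods above are plain bit tricks on the tail111 value; a
small sanity sketch (not part of the patch):

    bin_t b(0, 5);               // v_ == 10, a base bin (chunk 5)
    // b.parent()  == bin_t(9)   -- covers chunks 4..5
    // b.sibling() == bin_t(8)   -- chunk 4
    // bin_t(9).to_base_right() == bin_t(10) == b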
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/bin_utils.h tribler-6.2.0/Tribler/SwiftEngine/bin_utils.h
--- tribler-6.2.0/Tribler/SwiftEngine/bin_utils.h	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/bin_utils.h	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,88 @@
+#ifndef __bin_utils_h__
+#define __bin_utils_h__
+
+#include "bin.h"
+#include "compat.h"
+
+
+/**
+ * Generate the list of peak bins for the given length
+ */
+inline int gen_peaks(uint64_t length, bin_t * peaks) {
+    int pp = 0;
+    uint8_t layer = 0;
+
+    while (length) {
+        if (length & 1)
+            peaks[pp++] = bin_t(((2 * length - 1) << layer) - 1);
+        length >>= 1;
+        layer++;
+    }
+
+    for(int i = 0; i < (pp >> 1); ++i) {
+        bin_t memo = peaks[pp - 1 - i];
+        peaks[pp - 1 - i] = peaks[i];
+        peaks[i] = memo;
+    }
+
+    peaks[pp] = bin_t::NONE;
+    return pp;
+}
+
+
+/**
+ * Check whether the bin value fits in a uint32_t
+ */
+inline bool bin_isUInt32(const bin_t & bin) {
+    if( bin.is_all() )
+        return true;
+    if( bin.is_none() )
+        return true;
+
+    const uint64_t v = bin.toUInt();
+
+    return static_cast<uint32_t>(v) == v && v != 0xffffffff && v != 0x7fffffff;
+}
+
+
+/**
+ * Convert the bin value to uint32_t
+ */
+inline uint32_t bin_toUInt32(const bin_t & bin) {
+    if( bin.is_all() )
+        return 0x7fffffff;
+    if( bin.is_none() )
+        return 0xffffffff;
+    return static_cast<uint32_t>(bin.toUInt());
+}
+
+
+/**
+ * Convert the bin value to uint64_t
+ */
+inline uint64_t bin_toUInt64(const bin_t & bin) {
+    return bin.toUInt();
+}
+
+
+/**
+ * Restore the bin from a uint32_t value
+ */
+inline bin_t bin_fromUInt32(uint32_t v) {
+    if( v == 0x7fffffff )
+        return bin_t::ALL;
+    if( v == 0xffffffff )
+        return bin_t::NONE;
+    return bin_t(static_cast<bin_t::uint_t>(v));
+}
+
+
+/**
+ * Restore the bin from a uint64_t value
+ */
+inline bin_t bin_fromUInt64(uint64_t v) {
+    return bin_t(static_cast<bin_t::uint_t>(v));
+}
+
+
+#endif /*__bin_utils_h__*/
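gen_peaks() follows the binary expansion of the length; a worked sketch
(illustrative values only):

    bin_t peaks[64];
    int n = gen_peaks(11, peaks);    // 11 chunks = 8 + 2 + 1, so n == 3
    // peaks[0] == bin_t(3, 0)   -> chunks [0, 8)
    // peaks[1] == bin_t(1, 4)   -> chunks [8, 10)
    // peaks[2] == bin_t(0, 10)  -> chunk 10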
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/binmap.cpp tribler-6.2.0/Tribler/SwiftEngine/binmap.cpp
--- tribler-6.2.0/Tribler/SwiftEngine/binmap.cpp	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/binmap.cpp	2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,2185 @@
+#include <cassert>
+#include <cstring>
+#include <cstdio>
+#include <cstdlib>
+#include <algorithm>
+
+#include <utility>
+
+
+#include "binmap.h"
+
+using namespace swift;
+
+namespace swift {
+
+inline size_t _max_(const size_t x, const size_t y)
+{
+    return x < y ? y : x;
+}
+
+typedef binmap_t::ref_t ref_t;
+typedef binmap_t::bitmap_t bitmap_t;
+
+/* Bitmap constants */
+const bitmap_t BITMAP_EMPTY  = static_cast<bitmap_t>(0);
+const bitmap_t BITMAP_FILLED = static_cast<bitmap_t>(-1);
+
+const bin_t::uint_t BITMAP_LAYER_BITS = 2 * 8 * sizeof(bitmap_t) - 1;
+
+const ref_t ROOT_REF = 0;
+
+#ifdef _MSC_VER
+#  pragma warning (push)
+#  pragma warning ( disable:4309 )
+#endif
+
+const bitmap_t BITMAP[] = {
+    static_cast<bitmap_t>(0x00000001), static_cast<bitmap_t>(0x00000003),
+    static_cast<bitmap_t>(0x00000002), static_cast<bitmap_t>(0x0000000f),
+    static_cast<bitmap_t>(0x00000004), static_cast<bitmap_t>(0x0000000c),
+    static_cast<bitmap_t>(0x00000008), static_cast<bitmap_t>(0x000000ff),
+    static_cast<bitmap_t>(0x00000010), static_cast<bitmap_t>(0x00000030),
+    static_cast<bitmap_t>(0x00000020), static_cast<bitmap_t>(0x000000f0),
+    static_cast<bitmap_t>(0x00000040), static_cast<bitmap_t>(0x000000c0),
+    static_cast<bitmap_t>(0x00000080), static_cast<bitmap_t>(0x0000ffff),
+    static_cast<bitmap_t>(0x00000100), static_cast<bitmap_t>(0x00000300),
+    static_cast<bitmap_t>(0x00000200), static_cast<bitmap_t>(0x00000f00),
+    static_cast<bitmap_t>(0x00000400), static_cast<bitmap_t>(0x00000c00),
+    static_cast<bitmap_t>(0x00000800), static_cast<bitmap_t>(0x0000ff00),
+    static_cast<bitmap_t>(0x00001000), static_cast<bitmap_t>(0x00003000),
+    static_cast<bitmap_t>(0x00002000), static_cast<bitmap_t>(0x0000f000),
+    static_cast<bitmap_t>(0x00004000), static_cast<bitmap_t>(0x0000c000),
+    static_cast<bitmap_t>(0x00008000), static_cast<bitmap_t>(0xffffffff),
+    static_cast<bitmap_t>(0x00010000), static_cast<bitmap_t>(0x00030000),
+    static_cast<bitmap_t>(0x00020000), static_cast<bitmap_t>(0x000f0000),
+    static_cast<bitmap_t>(0x00040000), static_cast<bitmap_t>(0x000c0000),
+    static_cast<bitmap_t>(0x00080000), static_cast<bitmap_t>(0x00ff0000),
+    static_cast<bitmap_t>(0x00100000), static_cast<bitmap_t>(0x00300000),
+    static_cast<bitmap_t>(0x00200000), static_cast<bitmap_t>(0x00f00000),
+    static_cast<bitmap_t>(0x00400000), static_cast<bitmap_t>(0x00c00000),
+    static_cast<bitmap_t>(0x00800000), static_cast<bitmap_t>(0xffff0000),
+    static_cast<bitmap_t>(0x01000000), static_cast<bitmap_t>(0x03000000),
+    static_cast<bitmap_t>(0x02000000), static_cast<bitmap_t>(0x0f000000),
+    static_cast<bitmap_t>(0x04000000), static_cast<bitmap_t>(0x0c000000),
+    static_cast<bitmap_t>(0x08000000), static_cast<bitmap_t>(0xff000000),
+    static_cast<bitmap_t>(0x10000000), static_cast<bitmap_t>(0x30000000),
+    static_cast<bitmap_t>(0x20000000), static_cast<bitmap_t>(0xf0000000),
+    static_cast<bitmap_t>(0x40000000), static_cast<bitmap_t>(0xc0000000),
+    static_cast<bitmap_t>(0x80000000), /* special */ static_cast<bitmap_t>(0xffffffff) /* special */
+};
+
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
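BITMAP[] maps the low bits of a bin number (v & BITMAP_LAYER_BITS) to the
bitmap of base chunks that bin covers within one 32-chunk cell half, e.g.
(values read off the table above):

    // BITMAP[bin_t(0, 3).toUInt() & BITMAP_LAYER_BITS] == 0x00000008
    // BITMAP[bin_t(1, 1).toUInt() & BITMAP_LAYER_BITS] == 0x0000000c
    // BITMAP[bin_t(2, 0).toUInt() & BITMAP_LAYER_BITS] == 0x0000000f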
+
+/**
+ * Get the leftmost bin that corresponds to the bitmap (the bin is filled in the bitmap)
+ */
+bin_t::uint_t bitmap_to_bin(register bitmap_t b)
+{
+    static const unsigned char BITMAP_TO_BIN[] = {
+        0xff, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        8, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        10, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        9, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        12, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        8, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        10, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        9, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        14, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        8, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        10, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        9, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        13, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        8, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        10, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 3,
+        11, 0, 2, 1, 4, 0, 2, 1, 6, 0, 2, 1, 5, 0, 2, 7
+    };
+
+    assert (sizeof(bitmap_t) <= 4);
+    assert (b != BITMAP_EMPTY);
+
+    unsigned char t;
+
+    t = BITMAP_TO_BIN[ b & 0xff ];
+    if (t < 16) {
+        if (t != 7) {
+            return static_cast<bin_t::uint_t>(t);
+        }
+
+        b += 1;
+        b &= -b;
+        if (0 == b) {
+            return BITMAP_LAYER_BITS / 2;
+        }
+        if (0 == (b & 0xffff)) {
+            return 15;
+        }
+        return 7;
+    }
+
+    b >>= 8;
+    t = BITMAP_TO_BIN[ b & 0xff ];
+    if (t <= 15) {
+        return 16 + t;
+    }
+
+    /* Recursion */
+    // return 32 + bitmap_to_bin( b >> 8 );
+
+    assert (sizeof(bitmap_t) == 4);
+
+    b >>= 8;
+    t = BITMAP_TO_BIN[ b & 0xff ];
+    if (t < 16) {
+        if (t != 7) {
+            return 32 + static_cast<bin_t::uint_t>(t);
+        }
+
+        b += 1;
+        b &= -b;
+        if (0 == (b & 0xffff)) {
+            return 47;
+        }
+        return 39;
+    }
+
+    b >>= 8;
+    return 48 + BITMAP_TO_BIN[ b & 0xff ];
+}
+
+
+/**
+ * Get the leftmost bin that corresponds to the bitmap (the bin is filled in the bitmap)
+ */
+bin_t bitmap_to_bin(const bin_t& bin, const bitmap_t bitmap)
+{
+    assert (bitmap != BITMAP_EMPTY);
+
+    if (bitmap == BITMAP_FILLED) {
+        return bin;
+    }
+
+    return bin_t(bin.base_left().toUInt() + bitmap_to_bin(bitmap));
+}
+
+} /* namespace */
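bitmap_to_bin() returns the leftmost filled bin in bin-number form; for
instance (checked against the lookup table above):

    // bitmap 0x000000f0: chunks 4..7 set
    // bitmap_to_bin(static_cast<bitmap_t>(0x000000f0)) == 11 == bin_t(2, 1)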
+
+
+/* Methods */
+
+
+/** Constructor */
+binmap_t::binmap_t()
+    : root_bin_(63)
+{
+    assert (sizeof(bitmap_t) <= 4);
+
+    cell_ = NULL;
+    cells_number_ = 0;
+    allocated_cells_number_ = 0;
+    free_top_ = ROOT_REF;
+
+    const ref_t root_ref = alloc_cell();
+
+    assert (root_ref == ROOT_REF && cells_number_ > 0);
+}
+
+
+/** Destructor */
+binmap_t::~binmap_t()
+{
+    if (cell_) {
+        free(cell_);
+    }
+}
+
+
+/** Allocates one cell (dirty allocation) */
+ref_t binmap_t::_alloc_cell()
+{
+    assert (allocated_cells_number_ < cells_number_);
+
+    /* Pop an element from the free cell list */
+    const ref_t ref = free_top_;
+    assert (cell_[ref].is_free_);
+
+    free_top_ = cell_[ ref ].free_next_;
+
+    assert (!(cell_[ ref ].is_free_ = false));    /* Reset flag in DEBUG */
+
+    ++allocated_cells_number_;
+
+    return ref;
+}
+
+
+/** Allocates one cell */
+ref_t binmap_t::alloc_cell()
+{
+    if (!reserve_cells(1)) {
+        return ROOT_REF /* MEMORY ERROR or OVERFLOW ERROR */;
+    }
+
+    const ref_t ref = _alloc_cell();
+
+    /* Cleans cell */
+    memset(&cell_[ref], 0, sizeof(cell_[0]));
+
+    return ref;
+}
+
+
+/** Reserve cell allocation capacity */
+bool binmap_t::reserve_cells(size_t count)
+{
+    if (cells_number_ - allocated_cells_number_ < count) {
+        /* Find the new size of the buffer */
+        const size_t old_cells_number = cells_number_;
+        const size_t new_cells_number = _max_(16U, _max_(2 * old_cells_number, allocated_cells_number_ + count));
+
+        /* Check for reference capacity */
+        if (static_cast<ref_t>(new_cells_number) < old_cells_number) {
+            fprintf(stderr, "Warning: binmap_t::reserve_cells: REFERENCE LIMIT ERROR\n");
+            return false /* REFERENCE LIMIT ERROR */;
+        }
+
+        /* Check for integer overflow */
+        static const size_t MAX_NUMBER = (static_cast<size_t>(-1) / sizeof(cell_[0]));
+        if (MAX_NUMBER < new_cells_number) {
+            fprintf(stderr, "Warning: binmap_t::reserve_cells: INTEGER OVERFLOW\n");
+            return false /* INTEGER OVERFLOW */;
+        }
+
+        /* Reallocate memory */
+        cell_t* const cell = static_cast<cell_t*>(realloc(cell_, new_cells_number * sizeof(cell_[0])));
+        if (cell == NULL) {
+            fprintf(stderr, "Warning: binmap_t::reserve_cells: MEMORY ERROR\n");
+            return false /* MEMORY ERROR */;
+        }
+
+        // Arno, 2012-09-13: Clear cells before use.
+        if (new_cells_number > cells_number_) {
+            for (size_t i = cells_number_; i < new_cells_number; i++) {
+                memset(&cell[i], 0, sizeof(cell_t));
+            }
+        }
+
+        /* Update the buffer */
+        cell_ = cell;
+        cells_number_ = new_cells_number;
+
+        /* Insert the new cells into the free cell list */
+        const size_t stop_idx = old_cells_number - 1;
+        size_t idx = new_cells_number - 1;
+
+        cell_[ idx ].is_free_ = true;
+        cell_[ idx ].free_next_ = free_top_;
+
+        for (--idx; idx != stop_idx; --idx) {
+            cell_[ idx ].is_free_ = true;
+            cell_[ idx ].free_next_ = static_cast<ref_t>(idx + 1);
+        }
+
+        free_top_ = static_cast<ref_t>(old_cells_number);
+    }
+
+    return true;
+}
+
+
+/** Releases the cell */
+void binmap_t::free_cell(ref_t ref)
+{
+    assert (ref > 0);
+    assert (!cell_[ref].is_free_);
+
+    if (cell_[ref].is_left_ref_) {
+        free_cell(cell_[ref].left_.ref_);
+    }
+    if (cell_[ref].is_right_ref_) {
+        free_cell(cell_[ref].right_.ref_);
+    }
+
+    assert ((cell_[ref].is_free_ = true));    /* Set flag in DEBUG */
+    cell_[ref].free_next_ = free_top_;
+
+    free_top_ = ref;
+
+    --allocated_cells_number_;
+}
+
+
+/** Extend the root */
+bool binmap_t::extend_root()
+{
+    assert (!root_bin_.is_all());
+
+    if (!cell_[ROOT_REF].is_left_ref_ && !cell_[ROOT_REF].is_right_ref_ && cell_[ROOT_REF].left_.bitmap_ == cell_[ROOT_REF].right_.bitmap_) {
+        /* Setup the root cell */
+        cell_[ROOT_REF].right_.bitmap_ = BITMAP_EMPTY;
+
+    } else {
+        /* Allocate new cell */
+        const ref_t ref = alloc_cell();
+        if (ref == ROOT_REF) {
+            return false /* ALLOC ERROR */;
+        }
+
+        /* Move old root to the cell */
+        cell_[ref] = cell_[ROOT_REF];
+
+        /* Setup new root */
+        cell_[ROOT_REF].is_left_ref_ = true;
+        cell_[ROOT_REF].is_right_ref_ = false;
+
+        cell_[ROOT_REF].left_.ref_ = ref;
+        cell_[ROOT_REF].right_.bitmap_ = BITMAP_EMPTY;
+    }
+
+    /* Reset bin */
+    root_bin_.to_parent();
+    return true;
+}
+
+
+/** Pack a trace of cells */
+void binmap_t::pack_cells(ref_t* href)
+{
+    ref_t ref = *href--;
+    if (ref == ROOT_REF) {
+        return;
+    }
+
+    if (cell_[ref].is_left_ref_ || cell_[ref].is_right_ref_ ||
+        cell_[ref].left_.bitmap_ != cell_[ref].right_.bitmap_) {
+        return;
+    }
+
+    const bitmap_t bitmap = cell_[ref].left_.bitmap_;
+
+    do {
+        ref = *href--;
+
+        if (!cell_[ref].is_left_ref_) {
+            if (cell_[ref].left_.bitmap_ != bitmap) {
+                break;
+            }
+
+        } else if (!cell_[ref].is_right_ref_) {
+            if (cell_[ref].right_.bitmap_ != bitmap) {
+                break;
+            }
+
+        } else {
+            break;
+        }
+
+    } while (ref != ROOT_REF);
+
+    const ref_t par_ref = href[2];
+
+    if (cell_[ref].is_left_ref_ && cell_[ref].left_.ref_ == par_ref) {
+        cell_[ref].is_left_ref_ = false;
+        cell_[ref].left_.bitmap_ = bitmap;
+    } else {
+        cell_[ref].is_right_ref_ = false;
+        cell_[ref].right_.bitmap_ = bitmap;
+    }
+
+    free_cell(par_ref);
+}
+
+
+/** Whether the binmap is empty */
+bool binmap_t::is_empty() const
+{
+    const cell_t& cell = cell_[ROOT_REF];
+
+    return !cell.is_left_ref_ && !cell.is_right_ref_ &&
+           cell.left_.bitmap_ == BITMAP_EMPTY && cell.right_.bitmap_ == BITMAP_EMPTY;
+}
+
+
+/** Whether the binmap is filled */
+bool binmap_t::is_filled() const
+{
+    const cell_t& cell = cell_[ROOT_REF];
+
+    return root_bin_.is_all() && !cell.is_left_ref_ && !cell.is_right_ref_ &&
+           cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED;
+}
+
+
+/** Whether the range/bin is empty */
+bool binmap_t::is_empty(const bin_t& bin) const
+{
+    /* Process hi-layers case */
+    if (!root_bin_.contains(bin)) {
+        return !bin.contains(root_bin_) || is_empty();
+    }
+
+    /* Trace the bin */
+    ref_t cur_ref;
+    bin_t cur_bin;
+
+    trace(&cur_ref, &cur_bin, bin);
+
+    assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS);
+
+    /* Process common case */
+    const cell_t& cell = cell_[cur_ref];
+
+    if (bin.layer_bits() > BITMAP_LAYER_BITS) {
+        if (bin < cur_bin) {
+            return cell.left_.bitmap_ == BITMAP_EMPTY;
+        }
+        if (cur_bin < bin) {
+            return cell.right_.bitmap_ == BITMAP_EMPTY;
+        }
+        return !cell.is_left_ref_ && !cell.is_right_ref_ &&
+               cell.left_.bitmap_ == BITMAP_EMPTY && cell.right_.bitmap_ ==
BITMAP_EMPTY; + } + + /* Process low-layers case */ + assert (bin != cur_bin); + + const bitmap_t bm1 = (bin < cur_bin) ? cell.left_.bitmap_ : cell.right_.bitmap_; + const bitmap_t bm2 = BITMAP[ BITMAP_LAYER_BITS & bin.toUInt() ]; + + return (bm1 & bm2) == BITMAP_EMPTY; +} + + +/** + * Whether range/bin is filled + */ +bool binmap_t::is_filled(const bin_t& bin) const +{ + /* Process hi-layers case */ + if (!root_bin_.contains(bin)) { + return false; + } + + /* Trace the bin */ + ref_t cur_ref; + bin_t cur_bin; + + trace(&cur_ref, &cur_bin, bin); + + assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS); + + /* Process common case */ + const cell_t& cell = cell_[cur_ref]; + + if (bin.layer_bits() > BITMAP_LAYER_BITS) { + if (bin < cur_bin) { + return cell.left_.bitmap_ == BITMAP_FILLED; + } + if (cur_bin < bin) { + return cell.right_.bitmap_ == BITMAP_FILLED; + } + return !cell.is_left_ref_ && !cell.is_right_ref_ && + cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED; + } + + /* Process low-layers case */ + assert (bin != cur_bin); + + const bitmap_t bm1 = (bin < cur_bin) ? cell.left_.bitmap_ : cell.right_.bitmap_; + const bitmap_t bm2 = BITMAP[ BITMAP_LAYER_BITS & bin.toUInt() ]; + + return (bm1 & bm2) == bm2; +} + + +/** + * Return the topmost solid bin which covers the specified bin + */ +bin_t binmap_t::cover(const bin_t& bin) const +{ + /* Process hi-layers case */ + if (!root_bin_.contains(bin)) { + if (!bin.contains(root_bin_)) { + return root_bin_.sibling(); + } + if (is_empty()) { + return bin_t::ALL; + } + return bin_t::NONE; + } + + /* Trace the bin */ + ref_t cur_ref; + bin_t cur_bin; + + trace(&cur_ref, &cur_bin, bin); + + assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS); + + /* Process common case */ + const cell_t& cell = cell_[cur_ref]; + + if (bin.layer_bits() > BITMAP_LAYER_BITS) { + if (bin < cur_bin) { + if (cell.left_.bitmap_ == BITMAP_EMPTY || cell.left_.bitmap_ == BITMAP_FILLED) { + return cur_bin.left(); + } + return bin_t::NONE; + } + if (cur_bin < bin) { + if (cell.right_.bitmap_ == BITMAP_EMPTY || cell.right_.bitmap_ == BITMAP_FILLED) { + return cur_bin.right(); + } + return bin_t::NONE; + } + if (cell.is_left_ref_ || cell.is_right_ref_) { + return bin_t::NONE; + } + if (cell.left_.bitmap_ != cell.right_.bitmap_) { + return bin_t::NONE; + } + assert (cur_bin == root_bin_); + if (cell.left_.bitmap_ == BITMAP_EMPTY) { + return bin_t::ALL; + } + if (cell.left_.bitmap_ == BITMAP_FILLED) { + return cur_bin; + } + return bin_t::NONE; + } + + /* Process low-layers case */ + assert (bin != cur_bin); + + bitmap_t bm1; + if (bin < cur_bin) { + bm1 = cell.left_.bitmap_; + cur_bin.to_left(); + } else { + bm1 = cell.right_.bitmap_; + cur_bin.to_right(); + } + + if (bm1 == BITMAP_EMPTY) { + if (is_empty()) { + return bin_t::ALL; + } + return cur_bin; + } + if (bm1 == BITMAP_FILLED) { + if (is_filled()) { + return bin_t::ALL; + } + return cur_bin; + } + + /* Trace the bitmap */ + bin_t b = bin; + bitmap_t bm2 = BITMAP[ BITMAP_LAYER_BITS & b.toUInt() ]; + + if ((bm1 & bm2) == BITMAP_EMPTY) { + do { + cur_bin = b; + b.to_parent(); + bm2 = BITMAP[ BITMAP_LAYER_BITS & b.toUInt() ]; + } while ((bm1 & bm2) == BITMAP_EMPTY); + + return cur_bin; + + } else if ((bm1 & bm2) == bm2) { + do { + cur_bin = b; + b.to_parent(); + bm2 = BITMAP[ BITMAP_LAYER_BITS & b.toUInt() ]; + } while ((bm1 & bm2) == bm2); + + return cur_bin; + } + + return bin_t::NONE; +} + + +/** + * Find first empty bin + */ +bin_t binmap_t::find_empty() const +{ + /* Trace the bin */ + bitmap_t 
bitmap = BITMAP_FILLED; + + ref_t cur_ref; + bin_t cur_bin; + + do { + /* Processing the root */ + if (cell_[ROOT_REF].is_left_ref_) { + cur_ref = cell_[ROOT_REF].left_.ref_; + cur_bin = root_bin_.left(); + } else if (cell_[ROOT_REF].left_.bitmap_ != BITMAP_FILLED) { + if (cell_[ ROOT_REF].left_.bitmap_ == BITMAP_EMPTY) { + if (!cell_[ ROOT_REF].is_right_ref_ && cell_[ ROOT_REF ].right_.bitmap_ == BITMAP_EMPTY) { + return bin_t::ALL; + } + return root_bin_.left(); + } + bitmap = cell_[ROOT_REF].left_.bitmap_; + cur_bin = root_bin_.left(); + break; + } else if (cell_[ROOT_REF].is_right_ref_) { + cur_ref = cell_[ROOT_REF].right_.ref_; + cur_bin = root_bin_.right(); + } else { + if (cell_[ROOT_REF].right_.bitmap_ == BITMAP_FILLED) { + if (root_bin_.is_all()) { + return bin_t::NONE; + } + return root_bin_.sibling(); + } + bitmap = cell_[ROOT_REF].right_.bitmap_; + cur_bin = root_bin_.right(); + break; + } + + /* Processing middle layers */ + for ( ;;) { + if (cell_[cur_ref].is_left_ref_) { + cur_ref = cell_[cur_ref].left_.ref_; + cur_bin.to_left(); + } else if (cell_[cur_ref].left_.bitmap_ != BITMAP_FILLED) { + bitmap = cell_[cur_ref].left_.bitmap_; + cur_bin.to_left(); + break; + } else if (cell_[cur_ref].is_right_ref_) { + cur_ref = cell_[cur_ref].right_.ref_; + cur_bin.to_right(); + } else { + assert (cell_[cur_ref].right_.bitmap_ != BITMAP_FILLED); + bitmap = cell_[cur_ref].right_.bitmap_; + cur_bin.to_right(); + break; + } + } + + } while (false); + + /* Getting result */ + assert (bitmap != BITMAP_FILLED); + + return bitmap_to_bin(cur_bin, ~bitmap); +} + + +/** + * Find first filled bin + */ +bin_t binmap_t::find_filled() const +{ + /* Trace the bin */ + bitmap_t bitmap = BITMAP_EMPTY; + + ref_t cur_ref; + bin_t cur_bin; + + do { + /* Processing the root */ + if (cell_[ROOT_REF].is_left_ref_) { + cur_ref = cell_[ROOT_REF].left_.ref_; + cur_bin = root_bin_.left(); + } else if (cell_[ROOT_REF].left_.bitmap_ != BITMAP_EMPTY) { + if (cell_[ ROOT_REF].left_.bitmap_ == BITMAP_FILLED) { + if (!cell_[ ROOT_REF].is_right_ref_ && cell_[ ROOT_REF ].right_.bitmap_ == BITMAP_FILLED) { + return root_bin_; + } + return root_bin_.left(); + } + bitmap = cell_[ROOT_REF].left_.bitmap_; + cur_bin = root_bin_.left(); + break; + } else if (cell_[ROOT_REF].is_right_ref_) { + cur_ref = cell_[ROOT_REF].right_.ref_; + cur_bin = root_bin_.right(); + } else { + if (cell_[ROOT_REF].right_.bitmap_ == BITMAP_EMPTY) { + return bin_t::NONE; + } + bitmap = cell_[ROOT_REF].right_.bitmap_; + cur_bin = root_bin_.right(); + break; + } + + /* Processing middle layers */ + for ( ;;) { + if (cell_[cur_ref].is_left_ref_) { + cur_ref = cell_[cur_ref].left_.ref_; + cur_bin.to_left(); + } else if (cell_[cur_ref].left_.bitmap_ != BITMAP_EMPTY) { + bitmap = cell_[cur_ref].left_.bitmap_; + cur_bin.to_left(); + break; + } else if (cell_[cur_ref].is_right_ref_) { + cur_ref = cell_[cur_ref].right_.ref_; + cur_bin.to_right(); + } else { + assert (cell_[cur_ref].right_.bitmap_ != BITMAP_EMPTY); + bitmap = cell_[cur_ref].right_.bitmap_; + cur_bin.to_right(); + break; + } + } + + } while (false); + + /* Getting result */ + assert (bitmap != BITMAP_EMPTY); + + return bitmap_to_bin(cur_bin, bitmap); +} + + +/** + * Arno: Find first empty bin right of start (start inclusive) + */ +bin_t binmap_t::find_empty(bin_t start) const +{ + bin_t cur_bin = start; + + if (is_empty(cur_bin)) + return cur_bin; + do + { + // Move up till we find ancestor that is not filled. 
+ cur_bin = cur_bin.parent(); + if (!is_filled(cur_bin)) + { + // Ancestor is not filled + break; + } + if (cur_bin == root_bin_) + { + // Hit top, full tree, sort of. For some reason root_bin_ not + // set to real top (but to ALL), so we may actually return a + // bin that is outside the size of the content here. + return bin_t::NONE; + } + } + while (true); + + // Move down + do + { + if (!is_filled(cur_bin.left())) + { + cur_bin.to_left(); + } + else if (!is_filled(cur_bin.right())) + { + cur_bin.to_right(); + } + if (cur_bin.is_base()) + { + // Found empty bin + return cur_bin; + } + } while(!cur_bin.is_base()); // safety catch + + return bin_t::NONE; +} + + + +#define LR_LEFT (0x00) +#define RL_RIGHT (0x01) +#define RL_LEFT (0x02) +#define LR_RIGHT (0x03) + + +#define SSTACK() \ + int _top_ = 0; \ + bin_t _bin_[64]; \ + ref_t _sref_[64]; \ + char _dir_[64]; + +#define DSTACK() \ + int _top_ = 0; \ + bin_t _bin_[64]; \ + ref_t _dref_[64]; \ + char _dir_[64]; + +#define SDSTACK() \ + int _top_ = 0; \ + bin_t _bin_[64]; \ + ref_t _sref_[64]; \ + ref_t _dref_[64]; \ + char _dir_[64]; + + +#define SPUSH(b, sr, twist) \ + do { \ + _bin_[_top_] = b; \ + _sref_[_top_] = sr; \ + _dir_[_top_] = (0 != (twist & (b.base_length() >> 1))); \ + ++_top_; \ + } while (false) + +#define DPUSH(b, dr, twist) \ + do { \ + _bin_[_top_] = b; \ + _dref_[_top_] = dr; \ + _dir_[_top_] = (0 != (twist & (b.base_length() >> 1))); \ + ++_top_; \ + } while (false) + +#define SDPUSH(b, sr, dr, twist) \ + do { \ + _bin_[_top_] = b; \ + _sref_[_top_] = sr; \ + _dref_[_top_] = dr; \ + _dir_[_top_] = (0 != (twist & (b.base_length() >> 1))); \ + ++_top_; \ + } while (false) + + +#define SPOP() \ + assert (_top_ < 65); \ + --_top_; \ + const bin_t b = _bin_[_top_]; \ + const cell_t& sc = source.cell_[_sref_[_top_]]; \ + const bool is_left = !(_dir_[_top_] & 0x01); \ + if (0 == (_dir_[_top_] & 0x02)) { \ + _dir_[_top_++] ^= 0x03; \ + } + +#define DPOP() \ + assert (_top_ < 65); \ + --_top_; \ + const bin_t b = _bin_[_top_]; \ + const cell_t& dc = destination.cell_[_dref_[_top_]]; \ + const bool is_left = !(_dir_[_top_] & 0x01); \ + if (0 == (_dir_[_top_] & 0x02)) { \ + _dir_[_top_++] ^= 0x03; \ + } + +#define SDPOP() \ + assert (_top_ < 65); \ + --_top_; \ + const bin_t b = _bin_[_top_]; \ + const cell_t& sc = source.cell_[_sref_[_top_]]; \ + const cell_t& dc = destination.cell_[_dref_[_top_]]; \ + const bool is_left = !(_dir_[_top_] & 0x01); \ + if (0 == (_dir_[_top_] & 0x02)) { \ + _dir_[_top_++] ^= 0x03; \ + } + + +/** + * Find first additional bin in source + * + * @param destination + * the destination binmap + * @param source + * the source binmap + */ +bin_t binmap_t::find_complement(const binmap_t& destination, const binmap_t& source, const bin_t::uint_t twist) +{ + return find_complement(destination, source, bin_t::ALL, twist); + + if (destination.is_empty()) { + const cell_t& cell = source.cell_[ROOT_REF]; + if (!cell.is_left_ref_ && !cell.is_right_ref_ && cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED) { + return source.root_bin_; + } + return _find_complement(source.root_bin_, BITMAP_EMPTY, ROOT_REF, source, twist); + } + + if (destination.root_bin_.contains(source.root_bin_)) { + ref_t dref; + bin_t dbin; + + destination.trace(&dref, &dbin, source.root_bin_); + + if (dbin == source.root_bin_) { + return binmap_t::_find_complement(dbin, dref, destination, ROOT_REF, source, twist); + } + + assert (source.root_bin_ < dbin); + + if (destination.cell_[dref].left_.bitmap_ != 
BITMAP_FILLED) { + if (destination.cell_[dref].left_.bitmap_ == BITMAP_EMPTY) { + const cell_t& cell = source.cell_[ROOT_REF]; + if (!cell.is_left_ref_ && !cell.is_right_ref_ && cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED) { + return source.root_bin_; + } + } + return binmap_t::_find_complement(source.root_bin_, destination.cell_[dref].left_.bitmap_, ROOT_REF, source, twist); + } + + return bin_t::NONE; + + } else { + SSTACK(); + + /* Initialization */ + SPUSH(source.root_bin_, ROOT_REF, twist); + + /* Main loop */ + do { + SPOP(); + + if (is_left) { + if (b.left() == destination.root_bin_) { + if (sc.is_left_ref_) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.ref_, source, twist); + if (!res.is_none()) { + return res; + } + } else if (sc.left_.bitmap_ != BITMAP_EMPTY) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + } + continue; + } + + if (sc.is_left_ref_) { + SPUSH(b.left(), sc.left_.ref_, twist); + continue; + + } else if (sc.left_.bitmap_ != BITMAP_EMPTY) { + if (0 == (twist & (b.left().base_length() - 1) & ~(destination.root_bin_.base_length() - 1))) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + return binmap_t::_find_complement(destination.root_bin_.sibling(), BITMAP_EMPTY, sc.left_.bitmap_, twist); + + } else if (sc.left_.bitmap_ != BITMAP_FILLED) { + return binmap_t::_find_complement(b.left(), BITMAP_EMPTY, sc.left_.bitmap_, twist); + + } else { + bin_t::uint_t s = twist & (b.left().base_length() - 1); + /* Sorry for the following hardcode hack: Flow the highest bit of s */ + s |= s >> 1; s |= s >> 2; + s |= s >> 4; s |= s >> 8; + s |= s >> 16; + s |= (s >> 16) >> 16; // FIXME: hide warning + return bin_t(s + 1 + (s >> 1)); /* bin_t(s >> 1).sibling(); */ + } + } + + } else { + if (sc.is_right_ref_) { + return binmap_t::_find_complement(b.right(), BITMAP_EMPTY, sc.right_.ref_, source, twist); + } else if (sc.right_.bitmap_ != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.right(), BITMAP_EMPTY, sc.right_.bitmap_, twist); + } + continue; + } + } while (_top_ > 0); + + return bin_t::NONE; + } +} + + +bin_t binmap_t::find_complement(const binmap_t& destination, const binmap_t& source, bin_t range, const bin_t::uint_t twist) +{ + ref_t sref = ROOT_REF; + bitmap_t sbitmap = BITMAP_EMPTY; + bool is_sref = true; + + if (range.contains(source.root_bin_)) { + range = source.root_bin_; + is_sref = true; + sref = ROOT_REF; + + } else if (source.root_bin_.contains(range)) { + bin_t sbin; + source.trace(&sref, &sbin, range); + + if (range == sbin) { + is_sref = true; + } else { + is_sref = false; + + if (range < sbin) { + sbitmap = source.cell_[sref].left_.bitmap_; + } else { + sbitmap = source.cell_[sref].right_.bitmap_; + } + + sbitmap &= BITMAP[ BITMAP_LAYER_BITS & range.toUInt() ]; + + if (sbitmap == BITMAP_EMPTY) { + return bin_t::NONE; + } + } + + } else { + return bin_t::NONE; + } + + assert (is_sref || sbitmap != BITMAP_EMPTY); + + if (destination.is_empty()) { + if (is_sref) { + const cell_t& cell = source.cell_[sref]; + if (!cell.is_left_ref_ && !cell.is_right_ref_ && cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED) { + return range; + } else { + return _find_complement(range, BITMAP_EMPTY, sref, source, twist); + } + } else { + 
return _find_complement(range, BITMAP_EMPTY, sbitmap, twist); + } + } + + if (destination.root_bin_.contains(range)) { + ref_t dref; + bin_t dbin; + destination.trace(&dref, &dbin, range); + + if (range == dbin) { + if (is_sref) { + return _find_complement(range, dref, destination, sref, source, twist); + } else { + return _find_complement(range, dref, destination, sbitmap, twist); + } + + } else { + bitmap_t dbitmap; + + if (range < dbin) { + dbitmap = destination.cell_[dref].left_.bitmap_; + } else { + dbitmap = destination.cell_[dref].right_.bitmap_; + } + + if (dbitmap == BITMAP_FILLED) { + return bin_t::NONE; + + } else if (is_sref) { + if (dbitmap == BITMAP_EMPTY) { + const cell_t& cell = source.cell_[sref]; + if (!cell.is_left_ref_ && !cell.is_right_ref_ && cell.left_.bitmap_ == BITMAP_FILLED && cell.right_.bitmap_ == BITMAP_FILLED) { + return range; + } + } + + return _find_complement(range, dbitmap, sref, source, twist); + + } else { + if ((sbitmap & ~dbitmap) != BITMAP_EMPTY) { + return _find_complement(range, dbitmap, sbitmap, twist); + } else { + return bin_t::NONE; + } + } + } + + } else if (!range.contains(destination.root_bin_)) { + if (is_sref) { + return _find_complement(range, BITMAP_EMPTY, sref, source, twist); + } else { + return _find_complement(range, BITMAP_EMPTY, sbitmap, twist); + } + + } else { // range.contains(destination.m_root_bin) + if (is_sref) { + SSTACK(); + + SPUSH(range, sref, twist); + + do { + SPOP(); + + if (is_left) { + if (b.left() == destination.root_bin_) { + if (sc.is_left_ref_) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.ref_, source, twist); + if (!res.is_none()) { + return res; + } + } else if (sc.left_.bitmap_ != BITMAP_EMPTY) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + } + continue; + } + + if (sc.is_left_ref_) { + SPUSH(b.left(), sc.left_.ref_, twist); + continue; + + } else if (sc.left_.bitmap_ != BITMAP_EMPTY) { + if (0 == (twist & (b.left().base_length() - 1) & ~(destination.root_bin_.base_length() - 1))) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sc.left_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + return binmap_t::_find_complement(destination.root_bin_.sibling(), BITMAP_EMPTY, sc.left_.bitmap_, twist); + + } else if (sc.left_.bitmap_ != BITMAP_FILLED) { + return binmap_t::_find_complement(b.left(), BITMAP_EMPTY, sc.left_.bitmap_, twist); + + } else { + bin_t::uint_t s = twist & (b.left().base_length() - 1); + /* Sorry for the following hardcode hack: Flow the highest bit of s */ + s |= s >> 1; s |= s >> 2; + s |= s >> 4; s |= s >> 8; + s |= s >> 16; + s |= (s >> 16) >> 16; // FIXME: hide warning + return bin_t(s + 1 + (s >> 1)); /* bin_t(s >> 1).sibling(); */ + } + } + + } else { + if (sc.is_right_ref_) { + return binmap_t::_find_complement(b.right(), BITMAP_EMPTY, sc.right_.ref_, source, twist); + } else if (sc.right_.bitmap_ != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.right(), BITMAP_EMPTY, sc.right_.bitmap_, twist); + } + continue; + } + } while (_top_ > 0); + + return bin_t::NONE; + + } else { + if (0 == (twist & (range.base_length() - 1) & ~(destination.root_bin_.base_length() - 1))) { + const bin_t res = binmap_t::_find_complement(destination.root_bin_, ROOT_REF, destination, sbitmap, twist); + if (!res.is_none()) { + return res; + } + return 
binmap_t::_find_complement(destination.root_bin_.sibling(), BITMAP_EMPTY, sbitmap, twist); + + } else if (sbitmap != BITMAP_FILLED) { + return binmap_t::_find_complement(range, BITMAP_EMPTY, sbitmap, twist); + + } else { + bin_t::uint_t s = twist & (range.base_length() - 1); + /* Sorry for the following hardcode hack: Flow the highest bit of s */ + s |= s >> 1; s |= s >> 2; + s |= s >> 4; s |= s >> 8; + s |= s >> 16; + s |= (s >> 16) >> 16; // FIXME: hide warning + return bin_t(s + 1 + (s >> 1)); /* bin_t(s >> 1).sibling(); */ + } + } + } +} + + +bin_t binmap_t::_find_complement(const bin_t& bin, const ref_t dref, const binmap_t& destination, const ref_t sref, const binmap_t& source, const bin_t::uint_t twist) +{ + /* Initialization */ + SDSTACK(); + SDPUSH(bin, sref, dref, twist); + + /* Main loop */ + do { + SDPOP(); + + if (is_left) { + if (sc.is_left_ref_) { + if (dc.is_left_ref_) { + SDPUSH(b.left(), sc.left_.ref_, dc.left_.ref_, twist); + continue; + + } else if (dc.left_.bitmap_ != BITMAP_FILLED) { + const bin_t res = binmap_t::_find_complement(b.left(), dc.left_.bitmap_, sc.left_.ref_, source, twist); + if (!res.is_none()) { + return res; + } + continue; + } + + } else if (sc.left_.bitmap_ != BITMAP_EMPTY) { + if (dc.is_left_ref_) { + const bin_t res = binmap_t::_find_complement(b.left(), dc.left_.ref_, destination, sc.left_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + continue; + + } else if ((sc.left_.bitmap_ & ~dc.left_.bitmap_) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.left(), dc.left_.bitmap_, sc.left_.bitmap_, twist); + } + } + + } else { + if (sc.is_right_ref_) { + if (dc.is_right_ref_) { + SDPUSH(b.right(), sc.right_.ref_, dc.right_.ref_, twist); + continue; + + } else if (dc.right_.bitmap_ != BITMAP_FILLED) { + const bin_t res = binmap_t::_find_complement(b.right(), dc.right_.bitmap_, sc.right_.ref_, source, twist); + if (!res.is_none()) { + return res; + } + continue; + } + + } else if (sc.right_.bitmap_ != BITMAP_EMPTY) { + if (dc.is_right_ref_) { + const bin_t res = binmap_t::_find_complement(b.right(), dc.right_.ref_, destination, sc.right_.bitmap_, twist); + if (!res.is_none()) { + return res; + } + continue; + + } else if ((sc.right_.bitmap_ & ~dc.right_.bitmap_) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.right(), dc.right_.bitmap_, sc.right_.bitmap_, twist); + } + } + } + } while (_top_ > 0); + + return bin_t::NONE; +} + + +bin_t binmap_t::_find_complement(const bin_t& bin, const bitmap_t dbitmap, const ref_t sref, const binmap_t& source, const bin_t::uint_t twist) +{ + assert (dbitmap != BITMAP_EMPTY || sref != ROOT_REF || + source.cell_[ROOT_REF].is_left_ref_ || + source.cell_[ROOT_REF].is_right_ref_ || + source.cell_[ROOT_REF].left_.bitmap_ != BITMAP_FILLED || + source.cell_[ROOT_REF].right_.bitmap_ != BITMAP_FILLED); + + /* Initialization */ + SSTACK(); + SPUSH(bin, sref, twist); + + /* Main loop */ + do { + SPOP(); + + if (is_left) { + if (sc.is_left_ref_) { + SPUSH(b.left(), sc.left_.ref_, twist); + continue; + } else if ((sc.left_.bitmap_ & ~dbitmap) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.left(), dbitmap, sc.left_.bitmap_, twist); + } + + } else { + if (sc.is_right_ref_) { + SPUSH(b.right(), sc.right_.ref_, twist); + continue; + } else if ((sc.right_.bitmap_ & ~dbitmap) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.right(), dbitmap, sc.right_.bitmap_, twist); + } + } + } while (_top_ > 0); + + return bin_t::NONE; +} + + +bin_t binmap_t::_find_complement(const bin_t& bin, const ref_t 
dref, const binmap_t& destination, const bitmap_t sbitmap, const bin_t::uint_t twist) +{ + /* Initialization */ + DSTACK(); + DPUSH(bin, dref, twist); + + /* Main loop */ + do { + DPOP(); + + if (is_left) { + if (dc.is_left_ref_) { + DPUSH(b.left(), dc.left_.ref_, twist); + continue; + + } else if ((sbitmap & ~dc.left_.bitmap_) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.left(), dc.left_.bitmap_, sbitmap, twist); + } + + } else { + if (dc.is_right_ref_) { + DPUSH(b.right(), dc.right_.ref_, twist); + continue; + + } else if ((sbitmap & ~dc.right_.bitmap_) != BITMAP_EMPTY) { + return binmap_t::_find_complement(b.right(), dc.right_.bitmap_, sbitmap, twist); + } + } + } while (_top_ > 0); + + return bin_t::NONE; +} + + +bin_t binmap_t::_find_complement(const bin_t& bin, const bitmap_t dbitmap, const bitmap_t sbitmap, bin_t::uint_t twist) +{ + bitmap_t bitmap = sbitmap & ~dbitmap; + + assert (bitmap != BITMAP_EMPTY); + + if (bitmap == BITMAP_FILLED) { + return bin; + } + + twist &= bin.base_length() - 1; + + if (sizeof(bitmap_t) == 2) { + if (twist & 1) { + bitmap = ((bitmap & 0x5555) << 1) | ((bitmap & 0xAAAA) >> 1); + } + if (twist & 2) { + bitmap = ((bitmap & 0x3333) << 2) | ((bitmap & 0xCCCC) >> 2); + } + if (twist & 4) { + bitmap = ((bitmap & 0x0f0f) << 4) | ((bitmap & 0xf0f0) >> 4); + } + if (twist & 8) { + bitmap = ((bitmap & 0x00ff) << 8) | ((bitmap & 0xff00) >> 8); + } + + // Arno, 2012-03-21: Do workaround (see below) here as well? + + return bin_t(bin.base_left().twisted(twist & ~0x0f).toUInt() + bitmap_to_bin(bitmap)).to_twisted(twist & 0x0f); + + } else { + if (twist & 1) { + bitmap = ((bitmap & 0x55555555) << 1) | ((bitmap & 0xAAAAAAAA) >> 1); + } + if (twist & 2) { + bitmap = ((bitmap & 0x33333333) << 2) | ((bitmap & 0xCCCCCCCC) >> 2); + } + if (twist & 4) { + bitmap = ((bitmap & 0x0f0f0f0f) << 4) | ((bitmap & 0xf0f0f0f0) >> 4); + } + if (twist & 8) { + bitmap = ((bitmap & 0x00ff00ff) << 8) | ((bitmap & 0xff00ff00) >> 8); + } + if (twist & 16) { + bitmap = ((bitmap & 0x0000ffff) << 16) | ((bitmap & 0xffff0000) >> 16); + } + + bin_t diff = bin_t(bin.base_left().twisted(twist & ~0x1f).toUInt() + bitmap_to_bin(bitmap)).to_twisted(twist & 0x1f); + + // Arno, 2012-03-21: Sanity check, if it fails, attempt workaround + if (!bin.contains(diff)) + { + // Bug: Proposed bin is outside of specified range. The bug appears + // to be that the code assumes that the range parameter (called bin + // here) is aligned on a 32-bit boundary. I.e. the width of a + // half_t. Hence when the code does range + bitmap_to_bin(x) + // to find the base-layer offset of the bit on which the source + // and dest bitmaps differ, the result may be too high. + // + // What I do here is to round the rangestart to 32 bits, and + // then add bitmap_to_bin(bitmap), divided by two as that function + // returns the bit in a "bin number" format (=bit * 2). + // + // In other words, the "bin" parameter should tell us at what + // base offset of the 32-bit dbitmap and sbitmap is. At the moment + // it doesn't always, because "bin" is not rounded to 32-bit. 
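+            // For instance (illustrative numbers): with rangestart == 40 and
+            // bitmap_to_bin(bitmap) == 6 (i.e. chunk 3 of the word), the
+            // rounding below yields absoff = (40/32)*32 + 6/2 = 32 + 3 = 35.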
+            //
+            // see tests/binstest3.cpp
+
+            bin_t::uint_t rangestart = bin.base_left().twisted(twist & ~0x1f).layer_offset();
+            bin_t::uint_t b2b = bitmap_to_bin(bitmap);
+            bin_t::uint_t absoff = ((int)(rangestart/32))*32 + b2b/2;
+
+            diff = bin_t(0,absoff);
+            diff = diff.to_twisted(twist & 0x1f);
+
+            //char binstr[32];
+            //fprintf(stderr,"__fc solution %s\n", diff.str(binstr) );
+        }
+        return diff;
+    }
+}
+
+
+/**
+ * Sets bins
+ *
+ * @param bin
+ *     the bin
+ */
+void binmap_t::set(const bin_t& bin)
+{
+    if (bin.is_none()) {
+        return;
+    }
+
+    if (bin.layer_bits() > BITMAP_LAYER_BITS) {
+        _set__high_layer_bitmap(bin, BITMAP_FILLED);
+    } else {
+        _set__low_layer_bitmap(bin, BITMAP_FILLED);
+    }
+}
+
+
+/**
+ * Resets bins
+ *
+ * @param bin
+ *     the bin
+ */
+void binmap_t::reset(const bin_t& bin)
+{
+    if (bin.is_none()) {
+        return;
+    }
+
+    if (bin.layer_bits() > BITMAP_LAYER_BITS) {
+        _set__high_layer_bitmap(bin, BITMAP_EMPTY);
+    } else {
+        _set__low_layer_bitmap(bin, BITMAP_EMPTY);
+    }
+}
+
+
+/** Empty all bins */
+void binmap_t::clear()
+{
+    cell_t& cell = cell_[ROOT_REF];
+
+    if (cell.is_left_ref_) {
+        free_cell(cell.left_.ref_);
+    }
+    if (cell.is_right_ref_) {
+        free_cell(cell.right_.ref_);
+    }
+
+    cell.is_left_ref_ = false;
+    cell.is_right_ref_ = false;
+    cell.left_.bitmap_ = BITMAP_EMPTY;
+    cell.right_.bitmap_ = BITMAP_EMPTY;
+}
+
+
+/** Fill the binmap, i.e. create a completely filled binmap. The size is given by the source root. */
+void binmap_t::fill(const binmap_t& source)
+{
+    root_bin_ = source.root_bin_;
+    /* Extend the root if needed */
+    while (!root_bin_.contains(source.root_bin_)) {
+        if (!extend_root()) {
+            return /* ALLOC ERROR */;
+        }
+    }
+    set(source.root_bin_);
+
+    cell_t& cell = cell_[ROOT_REF];
+
+    cell.is_left_ref_ = false;
+    cell.is_right_ref_ = false;
+    cell.left_.bitmap_ = BITMAP_FILLED;
+    cell.right_.bitmap_ = BITMAP_FILLED;
+}
+
+
+/** Get the number of allocated cells */
+size_t binmap_t::cells_number() const
+{
+    return allocated_cells_number_;
+}
+
+
+/** Get the total size of the binmap */
+size_t binmap_t::total_size() const
+{
+    return sizeof(*this) + sizeof(cell_[0]) * cells_number_;
+}
+
+
+/** Echo the binmap status to stdout */
+void binmap_t::status() const
+{
+    printf("bitmap:\n");
+    for (int i = 0; i < 16; ++i) {
+        for (int j = 0; j < 64; ++j) {
+            printf("%d", is_filled(bin_t(i * 64 + j)));
+        }
+        printf("\n");
+    }
+
+    printf("size: %u bytes\n", static_cast<unsigned int>(total_size()));
+    printf("cells number: %u (of %u)\n", static_cast<unsigned int>(allocated_cells_number_), static_cast<unsigned int>(cells_number_));
+    printf("root bin: %llu\n", static_cast<unsigned long long>(root_bin_.toUInt()));
+}
+
+
+/** Trace the bin */
+inline void binmap_t::trace(ref_t* ref, bin_t* bin, const bin_t& target) const
+{
+    assert (root_bin_.contains(target));
+
+    ref_t cur_ref = ROOT_REF;
+    bin_t cur_bin = root_bin_;
+
+    while (target != cur_bin) {
+        if (target < cur_bin) {
+            if (cell_[cur_ref].is_left_ref_) {
+                cur_ref = cell_[cur_ref].left_.ref_;
+                cur_bin.to_left();
+            } else {
+                break;
+            }
+        } else {
+            if (cell_[cur_ref].is_right_ref_) {
+                cur_ref = cell_[cur_ref].right_.ref_;
+                cur_bin.to_right();
+            } else {
+                break;
+            }
+        }
+    }
+
+    assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS);
+
+    if (ref) {
+        *ref = cur_ref;
+    }
+    if (bin) {
+        *bin = cur_bin;
+    }
+}
+
+
+/** Trace the bin, recording the history of visited cells */
+inline void binmap_t::trace(ref_t* ref, bin_t* bin, ref_t** history, const bin_t& target) const
+{
+    assert (history);
+    assert (root_bin_.contains(target));
+
+    ref_t* href = *history;
+    ref_t cur_ref = ROOT_REF;
+    bin_t cur_bin =
root_bin_; + + *href++ = ROOT_REF; + while (target != cur_bin) { + if (target < cur_bin) { + if (cell_[cur_ref].is_left_ref_) { + cur_ref = cell_[cur_ref].left_.ref_; + cur_bin.to_left(); + } else { + break; + } + } else { + if (cell_[cur_ref].is_right_ref_) { + cur_ref = cell_[cur_ref].right_.ref_; + cur_bin.to_right(); + } else { + break; + } + } + + *href++ = cur_ref; + } + + assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS); + + if (ref) { + *ref = cur_ref; + } + if (bin) { + *bin = cur_bin; + } + + *history = href; +} + + +/** + * Copy a binmap to another + */ +void binmap_t::copy(binmap_t& destination, const binmap_t& source) +{ + destination.root_bin_ = source.root_bin_; + binmap_t::copy(destination, ROOT_REF, source, ROOT_REF); +} + + +/** + * Copy a range from one binmap to another binmap + */ +void binmap_t::copy(binmap_t& destination, const binmap_t& source, const bin_t& range) +{ + ref_t int_ref; + bin_t int_bin; + + if (range.contains(destination.root_bin_)) { + if (source.root_bin_.contains(range)) { + source.trace(&int_ref, &int_bin, range); + destination.root_bin_ = range; + binmap_t::copy(destination, ROOT_REF, source, int_ref); + } else if (range.contains(source.root_bin_)) { + destination.root_bin_ = source.root_bin_; + binmap_t::copy(destination, ROOT_REF, source, ROOT_REF); + } else { + destination.reset(range); + } + + } else { + if (source.root_bin_.contains(range)) { + source.trace(&int_ref, &int_bin, range); + + const cell_t& cell = source.cell_[int_ref]; + + if (range.layer_bits() <= BITMAP_LAYER_BITS) { + if (range < int_bin) { + destination._set__low_layer_bitmap(range, cell.left_.bitmap_); + } else { + destination._set__low_layer_bitmap(range, cell.right_.bitmap_); + } + + } else { + if (range == int_bin) { + if (cell.is_left_ref_ || cell.is_right_ref_ || cell.left_.bitmap_ != cell.right_.bitmap_) { + binmap_t::_copy__range(destination, source, int_ref, range); + } else { + destination._set__high_layer_bitmap(range, cell.left_.bitmap_); + } + } else if (range < int_bin) { + destination._set__high_layer_bitmap(range, cell.left_.bitmap_); + } else { + destination._set__high_layer_bitmap(range, cell.right_.bitmap_); + } + } + + } else if (range.contains(source.root_bin_)) { + destination.reset(range); // Probably it could be optimized + + const cell_t& cell = source.cell_[ ROOT_REF ]; + + if (cell.is_left_ref_ || cell.is_right_ref_ || cell.left_.bitmap_ != cell.right_.bitmap_) { + binmap_t::_copy__range(destination, source, ROOT_REF, source.root_bin_); + } else { + destination._set__high_layer_bitmap(source.root_bin_, cell.left_.bitmap_); + } + + } else { + destination.reset(range); + } + } +} + + +inline void binmap_t::_set__low_layer_bitmap(const bin_t& bin, const bitmap_t _bitmap) +{ + assert (bin.layer_bits() <= BITMAP_LAYER_BITS); + + const bitmap_t bin_bitmap = BITMAP[ bin.toUInt() & BITMAP_LAYER_BITS ]; + const bitmap_t bitmap = _bitmap & bin_bitmap; + + /* Extends root if needed */ + if (!root_bin_.contains(bin)) { + /* Trivial case */ + if (bitmap == BITMAP_EMPTY) { + return; + } + do { + if (!extend_root()) { + return /* ALLOC ERROR */; + } + } while (!root_bin_.contains(bin)); + } + + /* Get the pre-range */ + const bin_t pre_bin( (bin.toUInt() & (~(BITMAP_LAYER_BITS + 1))) | BITMAP_LAYER_BITS ); + + /* The trace the bin with history */ + ref_t _href[64]; + ref_t* href = _href; + ref_t cur_ref; + bin_t cur_bin; + + /* Process first stage -- do not touch existed tree */ + trace(&cur_ref, &cur_bin, &href, pre_bin); + + assert (cur_bin.layer_bits() > 
BITMAP_LAYER_BITS); + + /* Checking that we need to do anything */ + bitmap_t bm = BITMAP_EMPTY; + { + cell_t& cell = cell_[cur_ref]; + + if (bin < cur_bin) { + assert (!cell.is_left_ref_); + bm = cell.left_.bitmap_; + if ((bm & bin_bitmap) == bitmap) { + return; + } + if (cur_bin == pre_bin) { + cell.left_.bitmap_ = (cell.left_.bitmap_ & ~bin_bitmap) | bitmap; + pack_cells(href - 1); + return; + } + } else { + assert (!cell.is_right_ref_); + bm = cell.right_.bitmap_; + if ((bm & bin_bitmap) == bitmap) { + return; + } + if (cur_bin == pre_bin) { + cell.right_.bitmap_ = (cell.right_.bitmap_ & ~bin_bitmap) | bitmap; + pack_cells(href - 1); + return; + } + } + } + + /* Reserving proper number of cells */ + if (!reserve_cells( cur_bin.layer() - pre_bin.layer() )) { + return /* MEMORY ERROR or OVERFLOW ERROR */; + } + + /* Continue to trace */ + do { + const ref_t ref = _alloc_cell(); + + cell_[ref].is_left_ref_ = false; + cell_[ref].is_right_ref_ = false; + cell_[ref].left_.bitmap_ = bm; + cell_[ref].right_.bitmap_ = bm; + + if (pre_bin < cur_bin) { + cell_[cur_ref].is_left_ref_ = true; + cell_[cur_ref].left_.ref_ = ref; + cur_bin.to_left(); + } else { + cell_[cur_ref].is_right_ref_ = true; + cell_[cur_ref].right_.ref_ = ref; + cur_bin.to_right(); + } + + cur_ref = ref; + } while (cur_bin != pre_bin); + + assert (cur_bin == pre_bin); + assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS); + + /* Complete setting */ + if (bin < cur_bin) { + cell_[cur_ref].left_.bitmap_ = (cell_[cur_ref].left_.bitmap_ & ~bin_bitmap) | bitmap; + } else { + cell_[cur_ref].right_.bitmap_ = (cell_[cur_ref].right_.bitmap_ & ~bin_bitmap) | bitmap; + } +} + + +inline void binmap_t::_set__high_layer_bitmap(const bin_t& bin, const bitmap_t bitmap) +{ + assert (bin.layer_bits() > BITMAP_LAYER_BITS); + + /* First trivial case */ + if (bin.contains(root_bin_)) { + cell_t& cell = cell_[ROOT_REF]; + if (cell.is_left_ref_) { + free_cell(cell.left_.ref_); + } + if (cell.is_right_ref_) { + free_cell(cell.right_.ref_); + } + + root_bin_ = bin; + cell.is_left_ref_ = false; + cell.is_right_ref_ = false; + cell.left_.bitmap_ = bitmap; + cell.right_.bitmap_ = bitmap; + + return; + } + + /* Get the pre-range */ + bin_t pre_bin = bin.parent(); + + /* Extends root if needed */ + if (!root_bin_.contains(pre_bin)) { + /* Second trivial case */ + if (bitmap == BITMAP_EMPTY) { + return; + } + + do { + if (!extend_root()) { + return /* ALLOC ERROR */; + } + } while (!root_bin_.contains(pre_bin)); + } + + /* The trace the bin with history */ + ref_t _href[64]; + ref_t* href = _href; + ref_t cur_ref; + bin_t cur_bin; + + /* Process first stage -- do not touch existed tree */ + trace(&cur_ref, &cur_bin, &href, pre_bin); + + /* Checking that we need to do anything */ + bitmap_t bm = BITMAP_EMPTY; + { + cell_t& cell = cell_[cur_ref]; + if (bin < cur_bin) { + if (cell.is_left_ref_) { + /* assert (cur_bin == pre_bin); */ + cell.is_left_ref_ = false; + free_cell(cell.left_.ref_); + } else { + bm = cell.left_.bitmap_; + if (bm == bitmap) { + return; + } + } + if (cur_bin == pre_bin) { + cell.left_.bitmap_ = bitmap; + pack_cells(href - 1); + return; + } + } else { + if (cell.is_right_ref_) { + /* assert (cur_bin == pre_bin); */ + cell.is_right_ref_ = false; + free_cell(cell.right_.ref_); + } else { + bm = cell.right_.bitmap_; + if (bm == bitmap) { + return; + } + } + if (cur_bin == pre_bin) { + cell.right_.bitmap_ = bitmap; + pack_cells(href - 1); + return; + } + } + } + + /* Reserving proper number of cells */ + if (!reserve_cells( cur_bin.layer() - 
pre_bin.layer() )) { + return /* MEMORY ERROR or OVERFLOW ERROR */; + } + + /* Continue to trace */ + do { + const ref_t ref = _alloc_cell(); + + cell_[ref].is_left_ref_ = false; + cell_[ref].is_right_ref_ = false; + cell_[ref].left_.bitmap_ = bm; + cell_[ref].right_.bitmap_ = bm; + + if (pre_bin < cur_bin) { + cell_[cur_ref].is_left_ref_ = true; + cell_[cur_ref].left_.ref_ = ref; + cur_bin.to_left(); + } else { + cell_[cur_ref].is_right_ref_ = true; + cell_[cur_ref].right_.ref_ = ref; + cur_bin.to_right(); + } + + cur_ref = ref; + } while (cur_bin != pre_bin); + + assert (cur_bin == pre_bin); + assert (cur_bin.layer_bits() > BITMAP_LAYER_BITS); + + /* Complete setting */ + if (bin < cur_bin) { + cell_[cur_ref].left_.bitmap_ = bitmap; + } else { + cell_[cur_ref].right_.bitmap_ = bitmap; + } +} + + +void binmap_t::_copy__range(binmap_t& destination, const binmap_t& source, const ref_t sref, const bin_t sbin) +{ + assert (sbin.layer_bits() > BITMAP_LAYER_BITS); + + assert (sref == ROOT_REF || + source.cell_[ sref ].is_left_ref_ || source.cell_[ sref ].is_right_ref_ || + source.cell_[ sref ].left_.bitmap_ != source.cell_[ sref ].right_.bitmap_ + ); + + /* Extends root if needed */ + while (!destination.root_bin_.contains(sbin)) { + if (!destination.extend_root()) { + return /* ALLOC ERROR */; + } + } + + /* The trace the bin */ + ref_t cur_ref; + bin_t cur_bin; + + /* Process first stage -- do not touch existed tree */ + destination.trace(&cur_ref, &cur_bin, sbin); + + /* Continue unpacking if needed */ + if (cur_bin != sbin) { + bitmap_t bm = BITMAP_EMPTY; + + if (sbin < cur_bin) { + bm = destination.cell_[cur_ref].left_.bitmap_; + } else { + bm = destination.cell_[cur_ref].right_.bitmap_; + } + + /* Reserving proper number of cells */ + if (!destination.reserve_cells( cur_bin.layer() - sbin.layer() )) { + return /* MEMORY ERROR or OVERFLOW ERROR */; + } + + /* Continue to trace */ + do { + const ref_t ref = destination._alloc_cell(); + + destination.cell_[ref].is_left_ref_ = false; + destination.cell_[ref].is_right_ref_ = false; + destination.cell_[ref].left_.bitmap_ = bm; + destination.cell_[ref].right_.bitmap_ = bm; + + if (sbin < cur_bin) { + destination.cell_[cur_ref].is_left_ref_ = true; + destination.cell_[cur_ref].left_.ref_ = ref; + cur_bin.to_left(); + } else { + destination.cell_[cur_ref].is_right_ref_ = true; + destination.cell_[cur_ref].right_.ref_ = ref; + cur_bin.to_right(); + } + + cur_ref = ref; + } while (cur_bin != sbin); + } + + /* Make copying */ + copy(destination, cur_ref, source, sref); +} + + +/** + * Clone binmap cells to another binmap + */ +void binmap_t::copy(binmap_t& destination, const ref_t dref, const binmap_t& source, const ref_t sref) +{ + assert (dref == ROOT_REF || + source.cell_[ sref ].is_left_ref_ || source.cell_[ sref ].is_right_ref_ || + source.cell_[ sref ].left_.bitmap_ != source.cell_[ sref ].right_.bitmap_ + ); + + size_t sref_size = 0; + size_t dref_size = 0; + + ref_t sstack[128]; + ref_t dstack[128]; + size_t top = 0; + + /* Get size of the source subtree */ + sstack[top++] = sref; + do { + assert (top < sizeof(sstack) / sizeof(sstack[0])); + + ++sref_size; + + const cell_t& scell = source.cell_[ sstack[--top] ]; + if (scell.is_left_ref_) { + sstack[top++] = scell.left_.ref_; + } + if (scell.is_right_ref_) { + sstack[top++] = scell.right_.ref_; + } + + } while (top > 0); + + /* Get size of the destination subtree */ + dstack[top++] = dref; + do { + assert (top < sizeof(dstack) / sizeof(dstack[0])); + + ++dref_size; + + const cell_t& dcell = 
destination.cell_[ dstack[--top] ]; + if (dcell.is_left_ref_) { + dstack[top++] = dcell.left_.ref_; + } + if (dcell.is_right_ref_) { + dstack[top++] = dcell.right_.ref_; + } + + } while (top > 0); + + /* Reserving proper number of cells */ + if (dref_size < sref_size) { + if (!destination.reserve_cells( sref_size - dref_size)) { + return /* MEMORY ERROR or OVERFLOW ERROR */; + } + } + + /* Release the destination subtree */ + if (destination.cell_[dref].is_left_ref_) { + destination.free_cell(destination.cell_[dref].left_.ref_); + } + if (destination.cell_[dref].is_right_ref_) { + destination.free_cell(destination.cell_[dref].right_.ref_); + } + + /* Make cloning */ + sstack[top] = sref; + dstack[top] = dref; + ++top; + + do { + --top; + const cell_t& scell = source.cell_[ sstack[top] ]; + cell_t& dcell = destination.cell_[ dstack[top] ]; + + /* Processing left ref */ + if (scell.is_left_ref_) { + dcell.is_left_ref_ = true; + dcell.left_.ref_ = destination._alloc_cell(); + + sstack[top] = scell.left_.ref_; + dstack[top] = dcell.left_.ref_; + ++top; + } else { + dcell.is_left_ref_ = false; + dcell.left_.bitmap_ = scell.left_.bitmap_; + } + + /* Processing right ref */ + if (scell.is_right_ref_) { + dcell.is_right_ref_ = true; + dcell.right_.ref_ = destination._alloc_cell(); + + sstack[top] = scell.right_.ref_; + dstack[top] = dcell.right_.ref_; + ++top; + } else { + dcell.is_right_ref_ = false; + dcell.right_.bitmap_ = scell.right_.bitmap_; + } + } while (top > 0); +} + +int binmap_t::write_cell(FILE *fp,cell_t c) +{ + fprintf_retiffail(fp,"leftb %d\n", c.left_.bitmap_); + fprintf_retiffail(fp,"rightb %d\n", c.right_.bitmap_); + fprintf_retiffail(fp,"is_left %d\n", c.is_left_ref_ ? 1 : 0 ); + fprintf_retiffail(fp,"is_right %d\n", c.is_right_ref_ ? 1 : 0 ); + fprintf_retiffail(fp,"is_free %d\n", c.is_free_ ? 
1 : 0 ); + return 0; +} + + +int binmap_t::read_cell(FILE *fp,cell_t *c) +{ + bitmap_t left,right; + int is_left,is_right,is_free; + fscanf_retiffail(fp,"leftb %d\n", &left); + fscanf_retiffail(fp,"rightb %d\n", &right); + fscanf_retiffail(fp,"is_left %d\n", &is_left ); + fscanf_retiffail(fp,"is_right %d\n", &is_right ); + fscanf_retiffail(fp,"is_free %d\n", &is_free ); + + //fprintf(stderr,"binmapread_cell: l%ld r%ld %d %d %d\n", left, right, is_left, is_right, is_free ); + + c->left_.bitmap_ = left; + c->right_.bitmap_ = right; + c->is_left_ref_ = (bool)is_left; + c->is_right_ref_ = (bool)is_right; + c->is_free_ = (bool)is_free; + + return 0; +} + +// Arno, 2011-10-20: Persistent storage +int binmap_t::serialize(FILE *fp) +{ + fprintf_retiffail(fp,"root bin %lli\n",root_bin_.toUInt() ); + fprintf_retiffail(fp,"free top %i\n",free_top_ ); + fprintf_retiffail(fp,"alloc cells " PRISIZET"\n", allocated_cells_number_); + fprintf_retiffail(fp,"cells num " PRISIZET"\n", cells_number_); + for (size_t i=0; i +#include "bin.h" +#include "compat.h" +#include "serialize.h" + +namespace swift { + +/** + * Binmap class + */ +class binmap_t : Serializable { +public: + /** Type of bitmap */ + typedef int32_t bitmap_t; + /** Type of reference */ + typedef uint32_t ref_t; + + + /** + * Constructor + */ + binmap_t(); + + + /** + * Destructor + */ + ~binmap_t(); + + + /** + * Set the bin + */ + void set(const bin_t& bin); + + + /** + * Reset the bin + */ + void reset(const bin_t& bin); + + + /** + * Empty all bins + */ + void clear(); + + + /** + * Ric: Fill all bins, size is given by the source's root + */ + void fill(const binmap_t& source); + + + /** + * Whether binmap is empty + */ + bool is_empty() const; + + + /** + * Whether binmap is filled + */ + bool is_filled() const; + + + /** + * Whether range/bin is empty + */ + bool is_empty(const bin_t& bin) const; + + + /** + * Whether range/bin is filled + */ + bool is_filled(const bin_t& bin) const; + + + /** + * Return the topmost solid bin which covers the specified bin + */ + bin_t cover(const bin_t& bin) const; + + + /** + * Find first empty bin + */ + bin_t find_empty() const; + + + /** + * Find first filled bin + */ + bin_t find_filled() const; + + /** + * Arno: Find first empty bin right of start (start inclusive) + */ + bin_t find_empty(bin_t start) const; + + /** + * Get number of allocated cells + */ + size_t cells_number() const; + + + /** + * Get total size of the binmap (Arno: =number of bytes it occupies in memory) + */ + size_t total_size() const; + + + /** + * Echo the binmap status to stdout + */ + void status() const; + + + /** + * Find first additional bin in source + */ + static bin_t find_complement(const binmap_t& destination, const binmap_t& source, const bin_t::uint_t twist); + + + /** + * Find first additional bin of the source inside specified range + */ + static bin_t find_complement(const binmap_t& destination, const binmap_t& source, bin_t range, const bin_t::uint_t twist); + + + /** + * Copy one binmap to another + */ + static void copy(binmap_t& destination, const binmap_t& source); + + + /** + * Copy a range from one binmap to another binmap + */ + static void copy(binmap_t& destination, const binmap_t& source, const bin_t& range); + + + // Arno, 2011-10-20: Persistent storage + int serialize(FILE *fp); + int deserialize(FILE *fp); +private: + #pragma pack(push, 1) + + /** + * Structure of cell halves + */ + typedef struct { + union { + bitmap_t bitmap_; + ref_t ref_; + }; + } half_t; + + /** + * Structure of cells + */ + 
typedef union { + struct { + half_t left_; + half_t right_; + bool is_left_ref_ : 1; + bool is_right_ref_ : 1; + bool is_free_ : 1; + }; + ref_t free_next_; + } cell_t; + + #pragma pack(pop) + +private: + + /** Allocates one cell (dirty allocation) */ + ref_t _alloc_cell(); + + /** Allocates one cell */ + ref_t alloc_cell(); + + /** Reserve cells allocation capacity */ + bool reserve_cells(size_t count); + + /** Releases the cell */ + void free_cell(ref_t cell); + + /** Extend root */ + bool extend_root(); + + /** Pack a trace of cells */ + void pack_cells(ref_t* cells); + + + /** Pointer to the list of blocks */ + cell_t* cell_; + + /** Number of available cells */ + size_t cells_number_; + + /** Number of allocated cells */ + size_t allocated_cells_number_; + + /** Front of the free cell list */ + ref_t free_top_; + + /** The root bin */ + bin_t root_bin_; + + + /** Trace the bin */ + void trace(ref_t* ref, bin_t* bin, const bin_t& target) const; + + /** Trace the bin */ + void trace(ref_t* ref, bin_t* bin, ref_t** history, const bin_t& target) const; + + + /** Sets low layer bitmap */ + void _set__low_layer_bitmap(const bin_t& bin, const bitmap_t bitmap); + + /** Sets high layer bitmap */ + void _set__high_layer_bitmap(const bin_t& bin, const bitmap_t bitmap); + + + /** Clone binmap cells to another binmap */ + static void copy(binmap_t& destination, const ref_t dref, const binmap_t& source, const ref_t sref); + + static void _copy__range(binmap_t& destination, const binmap_t& source, const ref_t sref, const bin_t sbin); + + + /** Find first additional bin in source */ + static bin_t _find_complement(const bin_t& bin, const ref_t dref, const binmap_t& destination, const ref_t sref, const binmap_t& source, const bin_t::uint_t twist); + static bin_t _find_complement(const bin_t& bin, const bitmap_t dbitmap, const ref_t sref, const binmap_t& source, const bin_t::uint_t twist); + static bin_t _find_complement(const bin_t& bin, const ref_t dref, const binmap_t& destination, const bitmap_t sbitmap, const bin_t::uint_t twist); + static bin_t _find_complement(const bin_t& bin, const bitmap_t dbitmap, const bitmap_t sbitmap, const bin_t::uint_t twist); + + + /* Disabled */ + binmap_t& operator = (const binmap_t&); + + /* Disabled */ + binmap_t(const binmap_t&); + + // Arno, 2011-10-20: Persistent storage + int write_cell(FILE *fp,cell_t c); + int read_cell(FILE *fp,cell_t *c); +}; + +} // namespace end + +#endif /*_binmap_h__*/ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/channel.cpp tribler-6.2.0/Tribler/SwiftEngine/channel.cpp --- tribler-6.2.0/Tribler/SwiftEngine/channel.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/channel.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,675 @@ +/* + * channel.cpp + * class representing a virtual connection to a peer. In addition, + * it contains generic functions for socket management (see sock_open + * class variable) + * + * Created by Victor Grishchenko on 3/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ * + */ + +#include +#include "compat.h" +//#include +#include "swift.h" + +using namespace std; +using namespace swift; + +/* + * Class variables + */ + +swift::tint now_t::now = Channel::Time(); +tint Channel::start = now_t::now; +tint Channel::epoch = now_t::now/360000000LL*360000000LL; // make logs mergeable +uint64_t Channel::global_dgrams_up=0, Channel::global_dgrams_down=0, + Channel::global_raw_bytes_up=0, Channel::global_raw_bytes_down=0, + Channel::global_bytes_up=0, Channel::global_bytes_down=0; +sckrwecb_t Channel::sock_open[] = {}; +int Channel::sock_count = 0; +swift::tint Channel::last_tick = 0; +int Channel::MAX_REORDERING = 4; +bool Channel::SELF_CONN_OK = false; +swift::tint Channel::TIMEOUT = TINT_SEC*60; +channels_t Channel::channels(1); +Address Channel::tracker; +//tbheap Channel::send_queue; +FILE* Channel::debug_file = NULL; +#include "ext/simple_selector.cpp" +//PeerSelector* Channel::peer_selector = new SimpleSelector(); +tint Channel::MIN_PEX_REQUEST_INTERVAL = TINT_SEC; + + +/* + * Instance methods + */ + +Channel::Channel (FileTransfer* transfer, int socket, Address peer_addr) : + // Arno, 2011-10-03: Reordered to avoid g++ Wall warning + peer_(peer_addr), socket_(socket==INVALID_SOCKET?default_socket():socket), // FIXME + transfer_(transfer), peer_channel_id_(0), own_id_mentioned_(false), + data_in_(TINT_NEVER,bin_t::NONE), data_in_dbl_(bin_t::NONE), + data_out_cap_(bin_t::ALL),hint_out_size_(0), + // Gertjan fix 996e21e8abfc7d88db3f3f8158f2a2c4fc8a8d3f + // "Changed PEX rate limiting to per channel limiting" + last_pex_request_time_(0), next_pex_request_time_(0), + pex_request_outstanding_(false), pex_requested_(false), // Ric: init var that wasn't initialiazed + useless_pex_count_(0), + rtt_avg_(TINT_SEC), dev_avg_(0), dip_avg_(TINT_SEC), + last_send_time_(0), last_recv_time_(0), last_data_out_time_(0), last_data_in_time_(0), + last_loss_time_(0), next_send_time_(0), open_time_(NOW), cwnd_(1), + cwnd_count1_(0), send_interval_(TINT_SEC), + send_control_(PING_PONG_CONTROL), sent_since_recv_(0), + lastrecvwaskeepalive_(false), lastsendwaskeepalive_(false), // Arno: nap bug fix + ack_rcvd_recent_(0), + ack_not_rcvd_recent_(0), owd_min_bin_(0), owd_min_bin_start_(NOW), + owd_cur_bin_(0), dgrams_sent_(0), dgrams_rcvd_(0), + raw_bytes_up_(0), raw_bytes_down_(0), bytes_up_(0), bytes_down_(0), + scheduled4close_(false), + direct_sending_(false) +{ + if (peer_==Address()) + peer_ = tracker; + this->id_ = channels.size(); + channels.push_back(this); + transfer_->hs_in_.push_back(bin_t(id_)); + for(int i=0; i<4; i++) { + owd_min_bins_[i] = TINT_NEVER; + owd_current_[i] = TINT_NEVER; + } + evsend_ptr_ = new struct event; + evtimer_assign(evsend_ptr_,evbase,&Channel::LibeventSendCallback,this); + evtimer_add(evsend_ptr_,tint2tv(next_send_time_)); + + // RATELIMIT + transfer->mychannels_.push_back(this); + + dprintf("%s #%u init channel %s transfer %d\n",tintstr(),id_,peer_.str(), transfer_->fd() ); + //fprintf(stderr,"new Channel %d %s\n", id_, peer_.str() ); +} + + +Channel::~Channel () { + dprintf("%s #%u dealloc channel\n",tintstr(),id_); + channels[id_] = NULL; + ClearEvents(); + + // RATELIMIT + if (transfer_ != NULL) + { + channels_t::iterator iter; + for (iter=transfer().mychannels_.begin(); iter!=transfer().mychannels_.end(); iter++) + { + if (*iter == this) + break; + } + transfer_->mychannels_.erase(iter); + } +} + + +void Channel::ClearEvents() +{ + if (evsend_ptr_ != NULL) { + if (evtimer_pending(evsend_ptr_,NULL)) + evtimer_del(evsend_ptr_); + delete 
evsend_ptr_;
+        evsend_ptr_ = NULL;
+    }
+}
+
+
+
+
+bool Channel::IsComplete() {
+    // Check if peak hash bins are filled.
+    if (hashtree()->peak_count() == 0)
+        return false;
+
+    for(int i=0; i<hashtree()->peak_count(); i++) {
+        bin_t peak = hashtree()->peak(i);
+        if (!ack_in_.is_filled(peak))
+            return false;
+    }
+    return true;
+}
+
+
+
+uint16_t Channel::GetMyPort() {
+    struct sockaddr_in mysin = {};
+    socklen_t mysinlen = sizeof(mysin);
+    if (getsockname(socket_, (struct sockaddr *)&mysin, &mysinlen) < 0)
+    {
+        print_error("error on getsockname");
+        return 0;
+    }
+    else
+        return ntohs(mysin.sin_port);
+}
+
+bool Channel::IsDiffSenderOrDuplicate(Address addr, uint32_t chid)
+{
+    if (peer() != addr)
+    {
+        // Got a message from a different address than the one I send to
+        //
+        if (!own_id_mentioned_ && addr.is_private()) {
+            // Arno, 2012-02-27: Got HANDSHAKE reply from IANA private address,
+            // check for duplicate connections:
+            //
+            // When two peers A and B are behind the same firewall, they will get
+            // extB, resp. extA addresses from the tracker. They will both
+            // connect to their counterpart, but because the incoming packet
+            // will be from the intNAT address the duplicates are not
+            // recognized.
+            //
+            // Solution: when the second datagram comes in (HANDSHAKE reply),
+            // see if you already had a first datagram from the same addr
+            // (HANDSHAKE). If so, close the channel if the peer's port number
+            // is larger than yours (such that exactly one channel remains).
+            //
+            recv_peer_ = addr;
+
+            Channel *c = transfer().FindChannel(addr,this);
+            if (c != NULL) {
+                // I already initiated a connection to this peer;
+                // this new incoming message would establish a duplicate.
+                // One side must break the connection; decide using the
+                // port number:
+                dprintf("%s #%u found duplicate channel to %s\n",
+                        tintstr(),chid,addr.str());
+
+                if (addr.port() > GetMyPort()) {
+                    //Schedule4Close();
+                    dprintf("%s #%u closing duplicate channel to %s\n",
+                            tintstr(),chid,addr.str());
+                    return true;
+                }
+            }
+        }
+        else
+        {
+            // Received HANDSHAKE reply from a different address than I sent
+            // the HANDSHAKE to, and the address is not an IANA private
+            // address (= no NAT in play), so close.
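+            // For instance (illustrative addresses): a HANDSHAKE sent to
+            // 130.161.211.198:20000 that is answered from 84.22.96.1:6881
+            // cannot be matched to this channel and is treated as invalid.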
+ //Schedule4Close(); + dprintf("%s #%u invalid peer address %s!=%s\n", + tintstr(),chid,peer().str(),addr.str()); + return true; + } + } + return false; +} + + + + + + + +/* + * Class methods + */ +tint Channel::Time () { + //HiResTimeOfDay* tod = HiResTimeOfDay::Instance(); + //tint ret = tod->getTimeUSec(); + //DLOG(INFO)<<"now is "<= 0 ); + dbnd_ensure( make_socket_nonblocking(fd) ); // FIXME may remove this + int enable = true; + dbnd_ensure ( setsockopt(fd, SOL_SOCKET, SO_SNDBUF, + (setsockoptptr_t)&sndbuf, sizeof(int)) == 0 ); + dbnd_ensure ( setsockopt(fd, SOL_SOCKET, SO_RCVBUF, + (setsockoptptr_t)&rcvbuf, sizeof(int)) == 0 ); + //setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (setsockoptptr_t)&enable, sizeof(int)); + dbnd_ensure ( ::bind(fd, (sockaddr*)&addr, len) == 0 ); + + callbacks.sock = fd; + sock_open[sock_count++] = callbacks; + return fd; +} + +Address Channel::BoundAddress(evutil_socket_t sock) { + + struct sockaddr_in myaddr; + socklen_t mylen = sizeof(myaddr); + int ret = getsockname(sock,(sockaddr*)&myaddr,&mylen); + if (ret >= 0) { + return Address(myaddr); + } + else { + return Address(); + } +} + + +Address swift::BoundAddress(evutil_socket_t sock) { + return Channel::BoundAddress(sock); +} + + +int Channel::SendTo (evutil_socket_t sock, const Address& addr, struct evbuffer *evb) { + + int length = evbuffer_get_length(evb); + int r = sendto(sock,(const char *)evbuffer_pullup(evb, length),length,0, + (struct sockaddr*)&(addr.addr),sizeof(struct sockaddr_in)); + if (r<0) { + print_error("can't send"); + evbuffer_drain(evb, length); // Arno: behaviour is to pretend the packet got lost + } + else + evbuffer_drain(evb,r); + global_dgrams_up++; + global_raw_bytes_up+=length; + Time(); + return r; +} + +int Channel::RecvFrom (evutil_socket_t sock, Address& addr, struct evbuffer *evb) { + socklen_t addrlen = sizeof(struct sockaddr_in); + struct evbuffer_iovec vec; + if (evbuffer_reserve_space(evb, SWIFT_MAX_RECV_DGRAM_SIZE, &vec, 1) < 0) { + print_error("error on evbuffer_reserve_space"); + return 0; + } + int length = recvfrom (sock, (char *)vec.iov_base, SWIFT_MAX_RECV_DGRAM_SIZE, 0, + (struct sockaddr*)&(addr.addr), &addrlen); + if (length<0) { + length = 0; + + // Linux and Windows report "ICMP port unreachable" if the dest port could + // not be reached: + // http://support.microsoft.com/kb/260018 + // http://www.faqs.org/faqs/unix-faq/socket/ +#ifdef _WIN32 + if (WSAGetLastError() == 10054) // Sometimes errno == 2 ?! 
+#else + if (errno == ECONNREFUSED) +#endif + { + CloseChannelByAddress(addr); + } + else + print_error("error on recv"); + } + vec.iov_len = length; + if (evbuffer_commit_space(evb, &vec, 1) < 0) { + length = 0; + print_error("error on evbuffer_commit_space"); + } + global_dgrams_down++; + global_raw_bytes_down+=length; + Time(); + return length; +} + + +void Channel::CloseSocket(evutil_socket_t sock) { + for(int i=0; ih_addr_list[0]; + } +} + + +Address::Address(const char* ip_port) { + clear(); + if (strlen(ip_port)>=1024) + return; + char ipp[1024]; + strncpy(ipp,ip_port,1024); + char* semi = strchr(ipp,':'); + if (semi) { + *semi = 0; + set_ipv4(ipp); + set_port(semi+1); + } else { + if (strchr(ipp, '.')) { + set_ipv4(ipp); + set_port((uint16_t)0); + } else { + set_ipv4((uint32_t)INADDR_ANY); + set_port(ipp); + } + } +} + + +uint32_t Address::LOCALHOST = INADDR_LOOPBACK; + + +/* + * Utility methods 1 + */ + + +const char* swift::tintstr (tint time) { + if (time==0) + time = now_t::now; + static char ret_str[4][32]; // wow + static int i; + i = (i+1) & 3; + if (time==TINT_NEVER) + return "NEVER"; + time -= Channel::epoch; + assert(time>=0); + int hours = time/TINT_HOUR; + time %= TINT_HOUR; + int mins = time/TINT_MIN; + time %= TINT_MIN; + int secs = time/TINT_SEC; + time %= TINT_SEC; + int msecs = time/TINT_MSEC; + time %= TINT_MSEC; + int usecs = time/TINT_uSEC; + sprintf(ret_str[i],"%i_%02i_%02i_%03i_%03i",hours,mins,secs,msecs,usecs); + return ret_str[i]; +} + + +std::string swift::sock2str (struct sockaddr_in addr) { + char ipch[32]; +#ifdef _WIN32 + //Vista only: InetNtop(AF_INET,&(addr.sin_addr),ipch,32); + // IPv4 only: + struct in_addr inaddr; + memcpy(&inaddr, &(addr.sin_addr), sizeof(inaddr)); + strncpy(ipch, inet_ntoa(inaddr),32); +#else + inet_ntop(AF_INET,&(addr.sin_addr),ipch,32); +#endif + sprintf(ipch+strlen(ipch),":%i",ntohs(addr.sin_port)); + return std::string(ipch); +} + + +/* + * Swift top-level API implementation + */ + +int swift::Listen (Address addr) { + sckrwecb_t cb; + cb.may_read = &Channel::LibeventReceiveCallback; + cb.sock = Channel::Bind(addr,cb); + // swift UDP receive + event_assign(&Channel::evrecv, Channel::evbase, cb.sock, EV_READ, + cb.may_read, NULL); + event_add(&Channel::evrecv, NULL); + return cb.sock; +} + +void swift::Shutdown (int sock_des) { + Channel::Shutdown(); +} + +int swift::Open (std::string filename, const Sha1Hash& roothash, Address tracker, bool force_check_diskvshash, bool check_netwvshash, uint32_t chunk_size) { + FileTransfer* ft = new FileTransfer(filename, roothash, force_check_diskvshash, check_netwvshash, chunk_size); + if (ft->fd() && ft->IsOperational()) { + + // initiate tracker connections + // SWIFTPROC + ft->SetTracker(tracker); + ft->ConnectToTracker(); + + return ft->fd(); + } else { + delete ft; + return -1; + } +} + + +void swift::Close (int fd) { + if (fdAddPeer(address,root); +} + + +ssize_t swift::Read(int fdes, void *buf, size_t nbyte, int64_t offset) +{ + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->GetStorage()->Read(buf,nbyte,offset); + else + return -1; +} + +ssize_t swift::Write(int fdes, const void *buf, size_t nbyte, int64_t offset) +{ + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->GetStorage()->Write(buf,nbyte,offset); + else + return -1; +} + + +uint64_t swift::Size (int fdes) { + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return 
FileTransfer::files[fdes]->hashtree()->size(); + else + return 0; +} + + +bool swift::IsComplete (int fdes) { + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->hashtree()->is_complete(); + else + return 0; +} + + +uint64_t swift::Complete (int fdes) { + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->hashtree()->complete(); + else + return 0; +} + + +uint64_t swift::SeqComplete (int fdes, int64_t offset) { + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->hashtree()->seq_complete(offset); + else + return 0; +} + + +const Sha1Hash& swift::RootMerkleHash (int file) { + FileTransfer* trans = FileTransfer::file(file); + if (!trans) + return Sha1Hash::ZERO; + return trans->hashtree()->root_hash(); +} + + +/** Returns the number of bytes in a chunk for this transmission */ +uint32_t swift::ChunkSize(int fdes) +{ + if (FileTransfer::files.size()>fdes && FileTransfer::files[fdes]) + return FileTransfer::files[fdes]->hashtree()->chunk_size(); + else + return 0; +} + + +// CHECKPOINT +int swift::Checkpoint(int transfer) { + // Save transfer's binmap for zero-hashcheck restart + FileTransfer *ft = FileTransfer::file(transfer); + if (ft == NULL) + return -1; + if (ft->IsZeroState()) + return -1; + + MmapHashTree *ht = (MmapHashTree *)ft->hashtree(); + if (ht == NULL) + { + fprintf(stderr,"swift: checkpointing: ht is NULL\n"); + return -1; + } + + std::string binmap_filename = ft->GetStorage()->GetOSPathName(); + binmap_filename.append(".mbinmap"); + //fprintf(stderr,"swift: HACK checkpointing %s at %lli\n", binmap_filename.c_str(), Complete(transfer)); + FILE *fp = fopen_utf8(binmap_filename.c_str(),"wb"); + if (!fp) { + print_error("cannot open mbinmap for writing"); + return -1; + } + + int ret = ht->serialize(fp); + if (ret < 0) + print_error("writing to mbinmap"); + fclose(fp); + return ret; +} + + +// SEEK +int swift::Seek(int fd, int64_t offset, int whence) +{ + dprintf("%s F%i Seek: to %lld\n",tintstr(), fd, offset ); + + FileTransfer *ft = FileTransfer::file(fd); + if (ft == NULL) + return -1; + + if (whence == SEEK_SET) + { + if (offset >= swift::Size(fd)) + return -1; // seek beyond end of content + + // Which bin to seek to? 
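+        // Illustrative arithmetic (hypothetical values): with chunk_size 1024
+        // and offset 2500, coff = 2500 - (2500 % 1024) = 2048, the start of
+        // chunk 2, so offbin below becomes bin_t(0,2); the expression floors
+        // the byte offset to its chunk boundary.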
+        int64_t coff = offset - (offset % ft->hashtree()->chunk_size()); // floor to chunk start
+        bin_t offbin = bin_t(0,coff/ft->hashtree()->chunk_size());
+
+        char binstr[32];
+        dprintf("%s F%i Seek: to bin %s\n",tintstr(), fd, offbin.str(binstr) );
+
+        return ft->picker().Seek(offbin,whence);
+    }
+    else
+        return -1; // TODO
+}
+
+
+/*
+ * Utility methods 2
+ */
+
+int swift::evbuffer_add_string(struct evbuffer *evb, std::string str) {
+    return evbuffer_add(evb, str.c_str(), str.size());
+}
+
+int swift::evbuffer_add_8(struct evbuffer *evb, uint8_t b) {
+    return evbuffer_add(evb, &b, 1);
+}
+
+int swift::evbuffer_add_16be(struct evbuffer *evb, uint16_t w) {
+    uint16_t wbe = htons(w);
+    return evbuffer_add(evb, &wbe, 2);
+}
+
+int swift::evbuffer_add_32be(struct evbuffer *evb, uint32_t i) {
+    uint32_t ibe = htonl(i);
+    return evbuffer_add(evb, &ibe, 4);
+}
+
+int swift::evbuffer_add_64be(struct evbuffer *evb, uint64_t l) {
+    uint32_t lbe[2];
+    lbe[0] = htonl((uint32_t)(l>>32));
+    lbe[1] = htonl((uint32_t)(l&0xffffffff));
+    return evbuffer_add(evb, lbe, 8);
+}
+
+int swift::evbuffer_add_hash(struct evbuffer *evb, const Sha1Hash& hash) {
+    return evbuffer_add(evb, hash.bits, Sha1Hash::SIZE);
+}
+
+uint8_t swift::evbuffer_remove_8(struct evbuffer *evb) {
+    uint8_t b;
+    if (evbuffer_remove(evb, &b, 1) < 1)
+        return 0;
+    return b;
+}
+
+uint16_t swift::evbuffer_remove_16be(struct evbuffer *evb) {
+    uint16_t wbe;
+    if (evbuffer_remove(evb, &wbe, 2) < 2)
+        return 0;
+    return ntohs(wbe);
+}
+
+uint32_t swift::evbuffer_remove_32be(struct evbuffer *evb) {
+    uint32_t ibe;
+    if (evbuffer_remove(evb, &ibe, 4) < 4)
+        return 0;
+    return ntohl(ibe);
+}
+
+uint64_t swift::evbuffer_remove_64be(struct evbuffer *evb) {
+    uint32_t lbe[2];
+    if (evbuffer_remove(evb, lbe, 8) < 8)
+        return 0;
+    uint64_t l = ntohl(lbe[0]);
+    l<<=32;
+    l |= ntohl(lbe[1]);
+    return l;
+}
+
+Sha1Hash swift::evbuffer_remove_hash(struct evbuffer* evb) {
+    char bits[Sha1Hash::SIZE];
+    if (evbuffer_remove(evb, bits, Sha1Hash::SIZE) < Sha1Hash::SIZE)
+        return Sha1Hash::ZERO;
+    return Sha1Hash(false, bits);
+}
+
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/cmdgw.cpp tribler-6.2.0/Tribler/SwiftEngine/cmdgw.cpp
--- tribler-6.2.0/Tribler/SwiftEngine/cmdgw.cpp 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/cmdgw.cpp 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,1183 @@
+/*
+ *  cmdgw.cpp
+ *  command gateway for controlling the swift engine via a TCP connection
+ *
+ *  Created by Arno Bakker
+ *  Copyright 2010-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved.
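+ *
+ *  Protocol sketch (informal summary; the handlers below are authoritative):
+ *  the gateway speaks an ASCII, CRLF-terminated, line-based protocol.
+ *  Commands handled in this file include START, REMOVE, MAXSPEED,
+ *  CHECKPOINT, SETMOREINFO, SHUTDOWN, TUNNELSEND and PEERADDR; replies
+ *  include INFO, PLAY, MOREINFO, TUNNELRECV and ERROR.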
+ * + */ +#include +#include +#include + +#include "swift.h" +#include "compat.h" +#include +#include +#include + + +using namespace swift; + +// Send PLAY after receiving N bytes +#define CMDGW_MAX_PREBUF_BYTES (256*1024) + +// Report swift download progress every 2^layer * chunksize bytes (so 0 = report every chunk) +#define CMDGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER 0 + +// Status of the swarm download +#define DLSTATUS_ALLOCATING_DISKSPACE 0 +#define DLSTATUS_HASHCHECKING 2 +#define DLSTATUS_DOWNLOADING 3 +#define DLSTATUS_SEEDING 4 +#define DLSTATUS_STOPPED_ON_ERROR 6 + +#define MAX_CMD_MESSAGE 1024 + +#define ERROR_NO_ERROR 0 +#define ERROR_UNKNOWN_CMD -1 +#define ERROR_MISS_ARG -2 +#define ERROR_BAD_ARG -3 +#define ERROR_BAD_SWARM -4 + +#define CMDGW_MAX_CLIENT 1024 // Arno: == maximum number of swarms per proc + +struct cmd_gw_t { + int id; + evutil_socket_t cmdsock; + int transfer; // swift FD + bool moreinfo; // whether to report detailed stats (see SETMOREINFO cmd) + tint startt; // ARNOSMPTODO: debug speed measurements, remove + std::string mfspecname; // MULTIFILE + uint64_t startoff; // MULTIFILE: starting offset in content range of desired file + uint64_t endoff; // MULTIFILE: ending offset (careful, for an e.g. 100 byte interval this is 99) + +} cmd_requests[CMDGW_MAX_CLIENT]; + + +int cmd_gw_reqs_open = 0; +int cmd_gw_reqs_count = 0; +int cmd_gw_conns_open = 0; + +struct evconnlistener *cmd_evlistener = NULL; +struct evbuffer *cmd_evbuffer = NULL; // Data received on cmd socket : WARNING: one for all cmd sockets + +/* + * SOCKTUNNEL + * We added the ability for a process to tunnel data over swift's UDP socket. + * The process should send TUNNELSEND commands over the CMD TCP socket and will + * receive TUNNELRECV commands from swift, containing data received via UDP + * on channel 0xffffffff. + */ +typedef enum { + CMDGW_TUNNEL_SCAN4CRLF, + CMDGW_TUNNEL_READTUNNEL +} cmdgw_tunnel_t; + +cmdgw_tunnel_t cmd_tunnel_state=CMDGW_TUNNEL_SCAN4CRLF; +uint32_t cmd_tunnel_expect=0; +Address cmd_tunnel_dest_addr; +uint32_t cmd_tunnel_dest_chanid; +evutil_socket_t cmd_tunnel_sock=INVALID_SOCKET; + +// HTTP gateway address for PLAY cmd +Address cmd_gw_httpaddr; + +bool cmd_gw_debug=false; + +tint cmd_gw_last_open=0; + + +// Fwd defs +void CmdGwDataCameInCallback(struct bufferevent *bev, void *ctx); +bool CmdGwReadLine(evutil_socket_t cmdsock); +void CmdGwNewRequestCallback(evutil_socket_t cmdsock, char *line); +void CmdGwProcessData(evutil_socket_t cmdsock); + + +void CmdGwFreeRequest(cmd_gw_t* req) +{ + req->id = -1; + req->cmdsock = -1; + req->transfer = -1; + req->moreinfo = false; + req->startt = 0; + req->mfspecname = ""; + req->startoff = -1; + req->endoff = -1; +} + + +void CmdGwCloseConnection(evutil_socket_t sock) +{ + // Close cmd connection and stop all associated downloads. 
+    // Doesn't remove .mhash state or content
+
+    if (cmd_gw_debug)
+        fprintf(stderr,"CmdGwCloseConnection: ENTER %d\n", sock );
+
+    bool scanning = true;
+    while (scanning)
+    {
+        scanning = false;
+        for(int i=0; i<cmd_gw_reqs_open; i++)
+        {
+            cmd_gw_t* req = &cmd_requests[i];
+            if (req->cmdsock==sock)
+            {
+                dprintf("%s @%i stopping-on-close transfer %i\n",tintstr(),req->id,req->transfer);
+                swift::Close(req->transfer);
+
+                // Remove from list and reiterate over it
+                CmdGwFreeRequest(req);
+                *req = cmd_requests[--cmd_gw_reqs_open];
+                scanning = true;
+                break;
+            }
+        }
+    }
+
+    // Arno, 2012-07-06: Close
+    swift::close_socket(sock);
+
+    cmd_gw_conns_open--;
+
+    // Arno, 2012-10-11: New policy: immediate shutdown on connection close,
+    // see CmdGwUpdateDLStatesCallback()
+    fprintf(stderr,"cmd: Shutting down on CMD connection close\n");
+    event_base_loopexit(Channel::evbase, NULL);
+}
+
+
+cmd_gw_t* CmdGwFindRequestByTransfer (int transfer)
+{
+    for(int i=0; i<cmd_gw_reqs_open; i++)
+        if (cmd_requests[i].transfer==transfer)
+            return &cmd_requests[i];
+    return NULL;
+}
+
+cmd_gw_t* CmdGwFindRequestByRootHash(Sha1Hash &want_hash)
+{
+    for(int i=0; i<cmd_gw_reqs_open; i++)
+    {
+        cmd_gw_t* req = &cmd_requests[i];
+        FileTransfer *ft = FileTransfer::file(req->transfer);
+        if (ft == NULL)
+            continue;
+        Sha1Hash got_hash = ft->root_hash();
+        if (want_hash == got_hash)
+            return req;
+    }
+    return NULL;
+}
+
+
+void CmdGwGotCHECKPOINT(Sha1Hash &want_hash)
+{
+    // Checkpoint the specified download
+    if (cmd_gw_debug)
+        fprintf(stderr,"cmd: GotCHECKPOINT: %s\n",want_hash.hex().c_str());
+
+    cmd_gw_t* req = CmdGwFindRequestByRootHash(want_hash);
+    if (req == NULL)
+        return;
+
+    swift::Checkpoint(req->transfer);
+}
+
+
+void CmdGwGotREMOVE(Sha1Hash &want_hash, bool removestate, bool removecontent)
+{
+    // Remove the specified download
+    if (cmd_gw_debug)
+        fprintf(stderr,"cmd: GotREMOVE: %s %d %d\n",want_hash.hex().c_str(),removestate,removecontent);
+
+    cmd_gw_t* req = CmdGwFindRequestByRootHash(want_hash);
+    if (req == NULL)
+    {
+        if (cmd_gw_debug)
+            fprintf(stderr,"cmd: GotREMOVE: %s not found, bad swarm?\n",want_hash.hex().c_str());
+        return;
+    }
+    FileTransfer *ft = FileTransfer::file(req->transfer);
+    if (ft == NULL)
+        return;
+
+    dprintf("%s @%i remove transfer %i\n",tintstr(),req->id,req->transfer);
+
+    //MULTIFILE
+    // Arno, 2012-05-23: Copy all filenames to be deleted to a set. This info is lost after
+    // swift::Close() and we need to call Close() to let the storage layer close the open files.
+    // TODO: remove the dirs we created, if now empty.
+    std::set<std::string> delset;
+    std::string contentfilename = ft->GetStorage()->GetOSPathName();
+
+    // Delete content + .mhash from filesystem, if desired
+    if (removecontent)
+        delset.insert(contentfilename);
+
+    if (removestate)
+    {
+        std::string mhashfilename = contentfilename + ".mhash";
+        delset.insert(mhashfilename);
+
+        // Arno, 2012-01-10: the .mbinmap has to go too.
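+        // Illustrative example (hypothetical path): content stored at
+        // /tmp/movie.avi keeps its hash tree in /tmp/movie.avi.mhash and its
+        // checkpointed binmap in /tmp/movie.avi.mbinmap; removestate deletes
+        // both state files.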
+        std::string mbinmapfilename = contentfilename + ".mbinmap";
+        delset.insert(mbinmapfilename);
+    }
+
+    // MULTIFILE
+    if (removecontent && ft->GetStorage()->IsReady())
+    {
+        storage_files_t::iterator iter;
+        storage_files_t sfs = ft->GetStorage()->GetStorageFiles();
+        for (iter = sfs.begin(); iter < sfs.end(); iter++)
+        {
+            StorageFile *sf = *iter;
+            std::string cfn = sf->GetOSPathName();
+            delset.insert(cfn);
+        }
+    }
+
+    swift::Close(req->transfer);
+    ft = NULL;
+    // All ft info now invalid
+
+    std::set<std::string>::iterator iter;
+    for (iter=delset.begin(); iter!=delset.end(); iter++)
+    {
+        std::string filename = *iter;
+        if (cmd_gw_debug)
+            fprintf(stderr,"CmdGwREMOVE: removing %s\n", filename.c_str() );
+        int ret = remove_utf8(filename);
+        if (ret < 0)
+        {
+            if (cmd_gw_debug)
+                print_error("Could not remove file");
+        }
+    }
+
+    CmdGwFreeRequest(req);
+    *req = cmd_requests[--cmd_gw_reqs_open];
+}
+
+
+void CmdGwGotMAXSPEED(Sha1Hash &want_hash, data_direction_t ddir, double speed)
+{
+    // Set maximum speed on the specified download
+    //fprintf(stderr,"cmd: GotMAXSPEED: %s %d %lf\n",want_hash.hex().c_str(),ddir,speed);
+
+    cmd_gw_t* req = CmdGwFindRequestByRootHash(want_hash);
+    if (req == NULL)
+        return;
+    FileTransfer *ft = FileTransfer::file(req->transfer);
+
+    // Arno, 2012-05-25: SetMaxSpeed resets the current speed history, so
+    // be careful here.
+    double curmax = ft->GetMaxSpeed(ddir);
+    if (curmax != speed)
+    {
+        if (cmd_gw_debug)
+            fprintf(stderr,"cmd: CmdGwGotMAXSPEED: %s was %lf want %lf, setting\n", want_hash.hex().c_str(), curmax, speed );
+        ft->SetMaxSpeed(ddir,speed);
+    }
+}
+
+
+void CmdGwGotSETMOREINFO(Sha1Hash &want_hash, bool enable)
+{
+    cmd_gw_t* req = CmdGwFindRequestByRootHash(want_hash);
+    if (req == NULL)
+        return;
+    req->moreinfo = enable;
+}
+
+void CmdGwGotPEERADDR(Sha1Hash &want_hash, Address &peer)
+{
+    cmd_gw_t* req = CmdGwFindRequestByRootHash(want_hash);
+    if (req == NULL)
+        return;
+    FileTransfer *ft = FileTransfer::file(req->transfer);
+    if (ft == NULL)
+        return;
+
+    ft->AddPeer(peer);
+}
+
+
+
+void CmdGwSendINFOHashChecking(evutil_socket_t cmdsock, Sha1Hash root_hash)
+{
+    // Send INFO DLSTATUS_HASHCHECKING message.
+
+    char cmd[MAX_CMD_MESSAGE];
+    sprintf(cmd,"INFO %s %d %lli/%lli %lf %lf %u %u\r\n",root_hash.hex().c_str(),DLSTATUS_HASHCHECKING,(uint64_t)0,(uint64_t)0,0.0,3.14,0,0);
+
+    //fprintf(stderr,"cmd: SendINFO: %s", cmd);
+    send(cmdsock,cmd,strlen(cmd),0);
+}
+
+
+void CmdGwSendINFO(cmd_gw_t* req, int dlstatus)
+{
+    // Send INFO message.
+    if (cmd_gw_debug)
+        fprintf(stderr,"cmd: SendINFO: F%d initdlstatus %d\n", req->transfer, dlstatus );
+
+    FileTransfer *ft = FileTransfer::file(req->transfer);
+    if (ft == NULL)
+        // Download was removed or closed somehow.
+        return;
+
+    Sha1Hash root_hash = ft->root_hash();
+
+    char cmd[MAX_CMD_MESSAGE];
+    uint64_t size = swift::Size(req->transfer);
+    uint64_t complete = swift::Complete(req->transfer);
+    if (size > 0 && size == complete)
+        dlstatus = DLSTATUS_SEEDING;
+    if (!ft->IsOperational())
+        dlstatus = DLSTATUS_STOPPED_ON_ERROR;
+
+    uint32_t numleech = ft->GetNumLeechers();
+    uint32_t numseeds = ft->GetNumSeeders();
+
+    double dlspeed = ft->GetCurrentSpeed(DDIR_DOWNLOAD);
+    double ulspeed = ft->GetCurrentSpeed(DDIR_UPLOAD);
+    sprintf(cmd,"INFO %s %d %lli/%lli %lf %lf %u %u\r\n",root_hash.hex().c_str(),dlstatus,complete,size,dlspeed,ulspeed,numleech,numseeds);
+
+    send(req->cmdsock,cmd,strlen(cmd),0);
+
+    // MORESTATS
+    if (req->moreinfo) {
+        // Send detailed ul/dl stats in JSON format.
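+        //
+        // Illustrative reply shape (values hypothetical), as built below:
+        //   MOREINFO <roothash> {"timestamp":"1234.56789",
+        //     "channels":[{"ip": "10.0.0.1", "port": 6881,
+        //       "raw_bytes_up": ..., "raw_bytes_down": ...,
+        //       "bytes_up": ..., "bytes_down": ...}],
+        //     "raw_bytes_up": ..., "raw_bytes_down": ...,
+        //     "bytes_up": ..., "bytes_down": ...}\r\n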
+ + std::ostringstream oss; + oss.setf(std::ios::fixed,std::ios::floatfield); + oss.precision(5); + channels_t::iterator iter; + channels_t peerchans = ft->GetChannels(); + + oss << "MOREINFO" << " " << root_hash.hex() << " "; + + double tss = (double)Channel::Time() / 1000000.0L; + oss << "{\"timestamp\":\"" << tss << "\", "; + oss << "\"channels\":"; + oss << "["; + for (iter=peerchans.begin(); iter!=peerchans.end(); iter++) { + Channel *c = *iter; + if (c != NULL) { + if (iter!=peerchans.begin()) + oss << ", "; + oss << "{"; + oss << "\"ip\": \"" << c->peer().ipv4str() << "\", "; + oss << "\"port\": " << c->peer().port() << ", "; + oss << "\"raw_bytes_up\": " << c->raw_bytes_up() << ", "; + oss << "\"raw_bytes_down\": " << c->raw_bytes_down() << ", "; + oss << "\"bytes_up\": " << c->bytes_up() << ", "; + oss << "\"bytes_down\": " << c->bytes_down() << " "; + oss << "}"; + } + } + oss << "], "; + oss << "\"raw_bytes_up\": " << Channel::global_raw_bytes_up << ", "; + oss << "\"raw_bytes_down\": " << Channel::global_raw_bytes_down << ", "; + oss << "\"bytes_up\": " << Channel::global_bytes_up << ", "; + oss << "\"bytes_down\": " << Channel::global_bytes_down << " "; + oss << "}"; + + oss << "\r\n"; + + std::stringbuf *pbuf=oss.rdbuf(); + size_t slen = strlen(pbuf->str().c_str()); + send(req->cmdsock,pbuf->str().c_str(),slen,0); + } +} + + +void CmdGwSendPLAY(cmd_gw_t *req) +{ + // Send PLAY message to user + if (cmd_gw_debug) + fprintf(stderr,"cmd: SendPLAY: %d\n", req->transfer ); + + Sha1Hash root_hash = FileTransfer::file(req->transfer)->root_hash(); + + char cmd[MAX_CMD_MESSAGE]; + // Slightly diff format: roothash as ID after CMD + if (req->mfspecname == "") + sprintf(cmd,"PLAY %s http://%s/%s\r\n",root_hash.hex().c_str(),cmd_gw_httpaddr.str(),root_hash.hex().c_str()); + else + sprintf(cmd,"PLAY %s http://%s/%s/%s\r\n",root_hash.hex().c_str(),cmd_gw_httpaddr.str(),root_hash.hex().c_str(),req->mfspecname.c_str()); + + if (cmd_gw_debug) + fprintf(stderr,"cmd: SendPlay: %s", cmd); + + send(req->cmdsock,cmd,strlen(cmd),0); +} + + +void CmdGwSendERRORBySocket(evutil_socket_t cmdsock, std::string msg, const Sha1Hash& roothash=Sha1Hash::ZERO) +{ + std::string cmd = "ERROR "; + cmd += roothash.hex(); + cmd += " "; + cmd += msg; + cmd += "\r\n"; + + if (cmd_gw_debug) + fprintf(stderr,"cmd: SendERROR: %s\n", cmd.c_str() ); + + char *wire = strdup(cmd.c_str()); + send(cmdsock,wire,strlen(wire),0); + free(wire); +} + + +void CmdGwSwiftPrebufferProgressCallback (int transfer, bin_t bin) +{ + // + // Subsequent bytes of content downloaded + // + if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftPrebuffProgress: %d\n", transfer ); + + cmd_gw_t* req = CmdGwFindRequestByTransfer(transfer); + if (req == NULL) + return; + +#ifdef WIN32 + int64_t wantsize = min(req->endoff+1-req->startoff,CMDGW_MAX_PREBUF_BYTES); +#else + int64_t wantsize = std::min(req->endoff+1-req->startoff,(uint64_t)CMDGW_MAX_PREBUF_BYTES); +#endif + + if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftPrebuffProgress: want %lld got %lld\n", swift::SeqComplete(req->transfer,req->startoff), wantsize ); + + + if (swift::SeqComplete(req->transfer,req->startoff) >= wantsize) + { + // First CMDGW_MAX_PREBUF_BYTES bytes received via swift, + // tell user to PLAY + // ARNOSMPTODO: bitrate-dependent prebuffering? 
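+        //
+        // Worked example (hypothetical sizes): with CMDGW_MAX_PREBUF_BYTES at
+        // 256 KiB, a 10 MiB file triggers PLAY once the first 256 KiB past
+        // startoff are sequentially complete; a 100 KiB file triggers PLAY
+        // only when fully downloaded, since wantsize is capped above by
+        // endoff+1-startoff.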
+ if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftPrebufferProgress: Prebuf done %d\n", transfer ); + + swift::RemoveProgressCallback(transfer,&CmdGwSwiftPrebufferProgressCallback); + + CmdGwSendPLAY(req); + } + // wait for prebuffer +} + + +/* + * For single file content, install a new callback that checks whether + * we have enough data prebuffered. For multifile, wait till the first chunks + * containing the multi-file spec have been loaded, then set download pointer + * to the desired file (via swift::Seek) and then wait till enough data is + * prebuffered (via CmdGwSwiftPrebufferingProcessCallback). + */ + +void CmdGwSwiftFirstProgressCallback (int transfer, bin_t bin) +{ + // + // First bytes of content downloaded (first in absolute sense) + // + if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftFirstProgress: %d\n", transfer ); + + cmd_gw_t* req = CmdGwFindRequestByTransfer(transfer); + if (req == NULL) + return; + + FileTransfer *ft = FileTransfer::file(req->transfer); + if (ft == NULL) { + CmdGwSendERRORBySocket(req->cmdsock,"Unknown transfer?!",ft->root_hash()); + return; + } + if (!ft->GetStorage()->IsReady()) { + // Wait until (multi-file) storage is ready + return; + } + + swift::RemoveProgressCallback(transfer,&CmdGwSwiftFirstProgressCallback); + + if (req->mfspecname == "") + { + // Single file + req->startoff = 0; + req->endoff = swift::Size(req->transfer)-1; + CmdGwSwiftPrebufferProgressCallback(req->transfer,bin_t(0,0)); // in case file on disk + swift::AddProgressCallback(transfer,&CmdGwSwiftPrebufferProgressCallback,CMDGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER); + } + else + { + // MULTIFILE + // Have spec, seek to wanted file + storage_files_t sfs = ft->GetStorage()->GetStorageFiles(); + storage_files_t::iterator iter; + bool found = false; + for (iter = sfs.begin(); iter < sfs.end(); iter++) + { + StorageFile *sf = *iter; + if (sf->GetSpecPathName() == req->mfspecname) + { + if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftFirstProgress: Seeking to multifile %s for %d\n", req->mfspecname.c_str(), transfer ); + + int ret = swift::Seek(req->transfer,sf->GetStart(),SEEK_SET); + if (ret < 0) + { + CmdGwSendERRORBySocket(req->cmdsock,"Error seeking to file in multi-file content.",ft->root_hash()); + return; + } + found = true; + req->startoff = sf->GetStart(); + req->endoff = sf->GetEnd(); + CmdGwSwiftPrebufferProgressCallback(req->transfer,bin_t(0,0)); // in case file on disk + swift::AddProgressCallback(transfer,&CmdGwSwiftPrebufferProgressCallback,CMDGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER); + break; + } + } + if (!found) { + + if (cmd_gw_debug) + fprintf(stderr,"cmd: SwiftFirstProgress: Error file not found %d\n", transfer ); + + CmdGwSendERRORBySocket(req->cmdsock,"Individual file not found in multi-file content.",ft->root_hash()); + return; + } + } +} + + +void CmdGwSwiftErrorCallback (evutil_socket_t cmdsock) +{ + // Error on swift socket callback + + const char *response = "ERROR Swift Engine Problem\r\n"; + send(cmdsock,response,strlen(response),0); + + //swift::close_socket(sock); +} + +void CmdGwSwiftAllocatingDiskspaceCallback (int transfer, bin_t bin) +{ + if (cmd_gw_debug) + fprintf(stderr,"cmd: CmdGwSwiftAllocatingDiskspaceCallback: ENTER %d\n", transfer ); + + // Called before swift starts reserving diskspace. 
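+    //
+    // The user then sees an INFO line with dlstatus 0
+    // (DLSTATUS_ALLOCATING_DISKSPACE), e.g. (illustrative values):
+    //   INFO <roothash> 0 0/8388608 0.000000 0.000000 0 0
+    //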
+    cmd_gw_t* req = CmdGwFindRequestByTransfer(transfer);
+    if (req == NULL)
+        return;
+
+    CmdGwSendINFO(req,DLSTATUS_ALLOCATING_DISKSPACE);
+}
+
+
+
+void CmdGwUpdateDLStateCallback(cmd_gw_t* req)
+{
+    // Periodic callback, tell user INFO
+    CmdGwSendINFO(req,DLSTATUS_DOWNLOADING);
+
+    // Update speed measurements such that they decrease when DL/UL stops
+    FileTransfer *ft = FileTransfer::file(req->transfer);
+    if (ft == NULL) // Concurrency between ERROR_BAD_SWARM and this periodic callback
+        return;
+    ft->OnRecvData(0);
+    ft->OnSendData(0);
+
+    if (false)
+    {
+        // DEBUG download speed rate limit
+        double dlspeed = ft->GetCurrentSpeed(DDIR_DOWNLOAD);
+#ifdef WIN32
+        double dt = max(0.000001,(double)(usec_time() - req->startt)/TINT_SEC);
+#else
+        double dt = std::max(0.000001,(double)(usec_time() - req->startt)/TINT_SEC);
+#endif
+        double exspeed = (double)(swift::Complete(req->transfer)) / dt;
+        fprintf(stderr,"cmd: UpdateDLStateCallback: SPEED %lf == %lf\n", dlspeed, exspeed );
+    }
+}
+
+
+int icount=0;
+
+void CmdGwUpdateDLStatesCallback()
+{
+    // Called by swift main approximately every second
+    // Loop over all swarms
+    for(int i=0; i<cmd_gw_reqs_open; i++)
+    {
+        cmd_gw_t* req = &cmd_requests[i];
+        CmdGwUpdateDLStateCallback(req);
+    }
+
+    /*
+    if (cmd_gw_conns_open == 0)
+    {
+        if (cmd_gw_last_open > 0)
+        {
+            tint diff = NOW - cmd_gw_last_open;
+            //fprintf(stderr,"cmd: time since last conn diff %lld\n", diff );
+            if (diff > 10*TINT_SEC)
+            {
+                fprintf(stderr,"cmd: No CMD connection since X sec, shutting down\n");
+                event_base_loopexit(Channel::evbase, NULL);
+            }
+        }
+    }
+    else
+        cmd_gw_last_open = NOW;
+    */
+}
+
+
+
+void CmdGwDataCameInCallback(struct bufferevent *bev, void *ctx)
+{
+    // Turn the TCP stream into lines delineated by \r\n
+
+    evutil_socket_t cmdsock = bufferevent_getfd(bev);
+    if (cmd_gw_debug)
+        fprintf(stderr,"CmdGwDataCameIn: ENTER %d\n", cmdsock );
+
+    struct evbuffer *inputevbuf = bufferevent_get_input(bev);
+
+    int inlen = evbuffer_get_length(inputevbuf);
+
+    int ret = evbuffer_add_buffer(cmd_evbuffer,inputevbuf);
+    if (ret == -1) {
+        CmdGwCloseConnection(cmdsock);
+        return;
+    }
+
+    int totlen = evbuffer_get_length(cmd_evbuffer);
+
+    if (cmd_gw_debug)
+        fprintf(stderr,"cmdgw: TCPDataCameIn: State %d, got %d new bytes, have %d want %d\n", (int)cmd_tunnel_state, inlen, totlen, cmd_tunnel_expect );
+
+    CmdGwProcessData(cmdsock);
+}
+
+
+void CmdGwProcessData(evutil_socket_t cmdsock)
+{
+    // Process CMD data in the cmd_evbuffer
+
+    if (cmd_tunnel_state == CMDGW_TUNNEL_SCAN4CRLF)
+    {
+        bool ok=false;
+        do
+        {
+            ok = CmdGwReadLine(cmdsock);
+            if (ok && cmd_tunnel_state == CMDGW_TUNNEL_READTUNNEL)
+                break;
+        } while (ok);
+    }
+    // Not else!
+    if (cmd_tunnel_state == CMDGW_TUNNEL_READTUNNEL)
+    {
+        // Got "TUNNELSEND addr/chanid size\r\n" command, now read
+        // size bytes, i.e., cmd_tunnel_expect bytes.
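+        //
+        // Illustrative exchange (hypothetical address and payload): the
+        // client sends "TUNNELSEND 130.161.211.198:20000/ffffffff 5\r\n"
+        // followed by 5 raw bytes; once all 5 bytes have arrived they are
+        // forwarded over UDP by CmdGwTunnelSendUDP().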
+ + if (cmd_gw_debug) + fprintf(stderr,"cmdgw: procTCPdata: tunnel state, got %d, want %d\n", evbuffer_get_length(cmd_evbuffer), cmd_tunnel_expect ); + + if (evbuffer_get_length(cmd_evbuffer) >= cmd_tunnel_expect) + { + // We have all the tunneled data + CmdGwTunnelSendUDP(cmd_evbuffer); + + // Process any remaining commands that came after the tunneled data + CmdGwProcessData(cmdsock); + } + } +} + + +bool CmdGwReadLine(evutil_socket_t cmdsock) +{ + // Parse cmd_evbuffer for lines, and call NewRequest when found + + size_t rd=0; + char *cmd = evbuffer_readln(cmd_evbuffer,&rd, EVBUFFER_EOL_CRLF_STRICT); + if (cmd != NULL) + { + CmdGwNewRequestCallback(cmdsock,cmd); + free(cmd); + return true; + } + else + return false; +} + +int CmdGwHandleCommand(evutil_socket_t cmdsock, char *copyline); + + +void CmdGwNewRequestCallback(evutil_socket_t cmdsock, char *line) +{ + // New command received from user + + // CMD request line + char *copyline = (char *)malloc(strlen(line)+1); + strcpy(copyline,line); + + int ret = CmdGwHandleCommand(cmdsock,copyline); + if (ret < 0) { + dprintf("cmd: Error processing command %s\n", line ); + std::string msg = ""; + if (ret == ERROR_UNKNOWN_CMD) + msg = "unknown command"; + else if (ret == ERROR_MISS_ARG) + msg = "missing parameter"; + else if (ret == ERROR_BAD_ARG) + msg = "bad parameter"; + // BAD_SWARM already sent, and not fatal + + if (msg != "") + { + CmdGwSendERRORBySocket(cmdsock,msg); + CmdGwCloseConnection(cmdsock); + } + } + + free(copyline); +} + + + + + +int CmdGwHandleCommand(evutil_socket_t cmdsock, char *copyline) +{ + char *method=NULL,*paramstr = NULL; + char * token = strchr(copyline,' '); // split into CMD PARAM + if (token != NULL) { + *token = '\0'; + paramstr = token+1; + } + else + paramstr = ""; + + method = copyline; + + if (cmd_gw_debug) + fprintf(stderr,"cmd: GOT %s %s\n", method, paramstr); + + char *savetok = NULL; + if (!strcmp(method,"START")) + { + // New START request + //fprintf(stderr,"cmd: START: new request %i\n",cmd_gw_reqs_count+1); + + // Format: START url destdir\r\n + // Arno, 2012-04-13: See if URL followed by storagepath for seeding + std::string pstr = paramstr; + std::string url="",storagepath=""; + int sidx = pstr.find(" "); + if (sidx == std::string::npos) + { + url = pstr; + storagepath = ""; + } + else + { + url = pstr.substr(0,sidx); + storagepath = pstr.substr(sidx+1); + } + + // Parse URL + parseduri_t puri; + if (!swift::ParseURI(url,puri)) + return ERROR_BAD_ARG; + + std::string trackerstr = puri["server"]; + std::string hashstr = puri["hash"]; + std::string mfstr = puri["filename"]; + std::string chunksizestr = puri["chunksizestr"]; + std::string durationstr = puri["durationstr"]; + + if (hashstr.length()!=40) { + dprintf("cmd: START: roothash too short %i\n", hashstr.length() ); + return ERROR_BAD_ARG; + } + uint32_t chunksize=SWIFT_DEFAULT_CHUNK_SIZE; + if (chunksizestr.length() > 0) + std::istringstream(chunksizestr) >> chunksize; + int duration=0; + if (durationstr.length() > 0) + std::istringstream(durationstr) >> duration; + + dprintf("cmd: START: %s with tracker %s chunksize %i duration %i\n",hashstr.c_str(),trackerstr.c_str(),chunksize,duration); + + // ARNOTODO: return duration in HTTPGW + + Address trackaddr; + trackaddr = Address(trackerstr.c_str()); + if (trackaddr==Address()) + { + dprintf("cmd: START: tracker address must be hostname:port, ip:port or just port\n"); + return ERROR_BAD_ARG; + } + // SetTracker(trackaddr); == set default tracker + + // initiate transmission + Sha1Hash root_hash = 
Sha1Hash(true,hashstr.c_str()); + + // Arno, 2012-06-12: Check for duplicate requests + cmd_gw_t* req = CmdGwFindRequestByRootHash(root_hash); + if (req != NULL) + { + dprintf("cmd: START: request for given root hash already exists\n"); + return ERROR_BAD_ARG; + } + + // Send INFO DLSTATUS_HASHCHECKING + CmdGwSendINFOHashChecking(cmdsock,root_hash); + + // ARNOSMPTODO: disable/interleave hashchecking at startup + int transfer = swift::Find(root_hash); + if (transfer==-1) { + std::string filename; + if (storagepath != "") + filename = storagepath; + else + filename = hashstr; + transfer = swift::Open(filename,root_hash,trackaddr,false,true,chunksize); + if (transfer == -1) + { + CmdGwSendERRORBySocket(cmdsock,"bad swarm",root_hash); + return ERROR_BAD_SWARM; + } + } + + // All is well, register req + req = cmd_requests + cmd_gw_reqs_open++; + req->id = ++cmd_gw_reqs_count; + req->cmdsock = cmdsock; + req->transfer = transfer; + req->startt = usec_time(); + req->mfspecname = mfstr; + + dprintf("%s @%i start transfer %i\n",tintstr(),req->id,req->transfer); + + // RATELIMIT + //FileTransfer::file(transfer)->SetMaxSpeed(DDIR_DOWNLOAD,512*1024); + + if (cmd_gw_debug) + fprintf(stderr,"cmd: Already on disk is %lli/%lli\n", swift::Complete(transfer), swift::Size(transfer)); + + // MULTIFILE + int64_t minsize=CMDGW_MAX_PREBUF_BYTES; + FileTransfer *ft = FileTransfer::file(transfer); + if (ft == NULL) + return ERROR_BAD_ARG; + storage_files_t sfs = ft->GetStorage()->GetStorageFiles(); + if (sfs.size() > 0) + minsize = sfs[0]->GetSize(); + + // Wait for prebuffering and then send PLAY to user + if (swift::SeqComplete(transfer) >= minsize) + { + CmdGwSwiftFirstProgressCallback(transfer,bin_t(0,0)); + CmdGwSendINFO(req, DLSTATUS_DOWNLOADING); + } + else + { + swift::AddProgressCallback(transfer,&CmdGwSwiftFirstProgressCallback,CMDGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER); + } + + ft->GetStorage()->AddOneTimeAllocationCallback(CmdGwSwiftAllocatingDiskspaceCallback); + } + else if (!strcmp(method,"REMOVE")) + { + // REMOVE roothash removestate removecontent\r\n + bool removestate = false, removecontent = false; + + token = strtok_r(paramstr," ",&savetok); // + if (token == NULL) + return ERROR_MISS_ARG; + char *hashstr = token; + token = strtok_r(NULL," ",&savetok); // removestate + if (token == NULL) + return ERROR_MISS_ARG; + removestate = !strcmp(token,"1"); + token = strtok_r(NULL,"",&savetok); // removecontent + if (token == NULL) + return ERROR_MISS_ARG; + removecontent = !strcmp(token,"1"); + + Sha1Hash root_hash = Sha1Hash(true,hashstr); + CmdGwGotREMOVE(root_hash,removestate,removecontent); + } + else if (!strcmp(method,"MAXSPEED")) + { + // MAXSPEED roothash direction speed-float-kb/s\r\n + data_direction_t ddir; + double speed; + + token = strtok_r(paramstr," ",&savetok); // + if (token == NULL) + return ERROR_MISS_ARG; + char *hashstr = token; + token = strtok_r(NULL," ",&savetok); // direction + if (token == NULL) + return ERROR_MISS_ARG; + ddir = !strcmp(token,"DOWNLOAD") ? 
DDIR_DOWNLOAD : DDIR_UPLOAD; + token = strtok_r(NULL,"",&savetok); // speed + if (token == NULL) + return ERROR_MISS_ARG; + int n = sscanf(token,"%lf",&speed); + if (n == 0) { + dprintf("cmd: MAXSPEED: speed is not a float\n"); + return ERROR_MISS_ARG; + } + Sha1Hash root_hash = Sha1Hash(true,hashstr); + CmdGwGotMAXSPEED(root_hash,ddir,speed*1024.0); + } + else if (!strcmp(method,"CHECKPOINT")) + { + // CHECKPOINT roothash\r\n + Sha1Hash root_hash = Sha1Hash(true,paramstr); + CmdGwGotCHECKPOINT(root_hash); + } + else if (!strcmp(method,"SETMOREINFO")) + { + // GETMOREINFO roothash toggle\r\n + token = strtok_r(paramstr," ",&savetok); // hash + if (token == NULL) + return ERROR_MISS_ARG; + char *hashstr = token; + token = strtok_r(NULL," ",&savetok); // bool + if (token == NULL) + return ERROR_MISS_ARG; + bool enable = (bool)!strcmp(token,"1"); + Sha1Hash root_hash = Sha1Hash(true,hashstr); + CmdGwGotSETMOREINFO(root_hash,enable); + } + else if (!strcmp(method,"SHUTDOWN")) + { + CmdGwCloseConnection(cmdsock); + // Tell libevent to stop processing events + event_base_loopexit(Channel::evbase, NULL); + } + else if (!strcmp(method,"TUNNELSEND")) + { + token = strtok_r(paramstr,"/",&savetok); // dest addr + if (token == NULL) + return ERROR_MISS_ARG; + char *addrstr = token; + token = strtok_r(NULL," ",&savetok); // channel + if (token == NULL) + return ERROR_MISS_ARG; + char *chanstr = token; + token = strtok_r(NULL," ",&savetok); // size + if (token == NULL) + return ERROR_MISS_ARG; + char *sizestr = token; + + cmd_tunnel_dest_addr = Address(addrstr); + int n = sscanf(chanstr,"%08x",&cmd_tunnel_dest_chanid); + if (n != 1) + return ERROR_BAD_ARG; + n = sscanf(sizestr,"%u",&cmd_tunnel_expect); + if (n != 1) + return ERROR_BAD_ARG; + + cmd_tunnel_state = CMDGW_TUNNEL_READTUNNEL; + + if (cmd_gw_debug) + fprintf(stderr,"cmdgw: Want tunnel %d bytes to %s\n", cmd_tunnel_expect, cmd_tunnel_dest_addr.str() ); + } + else if (!strcmp(method,"PEERADDR")) + { + // PEERADDR roothash addrstr\r\n + token = strtok_r(paramstr," ",&savetok); // hash + if (token == NULL) + return ERROR_MISS_ARG; + char *hashstr = token; + token = strtok_r(NULL," ",&savetok); // bool + if (token == NULL) + return ERROR_MISS_ARG; + char *addrstr = token; + Address peer(addrstr); + Sha1Hash root_hash = Sha1Hash(true,hashstr); + CmdGwGotPEERADDR(root_hash,peer); + } + + else + { + return ERROR_UNKNOWN_CMD; + } + + return ERROR_NO_ERROR; +} + + + +void CmdGwEventCameInCallback(struct bufferevent *bev, short events, void *ctx) +{ + if (events & BEV_EVENT_ERROR) + print_error("cmdgw: Error from bufferevent"); + if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR)) + { + // Called when error on cmd connection + evutil_socket_t cmdsock = bufferevent_getfd(bev); + CmdGwCloseConnection(cmdsock); + bufferevent_free(bev); + } +} + + +void CmdGwNewConnectionCallback(struct evconnlistener *listener, + evutil_socket_t fd, struct sockaddr *address, int socklen, + void *ctx) +{ + // New TCP connection on cmd listen socket + + fprintf(stderr,"cmd: Got new cmd connection %i\n",fd); + + struct event_base *base = evconnlistener_get_base(listener); + struct bufferevent *bev = bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE); + + bufferevent_setcb(bev, CmdGwDataCameInCallback, NULL, CmdGwEventCameInCallback, NULL); + bufferevent_enable(bev, EV_READ|EV_WRITE); + + // ARNOTODO: free bufferevent when conn closes. 
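+    // Note: BEV_OPT_CLOSE_ON_FREE above means the underlying fd is closed
+    // when the bufferevent is freed, which CmdGwEventCameInCallback() does
+    // on EOF or error.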
+ + // One buffer for all cmd connections, reset + if (cmd_evbuffer != NULL) + evbuffer_free(cmd_evbuffer); + cmd_evbuffer = evbuffer_new(); + + // SOCKTUNNEL: assume 1 command connection + cmd_tunnel_sock = fd; + + cmd_gw_conns_open++; +} + + +void CmdGwListenErrorCallback(struct evconnlistener *listener, void *ctx) +{ + // libevent got error on cmd listener + + fprintf(stderr,"CmdGwListenErrorCallback: Something wrong with CMDGW\n" ); + struct event_base *base = evconnlistener_get_base(listener); + int err = EVUTIL_SOCKET_ERROR(); + char errmsg[1024]; + sprintf(errmsg, "cmdgw: Got a fatal error %d (%s) on the listener.\n", err, evutil_socket_error_to_string(err)); + + print_error(errmsg); + dprintf("%s @0 closed cmd gateway\n",tintstr()); + + evconnlistener_free(cmd_evlistener); +} + + +bool InstallCmdGateway (struct event_base *evbase,Address cmdaddr,Address httpaddr) +{ + // Allocate libevent listener for cmd connections + // From http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html + + fprintf(stderr,"cmdgw: Creating new TCP listener on addr %s\n", cmdaddr.str() ); + + const struct sockaddr_in sin = (sockaddr_in)cmdaddr; + + cmd_evlistener = evconnlistener_new_bind(evbase, CmdGwNewConnectionCallback, NULL, + LEV_OPT_CLOSE_ON_FREE|LEV_OPT_REUSEABLE, -1, + (const struct sockaddr *)&sin, sizeof(sin)); + if (!cmd_evlistener) { + print_error("Couldn't create listener"); + return false; + } + evconnlistener_set_error_cb(cmd_evlistener, CmdGwListenErrorCallback); + + cmd_gw_httpaddr = httpaddr; + + cmd_evbuffer = evbuffer_new(); + + return true; +} + + + +// SOCKTUNNEL +void swift::CmdGwTunnelUDPDataCameIn(Address srcaddr, uint32_t srcchan, struct evbuffer* evb) +{ + // Message received on UDP socket, forward over TCP conn. + + if (cmd_gw_debug) + fprintf(stderr,"cmdgw: TunnelUDPData:DataCameIn %d bytes from %s/%08x\n", evbuffer_get_length(evb), srcaddr.str(), srcchan ); + + /* + * Format: + * TUNNELRECV ip:port/hexchanid nbytes\r\n + * + */ + + std::ostringstream oss; + oss << "TUNNELRECV " << srcaddr.str(); + oss << "/" << std::hex << srcchan; + oss << " " << std::dec << evbuffer_get_length(evb) << "\r\n"; + + std::stringbuf *pbuf=oss.rdbuf(); + size_t slen = strlen(pbuf->str().c_str()); + send(cmd_tunnel_sock,pbuf->str().c_str(),slen,0); + + slen = evbuffer_get_length(evb); + uint8_t *data = evbuffer_pullup(evb,slen); + send(cmd_tunnel_sock,(const char *)data,slen,0); + + evbuffer_drain(evb,slen); +} + + +void swift::CmdGwTunnelSendUDP(struct evbuffer *evb) +{ + // Received data from TCP connection, send over UDP to specified dest + cmd_tunnel_state = CMDGW_TUNNEL_SCAN4CRLF; + + if (cmd_gw_debug) + fprintf(stderr,"cmdgw: sendudp:"); + + struct evbuffer *sendevbuf = evbuffer_new(); + + // Add channel id. Currently always CMDGW_TUNNEL_DEFAULT_CHANNEL_ID=0xffffffff + // but we may add a TUNNELSUBSCRIBE command later to allow the allocation + // of different channels for different TCP clients. 
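+    //
+    // Resulting datagram layout (as built below): a 4-byte big-endian
+    // channel id followed by the cmd_tunnel_expect payload bytes taken
+    // verbatim from the TCP stream.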
+ int ret = evbuffer_add_32be(sendevbuf, cmd_tunnel_dest_chanid); + if (ret < 0) + { + evbuffer_drain(evb,cmd_tunnel_expect); + evbuffer_free(sendevbuf); + fprintf(stderr,"cmdgw: sendudp :can't copy prefix to sendbuf!"); + return; + } + ret = evbuffer_remove_buffer(evb, sendevbuf, cmd_tunnel_expect); + if (ret < 0) + { + evbuffer_drain(evb,cmd_tunnel_expect); + evbuffer_free(sendevbuf); + fprintf(stderr,"cmdgw: sendudp :can't copy to sendbuf!"); + return; + } + if (Channel::sock_count != 1) + { + fprintf(stderr,"cmdgw: sendudp: no single UDP socket!"); + evbuffer_free(sendevbuf); + return; + } + evutil_socket_t sock = Channel::sock_open[Channel::sock_count-1].sock; + + Channel::SendTo(sock,cmd_tunnel_dest_addr,sendevbuf); + + evbuffer_free(sendevbuf); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/compat.cpp tribler-6.2.0/Tribler/SwiftEngine/compat.cpp --- tribler-6.2.0/Tribler/SwiftEngine/compat.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/compat.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,534 @@ +/* + * compat.cpp + * swift + * + * Created by Arno Bakker, Victor Grishchenko + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#include "compat.h" +#include +#include +#include +#ifdef _WIN32 +#include +#include +#include +#include +#include +#include +#else +#include +#include +#endif +#include +#include + +namespace swift { + +#ifdef _WIN32 +static HANDLE map_handles[1024]; +#endif + +int64_t file_size (int fd) { + +#ifdef WIN32 + struct _stat32i64 st; + _fstat32i64(fd, &st); +#else + struct stat st; + st.st_size = 0; + fstat(fd, &st); +#endif + return st.st_size; +} + +int file_seek (int fd, int64_t offset) { +#ifndef _WIN32 + return lseek(fd,offset,SEEK_SET); +#else + return _lseeki64(fd,offset,SEEK_SET); +#endif +} + +int file_resize (int fd, int64_t new_size) { +#ifndef _WIN32 + return ftruncate(fd, new_size); +#else + // Arno, 2011-10-27: Use 64-bit version + if (_chsize_s(fd,new_size) != 0) + return -1; + else + return 0; +#endif +} + + +void print_error(const char* msg) { + perror(msg); +#ifdef _WIN32 + int e = WSAGetLastError(); + if (e) + fprintf(stderr,"windows error #%u\n",e); +#endif +} + +void* memory_map (int fd, size_t size) { + if (!size) + size = file_size(fd); + void *mapping; +#ifndef _WIN32 + mapping = mmap (NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); + if (mapping==MAP_FAILED) + return NULL; + return mapping; +#else + HANDLE fhandle = (HANDLE)_get_osfhandle(fd); + HANDLE maphandle = CreateFileMapping( fhandle, + NULL, + PAGE_READWRITE, + 0, + 0, + NULL ); + if (maphandle == NULL) + return NULL; + map_handles[fd] = maphandle; + + mapping = MapViewOfFile ( maphandle, + FILE_MAP_WRITE, + 0, + 0, + 0 ); + + return mapping; +#endif +} + +void memory_unmap (int fd, void* mapping, size_t size) { +#ifndef _WIN32 + munmap(mapping,size); + close(fd); +#else + UnmapViewOfFile(mapping); + CloseHandle(map_handles[fd]); +#endif +} + +#ifdef _WIN32 + +size_t pread(int fildes, void *buf, size_t nbyte, __int64 offset) +{ + _lseeki64(fildes,offset,SEEK_SET); + return read(fildes,buf,nbyte); +} + +size_t pwrite(int fildes, const void *buf, size_t nbyte, __int64 offset) +{ + _lseeki64(fildes,offset,SEEK_SET); + return write(fildes,buf,nbyte); +} + + +int inet_aton(const char *cp, struct in_addr *inp) +{ + inp->S_un.S_addr = inet_addr(cp); + return 1; +} + +#endif + +#ifdef _WIN32 + +LARGE_INTEGER get_freq() { + LARGE_INTEGER proc_freq; + if (!::QueryPerformanceFrequency(&proc_freq)) + 
print_error("HiResTimeOfDay: QueryPerformanceFrequency() failed");
+    return proc_freq;
+}
+
+tint usec_time(void)
+{
+    static LARGE_INTEGER last_time;
+    LARGE_INTEGER cur_time;
+    QueryPerformanceCounter(&cur_time);
+    if (cur_time.QuadPart < last_time.QuadPart)
+        print_error("usec_time: QueryPerformanceCounter wrapped");
+    last_time = cur_time;
+    static float freq = 1000000.0/get_freq().QuadPart;
+    tint usec = cur_time.QuadPart * freq;
+    return usec;
+}
+
+#else
+
+tint usec_time(void)
+{
+    // UNIX: microseconds since the epoch via gettimeofday()
+    struct timeval t;
+    gettimeofday(&t,NULL);
+    tint ret;
+    ret = t.tv_sec;
+    ret *= 1000000;
+    ret += t.tv_usec;
+    return ret;
+}
+
+#endif
+
+wchar_t* utf8to16(std::string utf8str)
+{
+#ifdef _WIN32
+    CA2W utf16obj(utf8str.c_str(), CP_UTF8);
+    wchar_t *utf16str = (wchar_t *)malloc((wcslen(utf16obj.m_psz)+1)*sizeof(wchar_t));
+    wcscpy(utf16str,utf16obj.m_psz);
+    //std::wcerr << "utf8to16: out " << utf16str << std::endl;
+
+    return utf16str;
+#else
+    return NULL;
+#endif
+}
+
+std::string utf16to8(wchar_t* utf16str)
+{
+#ifdef _WIN32
+    //std::wcerr << "utf16to8: in " << utf16str << std::endl;
+    CW2A utf8obj(utf16str, CP_UTF8);
+    return std::string(utf8obj.m_psz);
+#else
+    return "(nul)";
+#endif
+}
+
+
+
+int open_utf8(const char *filename, int flags, mode_t mode)
+{
+#ifdef _WIN32
+    wchar_t *utf16fn = utf8to16(filename);
+    int ret = _wopen(utf16fn,flags,mode);
+    free(utf16fn);
+    return ret;
+#else
+    return open(filename,flags,mode); // TODO: UNIX with locale != UTF-8
+#endif
+}
+
+
+FILE *fopen_utf8(const char *filename, const char *mode)
+{
+#ifdef _WIN32
+    wchar_t *utf16fn = utf8to16(filename);
+    wchar_t *utf16mode = utf8to16(mode);
+    FILE *fp = _wfopen(utf16fn,utf16mode);
+    free(utf16fn);
+    free(utf16mode);
+    return fp;
+#else
+    return fopen(filename,mode); // TODO: UNIX with locale != UTF-8
+#endif
+}
+
+
+
+
+int64_t file_size_by_path_utf8(std::string pathname) {
+    int ret = 0;
+#ifdef WIN32
+    struct __stat64 st;
+    wchar_t *utf16c = utf8to16(pathname);
+    ret = _wstat64(utf16c, &st);
+    free(utf16c);
+#else
+    struct stat st;
+    ret = stat(pathname.c_str(), &st); // TODO: UNIX with locale != UTF-8
+#endif
+    if (ret < 0)
+        return ret;
+    else
+        return st.st_size;
+}
+
+int file_exists_utf8(std::string pathname)
+{
+    int ret = 0;
+#ifdef WIN32
+    struct __stat64 st;
+    wchar_t *utf16c = utf8to16(pathname);
+    ret = _wstat64(utf16c, &st);
+    free(utf16c);
+#else
+    struct stat st;
+    ret = stat(pathname.c_str(), &st); // TODO: UNIX with locale != UTF-8
+#endif
+    if (ret < 0)
+    {
+        if (errno == ENOENT)
+            return 0;
+        else
+            return ret;
+    }
+    else if (st.st_mode & S_IFDIR)
+        return 2;
+    else
+        return 1;
+}
+
+
+int mkdir_utf8(std::string dirname)
+{
+#ifdef WIN32
+    wchar_t *utf16c = utf8to16(dirname);
+    int ret = _wmkdir(utf16c);
+    free(utf16c);
+#else
+    int ret = mkdir(dirname.c_str(),S_IRUSR|S_IWUSR|S_IXUSR|S_IRGRP|S_IXGRP|S_IROTH|S_IXOTH); // TODO: UNIX with locale != UTF-8
+#endif
+    return ret;
+}
+
+
+int remove_utf8(std::string pathname)
+{
+#ifdef WIN32
+    wchar_t *utf16c = utf8to16(pathname);
+    int ret = _wremove(utf16c);
+    free(utf16c);
+#else
+    int ret = remove(pathname.c_str()); // TODO: UNIX with locale != UTF-8
+#endif
+    return ret;
+}
+
+
+
+#if _DIR_ENT_HAVE_D_TYPE
+#define TEST_IS_DIR(unixde, st) ((bool)(unixde->d_type & DT_DIR))
+#else
+#define TEST_IS_DIR(unixde, st) ((bool)(S_ISDIR(st.st_mode)))
+#endif
+
+DirEntry *opendir_utf8(std::string pathname)
+{
+#ifdef _WIN32
+    HANDLE hFind;
+    WIN32_FIND_DATAW ffd;
+
+    std::string pathsearch = pathname + "\\*.*";
+    wchar_t *pathsearch_utf16 = utf8to16(pathsearch);
+    hFind = FindFirstFileW(pathsearch_utf16, &ffd);
+    free(pathsearch_utf16);
+    if (hFind != INVALID_HANDLE_VALUE)
+    {
+        std::string utf8fn = utf16to8(ffd.cFileName);
+        DirEntry *de = new DirEntry(utf8fn,(bool)((ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0));
+        de->hFind_ = hFind;
+        return de;
+    }
+    else
+        return NULL;
+#else
+    DIR *dirp = opendir( pathname.c_str() ); // TODO: UNIX with locale != UTF-8
+    if (dirp == NULL)
+        return NULL;
+    struct dirent *unixde = readdir(dirp);
+    if (unixde == NULL)
+        return NULL;
+    else
+    {
+#if _DIR_ENT_HAVE_D_TYPE
+        if( unixde->d_type == DT_UNKNOWN ) {
+#endif
+            std::string fullpath = pathname + FILE_SEP;
+            struct stat st;
+            st.st_mode = 0;
+            stat(fullpath.append(unixde->d_name).c_str(), &st);
+#if _DIR_ENT_HAVE_D_TYPE
+            if( S_ISDIR(st.st_mode) )
+                unixde->d_type = DT_DIR;
+        }
+#endif
+        DirEntry *de = new DirEntry(unixde->d_name,TEST_IS_DIR(unixde, st));
+        de->dirp_ = dirp;
+        de->basename_ = pathname;
+        return de;
+    }
+#endif
+}
+
+
+DirEntry *readdir_utf8(DirEntry *prevde)
+{
+#ifdef _WIN32
+    WIN32_FIND_DATAW ffd;
+    BOOL ret = FindNextFileW(prevde->hFind_, &ffd);
+    if (!ret)
+    {
+        FindClose(prevde->hFind_);
+        return NULL;
+    }
+    else
+    {
+        std::string utf8fn = utf16to8(ffd.cFileName);
+        DirEntry *de = new DirEntry(utf8fn,(bool)((ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0));
+        de->hFind_ = prevde->hFind_;
+        return de;
+    }
+#else
+    struct dirent *unixde = readdir(prevde->dirp_);
+    if (unixde == NULL)
+    {
+        closedir(prevde->dirp_);
+        return NULL;
+    }
+    else
+    {
+#if _DIR_ENT_HAVE_D_TYPE
+        if( unixde->d_type == DT_UNKNOWN ) {
+#endif
+            std::string fullpath = prevde->basename_ + FILE_SEP;
+            struct stat st;
+            st.st_mode = 0;
+            stat(fullpath.append(unixde->d_name).c_str(), &st);
+#if _DIR_ENT_HAVE_D_TYPE
+            if( S_ISDIR(st.st_mode) )
+                unixde->d_type = DT_DIR;
+        }
+#endif
+        DirEntry *de = new DirEntry(unixde->d_name,TEST_IS_DIR(unixde, st));
+        de->dirp_ = prevde->dirp_;
+        de->basename_ = prevde->basename_;
+        return de;
+    }
+#endif
+}
+
+
+
+
+
+std::string gettmpdir_utf8(void)
+{
+#ifdef _WIN32
+    DWORD ret = 0;
+    wchar_t utf16c[MAX_PATH];
+    ret = GetTempPathW(MAX_PATH,utf16c);
+    if (ret == 0 || ret > MAX_PATH)
+    {
+        return "./";
+    }
+    else
+    {
+        return utf16to8(utf16c);
+    }
+#else
+    return "/tmp/";
+#endif
+}
+
+int chdir_utf8(std::string dirname)
+{
+#ifdef _WIN32
+    wchar_t *utf16c = utf8to16(dirname);
+    int ret = !::SetCurrentDirectoryW(utf16c);
+    free(utf16c);
+    return ret;
+#else
+    return chdir(dirname.c_str()); // TODO: UNIX with locale != UTF-8
+#endif
+}
+
+
+std::string getcwd_utf8(void)
+{
+#ifdef _WIN32
+    wchar_t szDirectory[MAX_PATH];
+    ::GetCurrentDirectoryW(MAX_PATH, szDirectory); // count is in wide chars, not bytes
+    return utf16to8(szDirectory);
+#else
+    char *cwd = getcwd(NULL,0);
+    std::string cwdstr(cwd);
+    free(cwd);
+    return cwdstr;
+#endif
+}
+
+
+std::string dirname_utf8(std::string pathname)
+{
+    std::string::size_type idx = pathname.rfind(FILE_SEP);
+    if (idx != std::string::npos)
+    {
+        return pathname.substr(0,idx);
+    }
+    else
+        return "";
+}
+
+
+
+bool make_socket_nonblocking(evutil_socket_t fd) {
+#ifdef _WIN32
+    u_long enable = 1;
+    return 0==ioctlsocket(fd, FIONBIO, &enable);
+#else
+    return 0==fcntl(fd, F_SETFL, O_NONBLOCK);
+#endif
+}
+
+bool close_socket (evutil_socket_t sock) {
+#ifdef _WIN32
+    return 0==closesocket(sock);
+#else
+    return 0==::close(sock);
+#endif
+}
+
+
+// Arno: not thread safe!
+struct timeval* tint2tv (tint t) {
+    static struct timeval tv;
+    tv.tv_usec = t%TINT_SEC;
+    tv.tv_sec = t/TINT_SEC;
+    return &tv;
+}
+
+
+std::string hex2bin(std::string input)
+{
+    std::string res;
+    res.reserve(input.size() / 2);
+    for (std::string::size_type i = 0; i < input.size(); i += 2)
+    {
+        std::istringstream iss(input.substr(i, 2));
+        int temp;
+        iss >> std::hex >> temp;
+        res += static_cast<char>(temp);
+    }
+    return res;
+}
+
+
+} // namespace swift
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/compat.h tribler-6.2.0/Tribler/SwiftEngine/compat.h
--- tribler-6.2.0/Tribler/SwiftEngine/compat.h	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/compat.h	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,270 @@
+/*
+ *  compat.h
+ *  compatibility wrappers
+ *
+ *  Created by Arno Bakker, Victor Grishchenko
+ *  Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved.
+ *
+ */
+#ifndef SWIFT_COMPAT_H
+#define SWIFT_COMPAT_H
+
+#ifdef _MSC_VER
+typedef unsigned char uint8_t;
+typedef signed char int8_t;
+typedef unsigned short uint16_t;
+typedef short int16_t;
+typedef unsigned int uint32_t;
+typedef int int32_t;
+typedef __int64 int64_t;
+typedef unsigned __int64 uint64_t;
+#else
+#include <stdint.h>
+#endif
+
+#ifdef _WIN32
+#include
+#include
+#include
+#include <algorithm> // for std::min/max
+#include
+#else
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#endif
+
+#include
+#include
+#include
+#include
+#include
+
+#ifdef _MSC_VER
+#include "getopt_win.h"
+#else
+#include <getopt.h>
+#endif
+
+#ifdef _WIN32
+#define strcasecmp stricmp
+#define strtok_r strtok_s
+#endif
+#ifndef S_IRUSR
+#define S_IRUSR _S_IREAD
+#endif
+#ifndef S_IWUSR
+#define S_IWUSR _S_IWRITE
+#endif
+#ifndef S_IRGRP
+#define S_IRGRP _S_IREAD
+#endif
+#ifndef S_IROTH
+#define S_IROTH _S_IREAD
+#endif
+
+#ifdef _WIN32
+typedef char* setsockoptptr_t;
+typedef int socklen_t;
+#else
+typedef void* setsockoptptr_t;
+#endif
+
+// libevent2 assumes WIN32 is defined
+#ifdef _WIN32
+#define WIN32 _WIN32
+#endif
+#include <event2/util.h> // for evutil_socket_t
+
+#ifndef _WIN32
+#define INVALID_SOCKET -1
+#endif
+
+#ifndef LONG_MAX
+#include <limits>
+#define LONG_MAX numeric_limits<long>::max()
+#endif
+
+#ifdef _WIN32
+// log2 is C99 which is not fully supported by MS VS
+#define log2(x) (log(x)/log(2.0))
+#endif
+
+
+// Arno, 2012-01-05: Handle 64-bit size_t & printf+scanf
+#if SIZE_MAX > UINT_MAX
+#define PRISIZET "%llu"
+#else
+#define PRISIZET "%lu"
+#endif
+
+#ifdef _WIN32
+#define ssize_t SSIZE_T
+#endif
+
+#ifdef _WIN32
+#define mode_t int
+#endif
+
+
+
+#ifdef _WIN32
+#define OPENFLAGS O_RDWR|O_CREAT|_O_BINARY
+#define ROOPENFLAGS O_RDONLY|_O_BINARY
+#else
+#define OPENFLAGS O_RDWR|O_CREAT
+#define ROOPENFLAGS O_RDONLY
+#endif
+
+#ifdef _WIN32
+#define FILE_SEP "\\"
+#else
+#define FILE_SEP "/"
+#endif
+
+
+
+namespace swift {
+
+/** tint is the time integer type; microsecond-precise. */
+typedef int64_t tint;
+#define TINT_HOUR ((swift::tint)1000000*60*60)
+#define TINT_MIN ((swift::tint)1000000*60)
+#define TINT_SEC ((swift::tint)1000000)
+#define TINT_MSEC ((swift::tint)1000)
+#define TINT_uSEC ((swift::tint)1)
+#define TINT_NEVER ((swift::tint)0x3fffffffffffffffLL)
+
+#ifdef _WIN32
+#define tintabs _abs64
+#else
+#define tintabs ::abs
+#endif
+
+
+/*
+ * UNICODE
+ *
+ * All filenames, etc. are stored internally as UTF-8 encoded std::strings
+ * which are converted when used to UTF-16 (Windows) or the locale (UNIX).
+ */
+
+// Return UTF-16 representation of utf8str. Caller must free returned value.
+wchar_t* utf8to16(std::string utf8str); +std::string utf16to8(wchar_t* utf16str); + +// open with filename in UTF-8 +int open_utf8(const char *pathname, int flags, mode_t mode); + +// fopen with filename in UTF-8 +FILE *fopen_utf8(const char *filename, const char *mode); + +// Returns OS temporary directory in UTF-8 encoding +std::string gettmpdir_utf8(void); + +// Changes current working dir to dirname in UTF-8 +int chdir_utf8(std::string dirname); + +// Returns current working directory in UTF-8. +std::string getcwd_utf8(void); + +// Returns the 64-bit size of a filename in UTF-8. +int64_t file_size_by_path_utf8(std::string pathname); + +/* Returns -1 on error, 0 on non-existence, 1 on existence and being a non-dir, 2 on existence and being a dir */ +int file_exists_utf8(std::string pathname); + +// mkdir with filename in UTF-8 +int mkdir_utf8(std::string dirname); + +// remove with filename in UTF-8 +int remove_utf8(std::string pathname); + + +// opendir() + readdir() UTF-8 versions +class DirEntry +{ + public: + DirEntry(std::string filename, bool isdir) : filename_(filename), isdir_(isdir) {} + std::string filename_; + bool isdir_; + +#ifdef _WIN32 + HANDLE hFind_; +#else + DIR *dirp_; + std::string basename_; +#endif +}; + +// Returns NULL on error. +DirEntry *opendir_utf8(std::string pathname); + +// Returns NULL on error, last entry. Automatically does closedir() +DirEntry *readdir_utf8(DirEntry *prevde); + + +std::string dirname_utf8(std::string pathname); + +/* + * Other filename-less functions + */ + +int64_t file_size (int fd); + +int file_seek (int fd, int64_t offset); + +int file_resize (int fd, int64_t new_size); + +void* memory_map (int fd, size_t size=0); +void memory_unmap (int fd, void*, size_t size); + +void print_error (const char* msg); + +#ifdef _WIN32 + +/** UNIX pread approximation. Does change file pointer. Is not thread-safe */ +size_t pread(int fildes, void *buf, size_t nbyte, __int64 offset); // off_t not 64-bit dynamically on Win32 + +/** UNIX pwrite approximation. Does change file pointer. 
Is not thread-safe */ +size_t pwrite(int fildes, const void *buf, size_t nbyte, __int64 offset); + +int inet_aton(const char *cp, struct in_addr *inp); + +#endif + +tint usec_time (); + +bool make_socket_nonblocking(evutil_socket_t s); + +bool close_socket (evutil_socket_t sock); + +struct timeval* tint2tv (tint t); + + +int inline stringreplace(std::string& source, const std::string& find, const std::string& replace) +{ + int num=0; + std::string::size_type fLen = find.size(); + std::string::size_type rLen = replace.size(); + for (std::string::size_type pos=0; (pos=source.find(find, pos))!=std::string::npos; pos+=rLen) + { + num++; + source.replace(pos, fLen, replace); + } + return num; +} + + +std::string hex2bin(std::string input); + + +}; + +#endif + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/do_tests.sh tribler-6.2.0/Tribler/SwiftEngine/do_tests.sh --- tribler-6.2.0/Tribler/SwiftEngine/do_tests.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/do_tests.sh 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,9 @@ +#!/bin/bash + +for tst in `ls tests/*test | grep -v ledbat`; do + if echo $tst; $tst > $tst.log; then + echo $tst OK + else + echo $tst FAIL + fi +done Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/apusapus.png and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/apusapus.png differ Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/binmaps-alenex.pdf and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/binmaps-alenex.pdf differ Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/cc-states.png and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/cc-states.png differ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.nroff tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.nroff --- tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.nroff 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.nroff 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,2087 @@ +.\" Auto generated Nroff by NroffEdit on April 12, 2010 +.pl 10.0i +.po 0 +.ll 7.2i +.lt 7.2i +.nr LL 7.2i +.nr LT 7.2i +.ds LF Grishchenko and Bakker +.ds RF FORMFEED[Page %] +.ds LH Internet-Draft +.ds RH December 19, 2011 +.ds CH swift +.ds CF Expires June 21, 2012 +.hy 0 +.nh +.ad l +.in 0 +.nf +.tl 'PPSP' 'A. Bakker' +.tl 'Internet-Draft' 'TU Delft' +.tl 'Intended status: Informational' +.tl 'Expires: June 21, 2012' 'December 19, 2011' + +.fi +.in 3 +.in 12 +.ti 8 +Peer-to-Peer Streaming Protocol (PPSP) \% + +.ti 0 +Abstract + +.in 3 +The Generic Multiparty Protocol (swift) is a peer-to-peer based transport +protocol for content dissemination. It can be used for streaming on-demand +and live video content, as well as conventional downloading. In swift, the +clients consuming the content participate in the dissemination by forwarding +the content to other clients via a mesh-like structure. It is a generic +protocol which can run directly on top of UDP, TCP, HTTP or as a RTP +profile. Features of swift are short time-till-playback and extensibility. +Hence, it can use different mechanisms to prevent freeriding, and work +with different peer discovery schemes (centralized trackers or +Distributed Hash Tables). Depending on the underlying transport +protocol, swift can also use different congestion control algorithms, +such as LEDBAT, and offer transparent NAT traversal. 
Finally, swift maintains +only a small amount of state per peer and detects malicious modification of +content. This documents describes swift and how it satisfies the requirements +for the IETF Peer-to-Peer Streaming Protocol (PPSP) Working Group's peer +protocol. + + +.ti 0 +Status of this memo + +This Internet-Draft is submitted to IETF in full conformance with the +provisions of BCP 78 and BCP 79. + +Internet-Drafts are working documents of the Internet Engineering Task Force +(IETF), its areas, and its working groups. Note that other groups may also +distribute working documents as Internet- Drafts. + +Internet-Drafts are draft documents valid for a maximum of six months and +may be updated, replaced, or obsoleted by other documents at any time. It +is inappropriate to use Internet-Drafts as reference material or to cite +them other than as "work in progress." + +The list of current Internet-Drafts can be accessed at +\%http://www.ietf.org/ietf/1id-abstracts.txt. + +The list of Internet-Draft Shadow Directories can be accessed at +http://www.ietf.org/shadow.html. + + +.nf +Copyright (c) 2011 IETF Trust and the persons identified as the +document authors. All rights reserved. + +This document is subject to BCP 78 and the IETF Trust's Legal +Provisions Relating to IETF Documents +\%(http://trustee.ietf.org/license-info) in effect on the date of +publication of this document. Please review these documents +carefully, as they describe your rights and restrictions with respect +to this document. Code Components extracted from this document must +include Simplified BSD License text as described in Section 4.e of +the Trust Legal Provisions and are provided without warranty as +described in the Simplified BSD License. + +.\" \# TD4 -- Set TOC depth by altering this value (TD5 = depth 5) +.\" \# TOC -- Beginning of auto updated Table of Contents +.in 0 +Table of Contents + +.nf + 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 + 1.1. Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 3 + 1.2. Conventions Used in This Document . . . . . . . . . . . . . 4 + 1.3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . 5 + 2. Overall Operation . . . . . . . . . . . . . . . . . . . . . . . 6 + 2.1. Joining a Swarm . . . . . . . . . . . . . . . . . . . . . . 6 + 2.2. Exchanging Chunks . . . . . . . . . . . . . . . . . . . . . 6 + 2.3. Leaving a Swarm . . . . . . . . . . . . . . . . . . . . . . 7 + 3. Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 + 3.1. HANDSHAKE . . . . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3. HAVE . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3.1. Bin Numbers . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3.2. HAVE Message . . . . . . . . . . . . . . . . . . . . . 9 + 3.4. ACK . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 + 3.5. DATA and HASH . . . . . . . . . . . . . . . . . . . . . . . 10 + 3.5.1. Merkle Hash Tree . . . . . . . . . . . . . . . . . . . 10 + 3.5.2. Content Integrity Verification . . . . . . . . . . . . 11 + 3.5.3. The Atomic Datagram Principle . . . . . . . . . . . . . 11 + 3.5.4. DATA and HASH Messages . . . . . . . . . . . . . . . . 12 + 3.6. HINT . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 + 3.7. Peer Address Exchange and NAT Hole Punching . . . . . . . . 13 + 3.8. KEEPALIVE . . . . . . . . . . . . . . . . . . . . . . . . . 14 + 3.9. VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 14 + 3.10. Conveying Peer Capabilities . . . . . . . . . . . . . . 
. 14 + 3.11. Directory Lists . . . . . . . . . . . . . . . . . . . . . 14 + 4. Automatic Detection of Content Size . . . . . . . . . . . . . . 14 + 4.1. Peak Hashes . . . . . . . . . . . . . . . . . . . . . . . . 15 + 4.2. Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 16 + 5. Live streaming . . . . . . . . . . . . . . . . . . . . . . . . 17 + 6. Transport Protocols and Encapsulation . . . . . . . . . . . . . 17 + 6.1. UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 + 6.1.1. Chunk Size . . . . . . . . . . . . . . . . . . . . . . 17 + 6.1.2. Datagrams and Messages . . . . . . . . . . . . . . . . 18 + 6.1.3. Channels . . . . . . . . . . . . . . . . . . . . . . . 18 + 6.1.4. HANDSHAKE and VERSION . . . . . . . . . . . . . . . . . 19 + 6.1.5. HAVE . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.6. ACK . . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.7. HASH . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.8. DATA . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.9. KEEPALIVE . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.10. Flow and Congestion Control . . . . . . . . . . . . . 21 + 6.2. TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 + 6.3. RTP Profile for PPSP . . . . . . . . . . . . . . . . . . . 21 + 6.3.1. Design . . . . . . . . . . . . . . . . . . . . . . . . 22 + 6.3.2. PPSP Requirements . . . . . . . . . . . . . . . . . . . 24 + 6.4. HTTP (as PPSP) . . . . . . . . . . . . . . . . . . . . . . 27 + 6.4.1. Design . . . . . . . . . . . . . . . . . . . . . . . . 27 + 6.4.2. PPSP Requirements . . . . . . . . . . . . . . . . . . . 29 + 7. Security Considerations . . . . . . . . . . . . . . . . . . . . 32 + 8. Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . 32 + 8.1. 32 bit vs 64 bit . . . . . . . . . . . . . . . . . . . . . 32 + 8.2. IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 + 8.3. Congestion Control Algorithms . . . . . . . . . . . . . . . 32 + 8.4. Piece Picking Algorithms . . . . . . . . . . . . . . . . . 33 + 8.5. Reciprocity Algorithms . . . . . . . . . . . . . . . . . . 33 + 8.6. Different crypto/hashing schemes . . . . . . . . . . . . . 33 + 9. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 + 9.1. Design Goals . . . . . . . . . . . . . . . . . . . . . . . 34 + 9.2. Not TCP . . . . . . . . . . . . . . . . . . . . . . . . . 35 + 9.3. Generic Acknowledgments . . . . . . . . . . . . . . . . . 36 + Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . 37 + References . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 + Authors' addresses . . . . . . . . . . . . . . . . . . . . . . . . 39 +.fi +.in 3 + +.\" \# ETC -- End of auto updated Table of Contents + + + +.ti 0 +1. Introduction + +.ti 0 +1.1. Purpose + +This document describes the Generic Multiparty Protocol (swift), designed +from the ground up for the task of disseminating the same content to a group +of interested parties. Swift supports streaming on-demand and +live video content, as well as conventional downloading, thus covering +today's three major use cases for content distribution. To fulfil this task, +clients consuming the content are put on equal footing with the servers +initially providing the content to create a peer-to-peer system where +everyone can provide data. Each peer connects to a random set of other peers +resulting in a mesh-like structure. + +Swift uses a simple method of naming content based on self-certification. 
In particular, content in swift is identified by a single cryptographic hash
+that is the root hash in a Merkle hash tree calculated recursively from the
+content [ABMRKL]. This self-certifying hash tree allows every peer to
+directly detect when a malicious peer tries to distribute fake content. It
+also ensures only a small amount of information is needed to start a
+download (just the root hash and some peer addresses).
+
+Swift uses a novel method of addressing chunks of content called "bin
+numbers". Bin numbers allow the addressing of a binary interval of data
+using a single integer. This reduces the amount of state that needs to be
+recorded per peer and the space needed to denote intervals on the wire,
+making the protocol light-weight. In general, this numbering system allows
+swift to work with simpler data structures, e.g. to use arrays instead of
+binary trees, thus reducing complexity.
+
+Swift is a generic protocol which can run directly on top of UDP, TCP, HTTP,
+or as a layer below RTP, similar to SRTP [RFC3711]. As such, swift defines a
+common set of messages that make up the protocol, which can have different
+representations on the wire depending on the lower-level protocol used. When
+the lower-level transport is UDP, swift can also use different congestion
+control algorithms and facilitate NAT traversal.
+
+In addition, swift is extensible in the mechanisms it uses to promote client
+contribution and prevent freeriding, that is, how to deal with peers
+that only download content but never upload to others. Furthermore,
+it can work with different peer discovery schemes,
+such as centralized trackers or fast Distributed Hash Tables [JIM11].
+
+This document describes not only the swift protocol but also how it
+satisfies the requirements for the IETF Peer-to-Peer Streaming Protocol
+(PPSP) Working Group's peer protocol [PPSPCHART,I-D.ietf-ppsp-reqs].
+A reference implementation of swift over UDP is available [SWIFTIMPL].
+
+
+.ti 0
+1.2. Conventions Used in This Document
+
+The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
+this document are to be interpreted as described in [RFC2119].
+
+.ti 0
+1.3. Terminology
+
+.in 3
+message
+.br
+.in 8
+The basic unit of swift communication. A message will have different
+representations on the wire depending on the transport protocol used. Messages
+are typically multiplexed into a datagram for transmission.
+
+.in 3
+datagram
+.br
+.in 8
+A sequence of messages that is offered as a unit to the underlying transport
+protocol (UDP, etc.). The datagram is swift's Protocol Data Unit (PDU).
+
+.in 3
+content
+.br
+.in 8
+Either a live transmission, a pre-recorded multimedia asset, or a file.
+
+.in 3
+bin
+.br
+.in 8
+A number denoting a specific binary interval of the content (i.e., one
+or more consecutive chunks).
+
+.in 3
+chunk
+.br
+.in 8
+The basic unit in which the content is divided. E.g. a block of N
+kilobytes.
+
+.in 3
+hash
+.br
+.in 8
+The result of applying a cryptographic hash function, more specifically
+a modification detection code (MDC) [HAC01], such as SHA1 [FIPS180-2],
+to a piece of data.
+
+.in 3
+root hash
+.br
+.in 8
+The root in a Merkle hash tree calculated recursively from the content.
+
+.in 3
+swarm
+.br
+.in 8
+A group of peers participating in the distribution of the same content.
+
+.in 3
+swarm ID
+.br
+.in 8
+Unique identifier for a swarm of peers, in swift the root hash of the
+content (video-on-demand, download) or a public key (live streaming).
+
+.in 3
+tracker
+.br
+.in 8
+An entity that records the addresses of peers participating in a swarm,
+usually for a set of swarms, and makes this membership information
+available to other peers on request.
+
+.in 3
+choking
+.br
+.in 8
+When a peer A is choking peer B it means that A is currently not
+willing to accept requests for content from B.
+.in 3
+
+
+.ti 0
+2. Overall Operation
+
+The basic unit of communication in swift is the message. Multiple messages are
+multiplexed into a single datagram for transmission. A datagram (and hence the
+messages it contains) will have different representations on the wire depending
+on the transport protocol used (see Sec. 6).
+
+
+.ti 0
+2.1. Joining a Swarm
+
+Consider a peer A that wants to download a certain content asset.
+To commence a swift download, peer A must have the swarm ID of the content
+and a list of one or more tracker contact points (e.g. host+port). The list
+of trackers is optional in the presence of a decentralized tracking mechanism.
+The swarm ID consists of the swift root hash of the content (video-on-demand,
+downloading) or a public key (live streaming).
+
+Peer A now registers with the tracker following e.g. the PPSP tracker
+protocol [I-D.ietf-ppsp-reqs] and receives the IP address and port of peers
+already in the swarm, say B, C, and D. Peer A now sends a datagram
+containing a HANDSHAKE message to B, C, and D. This message serves as an
+end-to-end check that the peers are actually in the correct swarm, and
+contains the root hash of the swarm. Peers B and C respond with datagrams
+containing a HANDSHAKE message and one or more HAVE messages. A HAVE message
+conveys (part of) the chunk availability of a peer and thus contains a bin
+number that denotes what chunks of the content peer B, resp. C have. Peer D
+sends a datagram with just a HANDSHAKE and omits HAVE messages as a way of
+choking A.
+
+.ti 0
+2.2. Exchanging Chunks
+
+In response to B and C, A sends new datagrams to B and C containing HINT messages.
+A HINT or request message indicates the chunks that a peer wants to
+download, and contains a bin number. The HINT messages to B and C refer to
+disjunct sets of chunks. B and C respond with datagrams containing HASH,
+HAVE and DATA messages. The HASH messages contain all cryptographic hashes
+that peer A needs to verify the integrity of the content chunk sent in the
+DATA message, using the content's root hash as trusted anchor, see Sec. 3.5.
+Using these hashes peer A verifies that the chunks received from B and C are
+correct. It also updates the chunk availability of B and C using the information
+in the received HAVE messages.
+
+After processing, A sends a datagram containing HAVE messages for the chunks
+it just received to all its peers. In the datagram to B and C it includes an
+ACK message acknowledging the receipt of the chunks, and adds HINT messages
+for new chunks. ACK messages are not used when a reliable transport protocol
+is used. When e.g. C finds that A obtained a chunk (from B) that C did not
+yet have, C's next datagram includes a HINT for that chunk.
+
+Peer D does not send HAVE messages to A when it downloads chunks from other peers,
+until D decides to unchoke peer A. In that case, it sends a datagram with
+HAVE messages to inform A of its current availability.
If B or C decide to choke A, they stop sending HAVE and DATA messages
+and A should then rerequest
+from other peers. They may continue to send HINT messages, or periodic
+KEEPALIVE messages such that A keeps sending them HAVE messages.
+
+Once peer A has received all content (video-on-demand use case) it stops
+sending messages to all other peers that have all content (a.k.a. seeders).
+Peer A MAY also contact the tracker or another source again to obtain more
+peer addresses.
+
+
+.ti 0
+2.3. Leaving a Swarm
+
+Depending on the transport protocol used, peers should either use explicit
+leave messages or implicitly leave a swarm by no longer responding
+to messages. Peers that learn about the departure should remove these peers
+from the current peer list. The implicit-leave mechanism works for both graceful and
+ungraceful leaves (i.e., peer crashes or disconnects). When leaving gracefully, a
+peer should deregister from the tracker following the (PPSP) tracker protocol.
+
+
+.ti 0
+3. Messages
+
+.fi
+In general, no error codes or responses are used in the protocol; absence
+of any response indicates an error. Invalid messages are discarded.
+
+For the sake of simplicity, one swarm of peers always deals with one content
+asset (e.g. file) only. Retrieval of large collections of files is done by
+retrieving a directory list file and then recursively retrieving files, which
+might also turn out to be directory lists, as described in Sec. 3.11.
+
+.ti 0
+3.1. HANDSHAKE
+
+As an end-to-end check that the peers are actually in the correct swarm, the
+initiating peer and the addressed peer SHOULD send a HANDSHAKE message in
+the first datagrams they exchange. The only payload of the HANDSHAKE message
+is the root hash of the content.
+
+After the handshakes are exchanged, the initiator knows that the peer really
+responds. Hence, the second datagram the initiator sends MAY already contain
+some heavy payload. To minimize the number of initialization roundtrips,
+implementations MAY dispense with the HANDSHAKE message. To the same end,
+the first two datagrams exchanged MAY also contain some minor payload, e.g.
+HAVE messages to indicate the current progress of a peer or a HINT
+(see Sec. 3.6).
+
+
+.ti 0
+3.3. HAVE
+
+The HAVE message is used to convey which chunks a peer has available,
+expressed in a new content addressing scheme called "bin numbers".
+
+.ti 0
+3.3.1. Bin Numbers
+
+Swift employs a generic content addressing scheme based on binary intervals
+("bins" in short). The smallest interval is a chunk (e.g. an N kilobyte
+block), the top interval is the complete 2**63 range. A novel addition
+to the classical scheme is "bin numbers", a scheme of numbering binary
+intervals which lays them out into a vector nicely. Consider a chunk
+interval of width W. To derive the bin numbers of the complete interval and
+the subintervals, a minimal balanced binary tree is built that is at least W
+chunks wide at the base. The leaves from left-to-right correspond to
+the chunks 0..W in the interval, and have bin number I*2 where I is the
+index of the chunk (counting beyond W-1 to balance the tree). The higher
+level nodes P in the tree have bin number
+
+   binP = (binL + binR) / 2
+
+.br
+where binL is the bin of node P's left-hand child and binR is the bin of node
+P's right-hand child. Given that each node in the tree represents a
+subinterval of the original interval, each such subinterval now is
+addressable by a bin number, a single integer.
The bin number tree of an interval of width W=8 looks like this:
+
+
+
+
+                 7
+.br
+               /   \\
+.br
+              /     \\
+.br
+             /       \\
+.br
+            /         \\
+.br
+           3           11
+.br
+          / \\          / \\
+.br
+         /   \\        /   \\
+.br
+        /     \\      /     \\
+.br
+       1       5    9       13
+.br
+      / \\     / \\   / \\     / \\
+.br
+     0   2   4   6 8   10 12   14
+.br
+
+.fi
+So bin 7 represents the complete interval, 3 represents the interval of
+chunks 0..3 and 1 represents the interval of chunks 0 and 1. The special
+numbers 0xFFFFFFFF (32-bit) or 0xFFFFFFFFFFFFFFFF (64-bit) stand for an
+empty interval, and 0x7FFF...FFF stands for "everything".
+
+
+.ti 0
+3.3.2. HAVE Message
+
+When a receiving peer has successfully checked the integrity of a chunk or
+interval of chunks it MUST send a HAVE message to all peers it wants to
+interact with. The latter allows the HAVE message to be used as a method of
+choking. The HAVE message MUST contain the bin number of the biggest
+complete interval of all chunks the receiver has received and checked so far
+that fully includes the interval of chunks just received. So the bin number
+MUST denote at least the interval received, but the receiver is supposed to
+aggregate and acknowledge bigger bins, when possible.
+
+As a result, every single chunk is acknowledged a logarithmic number of times.
+That provides some necessary redundancy of acknowledgments and sufficiently
+compensates for unreliable transport protocols.
+
+To record which chunks a peer has in the state that an implementation keeps
+for each peer, an implementation MAY use the "binmap" data structure, which
+is a hybrid of a bitmap and a binary tree, discussed in detail in [BINMAP].
+
+
+.ti 0
+3.4. ACK
+
+When swift is run over an unreliable transport protocol, an implementation MAY
+choose to use ACK messages to acknowledge received data. When a receiving
+peer has successfully checked the integrity of a chunk or interval of chunks
+C it MUST send an ACK message containing the bin number of its biggest
+complete interval covering C to the sending peer (see HAVE). To facilitate delay-based
+congestion control, an ACK message contains a timestamp.
+
+
+.ti 0
+3.5. DATA and HASH
+
+The DATA message is used to transfer chunks of content. The associated HASH message
+carries cryptographic hashes that are necessary for a receiver to check
+the integrity of the chunk. Swift's content integrity protection is based on a
+Merkle hash tree and works as follows.
+
+.ti 0
+3.5.1. Merkle Hash Tree
+
+Swift uses a method of naming content based on self-certification. In particular,
+content in swift is identified by a single cryptographic hash that is the
+root hash in a Merkle hash tree calculated recursively from the content [ABMRKL].
+This self-certifying hash tree allows every peer to directly detect when a malicious
+peer tries to distribute fake content. It also ensures only a small amount
+of information is needed to start a download (the root hash and some peer
+addresses). For live streaming public keys and dynamic trees are used, see below.
+
+The Merkle hash tree of a content asset that is divided into N chunks
+is constructed as follows. Note the construction does not assume chunks
+of content to be fixed size. Given a cryptographic hash function, more specifically
+a modification detection code (MDC) [HAC01], such as SHA1, the hashes of all the
+chunks of the content are calculated. Next, a binary tree of sufficient height
+is created. Sufficient
+height means that the lowest level in the tree has enough nodes to hold all
+chunk hashes in the set, as before, see HAVE message.
The figure below shows
+the tree for a content asset consisting of 7 chunks. As before with the
+content addressing scheme, the leaves of the tree correspond to a chunk and
+in this case are assigned the hash of that chunk, starting at the left-most leaf.
+As the base of the tree may be wider than the number of chunks, any remaining
+leaves in the tree are assigned an empty hash value of all zeros. Finally, the hash
+values of the higher levels in the tree are calculated, by concatenating the hash
+values of the two children (again left to right) and computing the hash of that
+aggregate. This process ends in a hash value for the root node, which is called
+the "root hash". Note the root hash only depends on the content and any modification
+of the content will result in a different root hash.
+
+
+
+
+                 7 = root hash
+.br
+               /   \\
+.br
+              /     \\
+.br
+             /       \\
+.br
+            /         \\
+.br
+           3*          11
+.br
+          / \\          / \\
+.br
+         /   \\        /   \\
+.br
+        /     \\      /     \\
+.br
+       1       5    9       13* = uncle hash
+.br
+      / \\     / \\   / \\     / \\
+.br
+     0   2   4   6 8   10* 12  14
+.br
+
+     C0  C1  C2  C3 C4  C5  C6  E
+.br
+     = chunk index          ^^ = empty hash
+.br
+
+
+.ti 0
+3.5.2. Content Integrity Verification
+
+.fi
+Assuming a peer receives the root hash of the content it wants to download
+from a trusted source, it can check the integrity of any chunk of that
+content it receives as follows. It first calculates the hash of the chunk
+it received, for example chunk C4 in the previous figure. Along with this
+chunk it MUST receive the hashes required to check the integrity of that
+chunk. In principle, these are the hash of the chunk's sibling (C5) and
+that of its "uncles". A chunk's uncles are the sibling Y of its parent X,
+and the uncle of that Y, recursively until the root is reached. For chunk C4
+its uncles are bins 13 and 3, marked with * in the figure. Using this information
+the peer recalculates the root hash of the tree, and compares it to the
+root hash it received from the trusted source. If they match, the chunk of
+content has been positively verified to be the requested part of the content.
+Otherwise, the sending peer either sent the wrong content or the wrong
+sibling or uncle hashes. For simplicity, the set of sibling and uncle
+hashes is collectively referred to as the "uncle hashes".
+
+In the case of live streaming the tree of chunks grows dynamically and
+content is identified with a public key instead of a root hash, as the root
+hash is undefined or, more precisely, transient, as long as new data is
+generated by the live source. Live streaming is described in more detail
+below, but content verification works the same for both live and predefined
+content.
+
+.ti 0
+3.5.3. The Atomic Datagram Principle
+
+As explained above, a datagram consists of a sequence of messages. Ideally,
+every datagram sent must be independent of other datagrams, so each
+datagram SHOULD be processed separately and a loss of one datagram MUST NOT
+disrupt the flow. Thus, as a datagram carries zero or more messages,
+neither messages nor message interdependencies should span over multiple
+datagrams.
+
+This principle implies that as any chunk is verified using its uncle
+hashes the necessary hashes MUST be put into the same datagram as the
+chunk's data (Sec. 3.5.4). As a general rule, if some additional data is
+still missing to process a message within a datagram, the message SHOULD be
+dropped.
+
+The hashes necessary to verify a chunk are in principle its sibling's hash
+and all its uncle hashes, but the set of hashes to be sent can be optimized,
+as the sketch below illustrates.
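+
+The bin arithmetic underlying this hash selection is compact. The following
+C++ sketch is illustrative only and not normative; all identifiers are
+invented for exposition. It computes the sibling-plus-uncle bins for a
+chunk using the numbering rules of Sec. 3.3.1:
+
+.nf
+  #include <cstdint>
+  #include <cstdio>
+  #include <vector>
+
+  // Layer of a bin: the number of trailing 1-bits in its number.
+  static int layer(uint64_t b) {
+      int l = 0;
+      while (b & 1) { b >>= 1; l++; }
+      return l;
+  }
+  // Sibling and parent follow from binP = (binL + binR) / 2.
+  static uint64_t sibling(uint64_t b) { return b ^ (2ULL << layer(b)); }
+  static uint64_t parent(uint64_t b) {
+      int l = layer(b);
+      return (b & ~(2ULL << l)) | (1ULL << l);
+  }
+  // Bins of the sibling and uncle hashes needed to verify chunk_bin
+  // against root_bin. A real sender would further prune the bins that
+  // the receiver's acknowledgments prove it already has.
+  static std::vector<uint64_t> uncles(uint64_t chunk_bin,
+                                      uint64_t root_bin) {
+      std::vector<uint64_t> out;
+      for (uint64_t b = chunk_bin; b != root_bin; b = parent(b))
+          out.push_back(sibling(b));
+      return out;
+  }
+  int main() {
+      // W=8 tree of Sec. 3.3.1: chunk C6 is bin 12, the root is bin 7.
+      for (uint64_t u : uncles(12, 7))
+          printf("%llu ", (unsigned long long)u);  // prints "14 9 3"
+      printf("\n");
+      return 0;
+  }
+.fi
+
+After pruning against the receiver's acknowledgments, exactly the hashes of
+the remaining bins (14 and 9 in the example of the next paragraph) go into
+the datagram.
+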
+Before sending a packet of data to the receiver, the sender inspects the
+receiver's previous acknowledgments (HAVE or ACK) to derive which hashes the
+receiver already has for sure. Suppose the receiver had acknowledged bin 1
+(first two chunks of the file), then it must already have uncle hashes 5,
+11 and so on. That is because those hashes are necessary to check packets of
+bin 1 against the root hash. Then, hashes 3, 7 and so on must also be known
+as they are calculated in the process of checking the uncle hash chain.
+Hence, to send bin 12 (i.e. the 7th chunk of content), the sender needs to
+include just the hashes for bins 14 and 9, which let the data be checked
+against hash 11 which is already known to the receiver.
+
+The sender MAY optimistically skip hashes which were sent out in previous,
+still unacknowledged datagrams. It is an optimization tradeoff between
+redundant hash transmission and the possibility of collateral data loss: if
+some necessary hashes are lost in the network, some delivered data
+cannot be verified and thus has to be dropped. In either case, the receiver
+builds the Merkle tree on-demand, incrementally, starting from the root
+hash, and uses it for data validation.
+
+In short, the sender MUST put into the datagram the missing hashes necessary
+for the receiver to verify the chunk.
+
+.ti 0
+3.5.4. DATA and HASH Messages
+
+Concretely, a peer that wants to send a chunk of content creates a datagram
+that MUST consist of one or more HASH messages and a DATA message. The datagram
+MUST contain a HASH message for each hash the receiver misses for integrity
+checking. A HASH message MUST contain the bin number and hash data for each
+of those hashes. The DATA message MUST contain the bin number of the chunk
+and the chunk itself. A peer MAY send the required messages for multiple chunks
+in the same datagram.
+
+
+.ti 0
+3.6. HINT
+
+While bulk download protocols normally do explicit requests for certain ranges
+of data (i.e., use a pull model, for example, BitTorrent [BITTORRENT]), live
+streaming protocols quite often use a request-less push model to save round
+trips. Swift supports both models of operation.
+
+A peer MUST send a HINT message containing the bin of the chunk interval it
+wants to download. A peer receiving a HINT message MAY send out requested
+pieces. When it receives multiple HINTs (either in one datagram or in multiple),
+the peer SHOULD process the HINTs sequentially. When live streaming,
+it may also send some other chunks in case it runs out of requests or
+for some other reason. In that case the only purpose of HINT messages is
+to coordinate peers and to avoid unnecessary data retransmission, hence
+the name.
+
+
+
+.ti 0
+3.7. Peer Address Exchange and NAT Hole Punching
+
+Peer address exchange messages (or PEX messages for short) are common for
+many peer-to-peer protocols. By exchanging peer addresses in gossip fashion,
+peers relieve central coordinating entities (the trackers) from unnecessary
+work. swift optionally features two types of PEX messages: PEX_REQ and PEX_ADD.
+A peer that wants to retrieve some peer addresses MUST send a PEX_REQ message.
+The receiving peer MAY respond with a PEX_ADD message containing the addresses
+of several peers. The addresses MUST be of peers it has recently exchanged
+messages with to guarantee liveliness.
+
+.fi
+To unify peer exchange and NAT hole punching functionality, the
+sending pattern of PEX messages is restricted.
As the swift handshake
+is able to do simple NAT hole punching [SNP] transparently, PEX
+messages must be emitted in a way that facilitates this. Namely,
+once peer A introduces peer B to peer C by sending a PEX_ADD message to
+C, it SHOULD also send a message to B introducing C. The messages
+SHOULD be within 2 seconds from each other, but MAY not be simultaneous,
+instead leaving a gap of twice the "typical" RTT, i.e.
+\%300-600ms. The peers are supposed to initiate handshakes to each
+other, thus forming a simple NAT hole punching pattern where the
+introducing peer effectively acts as a STUN server [RFC5389]. Still, peers
+MAY ignore PEX messages if uninterested in obtaining new peers or
+because of security considerations (rate limiting) or any other
+reason.
+
+The PEX messages can be used to construct a dedicated tracker peer.
+
+
+.ti 0
+3.8. KEEPALIVE
+
+A peer MUST send a datagram containing a KEEPALIVE message periodically
+to each peer it wants to interact with in the future but has no
+other messages to send them at present.
+
+
+.ti 0
+3.9. VERSION
+.fi
+Peers MUST convey which version of the swift protocol they support using a
+VERSION message. This message MUST be included in the initial (handshake)
+datagrams and MUST indicate which version of the swift protocol the sending
+peer supports.
+
+
+.ti 0
+3.10. Conveying Peer Capabilities
+.fi
+Peers may support just a subset of the swift messages. For example, peers
+running over TCP may not accept ACK messages, or peers used with a centralized
+tracking infrastructure may not accept PEX messages. For these reasons, peers
+SHOULD signal which subset of the swift messages they support by means of
+the MSGTYPE_RCVD message. This message SHOULD be included in the initial
+(handshake) datagrams and MUST indicate which swift protocol messages
+the sending peer supports.
+
+
+.ti 0
+3.11. Directory Lists
+
+Directory list files MUST start with magic bytes ".\\n..\\n". The rest of
+the file is a newline-separated list of hashes and file names for the
+content of the directory. An example:
+
+.nf
+\&.
+\&..
+1234567890ABCDEF1234567890ABCDEF12345678 readme.txt
+01234567890ABCDEF1234567890ABCDEF1234567 big_file.dat
+
+
+
+.ti 0
+4. Automatic Detection of Content Size
+
+.fi
+In swift, the root hash of a static content asset, such as a video file,
+along with some peer addresses is sufficient to start a download.
+In addition, swift can reliably and automatically derive the size
+of such content from information received from the network when fixed-size
+chunks are used. As a result, it is not necessary to include
+the size of the content asset as the metadata of the content,
+in addition to the root hash. Implementations of swift MAY use
+this automatic detection feature.
+
+.ti 0
+4.1. Peak Hashes
+
+The ability for a newcomer peer to detect the size of the content
+depends heavily on the concept of peak hashes. Peak hashes,
+in general, enable two cornerstone features of swift: reliable file
+size detection and download/live streaming unification (see Sec. 5).
+The concept of peak hashes depends on the concepts of filled and
+incomplete bins. Recall that when constructing the binary trees
+for content verification and addressing, the base of the tree may
+have more leaves than the number of chunks in the content. In the
+Merkle hash tree these leaves were assigned empty all-zero
+hashes to be able to calculate the higher level hashes.
A filled
+bin is now defined as a bin number that addresses an interval
+of leaves that consists only of hashes of content chunks, not
+empty hashes. Conversely, an incomplete (not filled) bin
+addresses an interval that also contains empty hashes,
+typically an interval that extends past the end of the file.
+In the following figure, bins 7, 11, 13 and 14 are incomplete;
+the rest are filled.
+
+Formally, a peak hash is a hash in the Merkle tree defined
+over a filled bin, whose sibling is defined over an incomplete bin.
+Practically, suppose a file is 7162 bytes long and a chunk
+is 1 kilobyte. That file fits into 7 chunks, the tail chunk being
+1018 bytes long. The Merkle tree for that file looks as follows.
+Following the definition, the peak hashes of this file are in
+bins 3, 9 and 12, denoted with a *. E denotes an empty
+hash.
+
+                 7
+.br
+               /   \\
+.br
+              /     \\
+.br
+             /       \\
+.br
+            /         \\
+.br
+           3*          11
+.br
+          / \\          / \\
+.br
+         /   \\        /   \\
+.br
+        /     \\      /     \\
+.br
+       1       5    9*      13
+.br
+      / \\     / \\   / \\     / \\
+.br
+     0   2   4   6 8   10 12*  14
+.br
+
+     C0  C1  C2  C3 C4  C5  C6  E
+.br
+                        = 1018 bytes
+.br
+
+Peak hashes can be explained by the binary representation of the
+number of chunks the file occupies. The binary representation for
+7 is 111. Every "1" in the binary representation of the file's
+length in chunks corresponds to a peak hash. For this particular file there
+are indeed three peaks, bin numbers 3, 9, 12. The number of peak
+hashes for a file is therefore also at most logarithmic with its
+size.
+
+.fi
+A peer knowing which bins contain the peak hashes for the file
+can therefore calculate the number of chunks it consists of, and
+thus get an estimate of the file size (given all chunks but the last
+are fixed size). Which bins are the peaks can be securely communicated
+from one (untrusted) peer A to another B by letting A send the
+peak hashes and their bin numbers to B. It can be shown that
+the root hash that B obtained from a trusted source is sufficient
+to verify that these are indeed the right peak hashes, as follows.
+
+Lemma: Peak hashes can be checked against the root hash.
+
+Proof: (a) Any peak hash is always the left sibling. Otherwise, be
+it the right sibling, its left neighbor/sibling must also be
+defined over a filled bin, because of the way chunks are laid
+out in the leaves, contradiction. (b) For the rightmost
+peak hash, its right sibling is zero. (c) For any peak hash,
+its right sibling might be calculated using peak hashes to the
+right and zeros for empty bins. (d) Once the right sibling of
+the leftmost peak hash is calculated, its parent might be
+calculated. (e) Once that parent is calculated, we might
+trivially get to the root hash by concatenating the hash with
+zeros and hashing it repeatedly.
+
+.fi
+Informally, the Lemma might be expressed as follows: peak hashes cover all
+data, so the remaining hashes are either trivial (zeros) or might be
+calculated from peak hashes and zero hashes.
+
+Finally, once peer B has obtained the number of chunks in the content it
+can determine the exact file size as follows. Given that all chunks
+except the last are fixed size B just needs to know the size of the last
+chunk. Knowing the number of chunks B can calculate the bin number of the
+last chunk and download it. As always B verifies the integrity of this
+chunk against the trusted root hash. As there is only one chunk of data
+that leads to a successful verification, the size of this chunk must
+be correct.
B can then determine the exact file size as
+
+   (number of chunks - 1) * fixed chunk size + size of last chunk
+
+
+.ti 0
+4.2. Procedure
+
+A swift implementation that wants to use automatic size detection MUST
+operate as follows. When a peer B sends a DATA message for the first time
+to a peer A, B MUST include all the peak hashes for the content in the
+same datagram, unless A has already signalled earlier in the exchange
+that it knows the peak hashes by having acknowledged any bin, even the empty
+one. The receiver A MUST check the peak hashes against the root hash
+to determine the approximate content size. To obtain the definite content size,
+peer A MUST download the last chunk of the content from any peer that offers it.
+
+
+
+
+
+.ti 0
+5. Live streaming
+
+.fi
+In the case of live streaming a transfer is bootstrapped with a public key
+instead of a root hash, as the root hash is undefined or, more precisely,
+transient, as long as new data is being generated by the live source.
+Live/download unification is achieved by sending signed peak hashes on-demand,
+ahead of the actual data. As before, the sender might use acknowledgements
+to derive which content range the receiver has peak hashes for and to prepend
+the data hashes with the necessary (signed) peak hashes.
+Except for the fact that the set of peak hashes changes with time,
+other parts of the algorithm work as described in Sec. 3.
+
+As with static content assets in the previous section, in live streaming
+content length is not known in advance, but derived \%on-the-go from the peak
+hashes. Suppose our 7 KB stream extended to another kilobyte. Thus, now hash 7
+becomes the only peak hash, eating hashes 3, 9 and 12. So, the source sends
+out a SIGNED_HASH message to announce the fact.
+
+The number of cryptographic operations will be limited. For example,
+consider a 25 frame/second video transmitted over UDP. When each frame is
+transmitted in its own chunk, only 25 signature verification operations
+per second are required at the receiver for bitrates up to ~12.8
+megabit/second. For higher bitrates multiple UDP packets per frame
+are needed and the number of verifications doubles.
+
+
+
+
+
+.ti 0
+6. Transport Protocols and Encapsulation
+
+.ti 0
+6.1. UDP
+
+.ti 0
+6.1.1. Chunk Size
+
+Currently, swift-over-UDP is the preferred deployment option. Effectively, UDP
+allows the use of IP with minimal overhead and it also allows userspace
+implementations. The default is to use chunks of 1 kilobyte such that a
+datagram fits in an Ethernet-sized IP packet. The bin numbering
+allows the use of swift over Jumbo frames/datagrams. Both DATA and
+HAVE/ACK messages may use e.g. 8 kilobyte packets instead of the standard 1
+KiB. The hashing scheme stays the same. Using swift with 512 or 256-byte
+packets is theoretically possible with 64-bit byte-precise bin numbers, but IP
+fragmentation might be a better method to achieve the same result.
+
+
+.ti 0
+6.1.2. Datagrams and Messages
+
+When using UDP, the abstract datagram described above corresponds directly
+to a UDP datagram. Each message within a datagram has a fixed length, which
+depends on the type of the message. The first byte of a message denotes its type.
+The currently defined types are:
+
+    HANDSHAKE    = 0x00
+.br
+    DATA         = 0x01
+.br
+    ACK          = 0x02
+.br
+    HAVE         = 0x03
+.br
+    HASH         = 0x04
+.br
+    PEX_ADD      = 0x05
+.br
+    PEX_REQ      = 0x06
+.br
+    SIGNED_HASH  = 0x07
+.br
+    HINT         = 0x08
+.br
+    MSGTYPE_RCVD = 0x09
+.br
+    VERSION      = 0x10
+.br
+
+
+Furthermore, integers are serialized in network \%(big-endian) byte
+order. Consider the example of an ACK message (Sec. 3.4): it has
+message type 0x02 and a payload of a bin number, a four-byte integer
+(say, 1); hence, its on-the-wire representation for UDP can be written
+in hex as "02 00000001". This \%hex-like, two-characters-per-byte
+notation is used to represent message formats in the rest of this
+section.
+
+.ti 0
+6.1.3. Channels
+
+As it is increasingly complex for peers to enable UDP communication
+between each other due to NATs and firewalls, swift-over-UDP uses a
+multiplexing scheme called "channels" to allow multiple swarms to use
+the same UDP port. Channels loosely correspond to TCP connections and
+each channel belongs to a single swarm. When channels are used, each
+datagram starts with four bytes corresponding to the receiving channel
+number.
+
+
+.ti 0
+6.1.4. HANDSHAKE and VERSION
+
+A channel is established with a handshake. To start a handshake, the
+initiating peer needs to know:
+
+.nf
+(1) the IP address of a peer,
+(2) the peer's UDP port and
+(3) the root hash of the content (see Sec. 3.5.1).
+.fi
+
+To do the handshake the initiating peer sends a datagram that MUST
+start with an all-zeros channel number, followed by a VERSION message,
+then a HASH message whose payload is the root hash, and a HANDSHAKE
+message whose only payload is a locally unused channel number.
+
+On the wire the datagram will look something like this:
+.nf
+    00000000 10 01
+    04 7FFFFFFF 1234123412341234123412341234123412341234
+    00 00000011
+.fi
+(to an unknown channel: a handshake from channel 0x11 speaking protocol
+version 0x01, initiating a transfer of a file with root hash 123...1234)
+
+The receiving peer MUST respond with a datagram that starts with the
+channel number from the sender's HANDSHAKE message, followed by a
+VERSION message, then a HANDSHAKE message whose only payload is a
+locally unused channel number, followed by any other messages it wants
+to send.
+
+The peer's response datagram on the wire:
+.nf
+    00000011 10 01
+    00 00000022 03 00000003
+.fi
+(peer to the initiator: use channel number 0x22 for this transfer and
+protocol version 0x01; I also have the first 4 chunks of the file, see
+Sec. 4.3)
+
+.fi
+At this point, the initiator knows that the peer really responds; for
+that purpose channel ids MUST be random enough to prevent easy
+guessing. So, the third datagram of a handshake MAY already contain
+some heavy payload. To minimize the number of initialization
+roundtrips, the first two datagrams MAY also contain some minor
+payload, e.g. a couple of HAVE messages roughly indicating the current
+progress of a peer, or a HINT (see Sec. 3.6). When receiving the third
+datagram, both peers have proof that they are really talking to each
+other; the three-way handshake is complete.
+
+A peer MAY explicitly close a channel by sending a HANDSHAKE message
+that MUST contain an all-zeros channel number.
+
+On the wire:
+.nf
+    00 00000000
+
+
+.ti 0
+6.1.5. HAVE
+
+A HAVE message (type 0x03) states that the sending peer has the
+complete specified bin and has successfully checked its integrity:
+.nf
+    03 00000003
+(got/checked the first four kilobytes of a file/stream)
+
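+
+.fi
+For illustration only (Python; the builder name is ours, message type
+codes per Sec. 6.1.2), the responder's example datagram from
+Sec. 6.1.4 can be composed like this:
+.nf
+
+    import struct
+
+    HANDSHAKE, HAVE, VERSION = 0x00, 0x03, 0x10
+
+    def response_datagram(dest_channel, my_channel, have_bins,
+                          version=0x01):
+        """Responder's datagram (Sec. 6.1.4): the initiator's channel,
+        a VERSION message, a HANDSHAKE carrying a locally unused
+        channel, then HAVE messages (Sec. 6.1.5). All integers are
+        big-endian, per Sec. 6.1.2."""
+        dgram = struct.pack('>I', dest_channel)
+        dgram += struct.pack('>BB', VERSION, version)
+        dgram += struct.pack('>BI', HANDSHAKE, my_channel)
+        for b in have_bins:
+            dgram += struct.pack('>BI', HAVE, b)
+        return dgram
+
+    # reproduces the example reply: 00000011 1001 00 00000022 03 00000003
+    assert (response_datagram(0x11, 0x22, [3]).hex()
+            == '000000111001' '0000000022' '0300000003')
+.fi
+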
+.ti 0
+6.1.6. ACK
+
+An ACK message (type 0x02) acknowledges data that was received from
+its addressee; to facilitate delay-based congestion control, an ACK
+message also carries a timestamp, in particular, a 64-bit microsecond
+time:
+.nf
+    02 00000002 12345678
+(got the second kilobyte of the file from you; my microsecond
+timer was showing 0x12345678 at that moment)
+
+
+.ti 0
+6.1.7. HASH
+
+A HASH message (type 0x04) consists of a four-byte bin number and
+a cryptographic hash (e.g. a 20-byte SHA1 hash):
+.nf
+    04 7FFFFFFF 1234123412341234123412341234123412341234
+
+
+.ti 0
+6.1.8. DATA
+
+.fi
+A DATA message (type 0x01) consists of a four-byte bin number and the
+actual chunk. If a datagram contains a DATA message, the sender MUST
+always put the DATA message at the tail of the datagram. For example:
+.nf
+    01 00000000 48656c6c6f20776f726c6421
+(this message accommodates an entire file: "Hello world!")
+
+
+.ti 0
+6.1.9. KEEPALIVE
+
+Keepalives do not have a message type on UDP. They are simply
+datagrams consisting of a 4-byte channel id only.
+
+On the wire:
+.nf
+    00000022
+
+.ti 0
+6.1.10. Flow and Congestion Control
+
+.fi
+Explicit flow control is not necessary in swift-over-UDP. In the case
+of video-on-demand the receiver requests data explicitly from peers and
+is therefore in control of how much data is coming towards it. In the
+case of live streaming, where a push model may be used, the amount of
+incoming data is limited to the stream bitrate, which the receiver must
+be able to process anyway to play the stream. Should the receiver get
+saturated with data for any reason, that situation is detected by the
+congestion control. Swift-over-UDP can support different congestion
+control algorithms; in particular, it supports the new IETF Low Extra
+Delay Background Transport (LEDBAT) congestion control algorithm, which
+ensures that peer-to-peer traffic yields to regular best-effort traffic
+[LEDBAT].
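+
+Putting the encapsulation rules of this section together, a receiving
+implementation can split a datagram into messages with a table of
+per-type payload sizes. The following Python fragment is a sketch
+only: the table is partial (PEX and SIGNED_HASH omitted), bins are
+assumed 32-bit, and the ACK payload width follows the example of
+Sec. 6.1.6 rather than any normative statement:
+.nf
+
+    import struct
+
+    MSG_LEN = {0x00: 4,     # HANDSHAKE: channel id
+               0x01: None,  # DATA: bin + chunk, runs to datagram end
+               0x02: 8,     # ACK: bin + timestamp
+               0x03: 4,     # HAVE: bin
+               0x04: 24,    # HASH: bin + 20-byte hash
+               0x08: 4,     # HINT: bin
+               0x10: 1}     # VERSION
+
+    def parse_datagram(dgram):
+        """Split a swift-over-UDP datagram into (channel, messages).
+        A 4-byte datagram is a bare keepalive (Sec. 6.1.9); a DATA
+        message is always the datagram's tail (Sec. 6.1.8)."""
+        channel, = struct.unpack('>I', dgram[:4])
+        msgs, pos = [], 4
+        while pos < len(dgram):
+            mtype, size = dgram[pos], MSG_LEN[dgram[pos]]
+            if size is None:
+                msgs.append((mtype, dgram[pos + 1:]))  # rest of datagram
+                break
+            msgs.append((mtype, dgram[pos + 1:pos + 1 + size]))
+            pos += 1 + size
+        return channel, msgs
+.fi
+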
+
+.ti 0
+6.2. TCP
+
+.fi
+When run over TCP, swift becomes functionally equivalent to BitTorrent.
+Namely, most swift messages have corresponding BitTorrent messages and
+vice versa, except for BitTorrent's explicit interest declarations and
+choking/unchoking, which serve the classic implementation of the
+tit-for-tat algorithm [TIT4TAT]. However, TCP is not well suited for
+multiparty communication, as argued in Sec. 9.
+
+
+.ti 0
+6.3. RTP Profile for PPSP
+
+.fi
+In this section we sketch how swift can be integrated into RTP
+[RFC3550] to form the Peer-to-Peer Streaming Protocol (PPSP)
+[I-D.ietf-ppsp-reqs] running over UDP. The PPSP charter requires that
+existing media transfer protocols be used [PPSPCHART]. Hence, the
+general idea is to define swift as a profile of RTP, in the same way as
+the Secure Real-time Transport Protocol (SRTP) [RFC3711]. SRTP, and
+therefore swift, is considered ``a "bump in the stack" implementation
+which resides between the RTP application and the transport layer.
+[swift] intercepts RTP packets and then forwards an equivalent [swift]
+packet on the sending side, and intercepts [swift] packets and passes
+an equivalent RTP packet up the stack on the receiving side.''
+[RFC3711]
+
+In particular, to encode a swift datagram in an RTP packet, all the
+non-DATA messages of swift, such as HINT and HAVE, are postfixed to the
+RTP packet using the UDP encoding, and the content of DATA messages is
+sent in the payload field. Implementations MAY omit the RTP header for
+packets without payload. This construction allows the streaming
+application to use all of RTP's current features, and, with a
+modification to the Merkle tree hashing scheme (see below), meets
+swift's atomic datagram principle. The latter means that a receiving
+peer can autonomously verify the RTP packet as carrying correct
+content, thus preventing the spread of corrupt data (see requirement
+PPSP.SEC-REQ-4).
+
+The use of ACK messages for reliability is left as a choice of the
+application using PPSP.
+
+
+.ti 0
+6.3.1. Design
+
+6.3.1.1. Joining a Swarm
+
+To commence a PPSP download, a peer A must have the swarm ID of the
+stream and a list of one or more tracker contact points (e.g.
+host+port). The list of trackers is optional in the presence of a
+decentralized tracking mechanism. The swarm ID consists of the swift
+root hash of the content, which is divided into chunks (see
+Discussion).
+
+Peer A now registers with the PPSP tracker following the tracker
+protocol [I-D.ietf-ppsp-reqs] and receives the IP address and RTP port
+of peers already in the swarm, say B, C, and D. Peer A now sends an RTP
+packet containing a HANDSHAKE without channel information to B, C, and
+D. This serves as an end-to-end check that the peers are actually in
+the correct swarm. Optionally, A could include a HINT message in some
+RTP packets if it wants to start receiving content immediately. B and C
+respond with a HANDSHAKE and HAVE messages. D sends just a HANDSHAKE
+and omits HAVE messages as a way of choking A.
+
+
+6.3.1.2. Exchanging Chunks
+
+In response to B and C, A sends new RTP packets to B and C with HINTs
+for disjoint sets of chunks. B and C respond with the requested chunks
+in the payload and HAVE messages, updating their chunk availability.
+Upon receipt, A sends HAVEs for the chunks received and new HINT
+messages to B and C. When e.g. C finds that A obtained a chunk (from B)
+that C did not yet have, C's response includes a HINT for that chunk.
+
+D does not send HAVE messages; instead, if D decides to unchoke peer A,
+it sends an RTP packet with HAVE messages to inform A of its current
+availability. If B or C decide to choke A, they stop sending HAVE and
+DATA messages and A should then re-request from other peers. They may
+continue to send HINT messages, or exponentially slowing KEEPALIVE
+messages, such that A keeps sending them HAVE messages.
+
+Once A has received all content (video-on-demand use case) it stops
+sending messages to all other peers that have all content (a.k.a.
+seeders).
+
+
+6.3.1.3. Leaving a Swarm
+
+Peers can implicitly leave a swarm by ceasing to respond to messages.
+Sending peers should remove such peers from their current peer list.
+This mechanism works for both graceful and ungraceful leaves (i.e.,
+peer crashes or disconnects). When leaving gracefully, a peer should
+deregister from the tracker following the PPSP tracker protocol.
+
+More explicit graceful leaves could be implemented using RTCP. In
+particular, a peer could send an RTCP BYE on the RTCP port that is
+derivable from a peer's RTP port to all peers in its current peer list.
+However, to prevent malicious peers from sending BYEs, a form of peer
+authentication is required (e.g. using public keys as peer IDs
+[PERMIDS]).
+
+
+6.3.1.4. Discussion
+
+Using swift as an RTP profile requires a change to the content
+integrity protection scheme (see Sec. 3.5). The fields in the RTP
+header, such as the timestamp and PT fields, must be protected by the
+Merkle tree hashing scheme to prevent malicious alterations.
Therefore,
+the Merkle tree is no longer constructed from pure content chunks, but
+from the complete RTP packet for a chunk as it would be transmitted
+(minus the non-DATA swift messages). In other words, the hash of a leaf
+in the tree is the hash over the Authenticated Portion of the RTP
+packet as defined by SRTP, illustrated in the following figure
+(extended from [RFC3711]). There is no need for the RTP packets to be
+of fixed size, as the hashing scheme can deal with variable-sized
+leaves.
+
+.in 0
+    0                   1                   2                   3
+.br
+    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+<+
+.br
+   |V=2|P|X|  CC   |M|     PT      |       sequence number         | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+   |                           timestamp                           | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+   |           synchronization source (SSRC) identifier            | |
+.br
+   +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ |
+.br
+   |            contributing source (CSRC) identifiers             | |
+.br
+   |                             ....                              | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+   |                    RTP extension (OPTIONAL)                   | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+   |                          payload  ...                         | |
+.br
+   |                               +-------------------------------+ |
+.br
+   |                               | RTP padding   | RTP pad count | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+<+
+.br
+   ~           swift non-DATA messages (REQUIRED)                  ~ |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+   |         length of swift messages (REQUIRED)                   | |
+.br
+   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
+.br
+                                                                     |
+.br
+                                      Authenticated Portion  --------+
+.br
+
+     Figure: The format of an RTP-Swift packet.
+.in 3
+
+As a downside, with variable-sized payloads the automatic content size
+detection of Section 4 no longer works, so the content length MUST be
+given explicitly in the metadata. In addition, storage on disk is more
+complex with out-of-order, variable-sized packets. On the upside,
+carrying RTP over swift allows decryption-less caching.
+
+As with UDP, another matter is how much data is carried inside each
+packet. An important swift-specific factor here is the resulting number
+of hash calculations per second needed to verify chunks. Experiments
+should be conducted to ensure they are not excessive for, e.g., mobile
+hardware.
+
+At present, Peer IDs are not required in this design.
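+
+The hashing over variable-sized leaves can be sketched as follows
+(Python, illustration only). The draft does not spell out the padding
+rule here; the sketch assumes the convention, used elsewhere for
+swift's Merkle trees, that empty leaves carry an all-zeros hash and
+the parent of two empty hashes stays empty:
+.nf
+
+    import hashlib
+
+    EMPTY = b'\\x00' * 20   # all-zeros "empty" hash (assumption)
+
+    def root_hash(leaves):
+        """Merkle root over variable-sized leaves, e.g. the
+        Authenticated Portions of RTP packets. Each layer is padded
+        with empty hashes; with the assumed convention the result
+        equals that of a tree padded out to a power of two."""
+        layer = [hashlib.sha1(leaf).digest() for leaf in leaves]
+        while len(layer) > 1:
+            if len(layer) % 2:
+                layer.append(EMPTY)
+            layer = [EMPTY if l == r == EMPTY
+                     else hashlib.sha1(l + r).digest()
+                     for l, r in zip(layer[::2], layer[1::2])]
+        return layer[0]
+.fi
+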
+
+.ti 0
+6.3.2. PPSP Requirements
+
+6.3.2.1. Basic Requirements
+
+- PPSP.REQ-1: The swift PEX message can also be used as the basis for a
+tracker protocol, to be discussed elsewhere.
+
+- PPSP.REQ-2: This draft preserves the properties of RTP.
+
+- PPSP.REQ-3: This draft does not place requirements on peer IDs;
+IP+port is sufficient.
+
+- PPSP.REQ-4: The content is identified by its root hash
+(video-on-demand) or a public key (live streaming).
+
+- PPSP.REQ-5: The content is partitioned by the streaming application.
+
+- PPSP.REQ-6: Each chunk is identified by a bin number (and its
+cryptographic hash).
+
+- PPSP.REQ-7: The protocol is carried over UDP because RTP is.
+
+- PPSP.REQ-8: The protocol has been designed to allow meaningful data
+transfer between peers as soon as possible and to avoid unnecessary
+round-trips. It supports small and variable chunk sizes, and its
+content integrity protection enables wide-scale caching.
+
+
+6.3.2.2. Peer Protocol Requirements
+
+- PPSP.PP.REQ-1: A GET_HAVE message would have to be added to request
+which chunks are available from a peer, if the proposed push-based HAVE
+mechanism is not sufficient.
+
+- PPSP.PP.REQ-2: A set of HAVE messages satisfies this.
+
+- PPSP.PP.REQ-3: The PEX_REQ message satisfies this. Care should be
+taken with peer-address exchange in general, as the use of such hearsay
+is a risk for the protocol, as it may be exploited by malicious peers
+(as a DDoS attack mechanism). A secure tracking / peer sampling
+protocol like [PUPPETCAST] may be needed to make peer-address exchange
+safe.
+
+- PPSP.PP.REQ-4: HAVE messages convey current availability via a push
+model.
+
+- PPSP.PP.REQ-5: Bin numbering enables a compact representation of
+chunk availability.
+
+- PPSP.PP.REQ-6: A new PPSP-specific Peer Report message would have to
+be added to RTCP.
+
+- PPSP.PP.REQ-7: Transmission and chunk requests are integrated in this
+protocol.
+
+
+6.3.2.3. Security Requirements
+
+- PPSP.SEC.REQ-1: An access control mechanism like Closed Swarms
+[CLOSED] would have to be added.
+
+- PPSP.SEC.REQ-2: As RTP is carried verbatim over swift, RTP encryption
+can be used. Note that encrypting just the RTP part allows for caching
+servers that are part of the swarm but do not need access to the
+decryption keys; they just need access to the swift HASHes in the
+postfix to verify the packet's integrity.
+
+- PPSP.SEC.REQ-3: RTP encryption or IPsec [RFC4303] can be used if the
+swift messages must also be encrypted.
+
+- PPSP.SEC.REQ-4: The Merkle tree hashing scheme prevents the indirect
+spread of corrupt content, as peers will only forward chunks to others
+if their integrity checks out. Another protection mechanism is to not
+depend on hearsay (i.e., not to forward other peers' availability
+information), or to use it only when the information spread is
+self-certified by its subjects.
+
+Other attacks, such as a malicious peer claiming it has content but not
+replying, or wasting CPU and bandwidth at a receiving peer by sending
+packets where the DATA does not match the HASHes, are still possible.
+
+
+- PPSP.SEC.REQ-5: The Merkle tree hashing scheme allows a receiving
+peer to detect a malicious or faulty sender, which it can subsequently
+ignore. Spreading this knowledge to other peers so that they know about
+this bad behavior would, however, be hearsay.
+
+
+- PPSP.SEC.REQ-6: A risk in peer-to-peer streaming systems is that
+malicious peers launch an Eclipse [ECLIPSE] attack on the initial
+injectors of the content (in particular in live streaming). The attack
+tries to let the injector upload to just malicious peers, which then do
+not forward the content to others, thus stopping the distribution. An
+Eclipse attack could also be launched on an individual peer. Letting
+these injectors use only trusted trackers that provide true random
+samples of the population, or using a secure peer sampling service
+[PUPPETCAST], can help negate such an attack.
+
+
+- PPSP.SEC.REQ-7: swift supports decentralized tracking via PEX or
+additional mechanisms such as DHTs [SECDHTS], but self-certification of
+addresses is needed. Self-certification means, for example, that each
+peer has a public/private key pair [PERMIDS] and creates self-certified
+address changes that include the swarm ID and a timestamp, which are
+then exchanged among peers or stored in DHTs. See also the discussion
+of PPSP.PP.REQ-3 above. Content distribution can continue as long as
+there are peers that have it available.
+
+- PPSP.SEC.REQ-8: The verification of data via hashes obtained from a
+trusted source is well established in the BitTorrent protocol
+[BITTORRENT]. The proposed Merkle tree scheme is a secure extension of
+this idea. Self-certification and not relying on hearsay are other
+lessons learned from existing distributed systems.
+
+- PPSP.SEC.REQ-9: Swift has built-in content integrity protection via
+self-certified naming of content, see SEC.REQ-5 and Sec. 3.5.1.
+
+
+.ti 0
+6.4. HTTP (as PPSP)
+
+In this section we sketch how swift can be carried over HTTP [RFC2616]
+to form the PPSP running over TCP. The general idea is to encode a
+swift datagram in HTTP GET and PUT requests and their replies by
+transmitting all the non-DATA messages, such as HINTs and HAVEs, as
+headers and sending DATA messages in the body. This idea follows the
+atomic datagram principle for each request and reply. So a receiving
+peer can autonomously verify the message as carrying correct data, thus
+preventing the spread of corrupt data (see requirement PPSP.SEC-REQ-4).
+
+A problem with HTTP is that it is a client/server protocol. To overcome
+this problem, a peer A uses a PUT request instead of a GET request if
+peer B has indicated in a reply that it wants to retrieve a chunk from
+A. In cases where peer A is no longer interested in receiving requests
+from B (described below), B may need to establish a new HTTP connection
+to A to quickly download a chunk, instead of waiting for a convenient
+time when A sends another request. As an alternative design, two HTTP
+connections could always be used, but this is inefficient.
+
+.ti 0
+6.4.1. Design
+
+6.4.1.1. Joining a Swarm
+
+To commence a PPSP download, a peer A must have the swarm ID of the
+stream and a list of one or more tracker contact points, as above. The
+swarm ID, as earlier, consists of the swift root hash of the content,
+divided into chunks by the streaming application (e.g. fixed-size
+chunks of 1 kilobyte for video-on-demand).
+
+Peer A now registers with the PPSP tracker following the tracker
+protocol [I-D.ietf-ppsp-reqs] and receives the IP address and HTTP port
+of peers already in the swarm, say B, C, and D. Peer A now establishes
+persistent HTTP connections with B, C, D and sends GET requests with
+the Request-URI set to /. Optionally A could include a HINT message in
+some requests if it wants to start receiving content immediately. A
+HINT is encoded as a Range header with a new "bins" unit
+[RFC2616,$14.35].
+
+B and C respond with a 200 OK reply with header-encoded HAVE messages.
+A HAVE message is encoded as an extended Accept-Ranges header
+[RFC2616,$14.5] with the new bins unit and the possibility of listing
+the set of accepted bins. If no HINT/Range header was present in the
+request, the body of the reply is empty. D sends just a 200 OK reply
+and omits the HAVE/Accept-Ranges header as a way of choking A.
+
+6.4.1.2. Exchanging Chunks
+
+In response to B and C, A sends GET requests with Range headers,
+requesting disjoint sets of chunks. B and C respond with 206 Partial
+Content replies with the requested chunks in the body and Accept-Ranges
+headers, updating their chunk availability. The HASHes for the chunks
+are encoded in a new Content-Merkle header and the Content-Range is set
+to identify the chunk [RFC2616,$14.16]. A new "multipart-bin ranges"
+media type, equivalent to RFC 2616's "multipart/byteranges", may be
+used to transmit multiple chunks in one reply.
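+
+For illustration only, one request in this exchange could look as
+follows in Python. The draft names the "bins" range unit and the
+headers but does not pin down their exact syntax, so the header
+layout and the Request-URI scheme below are assumptions:
+.nf
+
+    import http.client
+
+    def hint_request(host, root_hash_hex, hint_bins, have_bins):
+        """One GET of Sec. 6.4.1.2: HINTs as a Range header with the
+        "bins" unit, HAVEs as an Accept-Ranges header. URI layout and
+        header syntax are assumed, not specified by this draft."""
+        conn = http.client.HTTPConnection(host)
+        conn.request('GET', '/' + root_hash_hex, headers={
+            'Range': 'bins=' + ','.join(map(str, hint_bins)),
+            'Accept-Ranges': 'bins=' + ','.join(map(str, have_bins)),
+        })
+        # expected: 206 Partial Content with the chunks in the body
+        # and a Content-Merkle header carrying the HASHes
+        return conn.getresponse()
+.fi
+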
+
+Upon receipt, A sends a new GET request with a HAVE/Accept-Ranges
+header for the chunks received and new HINT/Range headers to B and C.
+Now when e.g. C finds that A obtained a chunk (from B) that C did not
+yet have, C's response includes a HINT/Range for that chunk. In this
+case, A's next request to C is not a GET request, but a PUT request
+with the requested chunk sent in the body.
+
+Again working around the fact that HTTP is a client/server protocol,
+peer A periodically sends HEAD requests to peer D (which was virtually
+choking A) that serve as keepalives and may contain HAVE/Accept-Ranges
+headers. If D decides to unchoke peer A, it includes an Accept-Ranges
+header in the 200 OK reply to inform A of its current chunk
+availability.
+
+If B or C decide to choke A, they start responding with 204 No Content
+replies without HAVE/Accept-Ranges headers and A should then re-request
+from other peers. However, if their replies contain HINT/Range headers,
+A should keep on sending PUT requests with the desired data (another
+client/server workaround). If not, A should slowly send HEAD requests
+as keepalives and content availability updates.
+
+Once A has received all content (video-on-demand use case) it closes
+the persistent connections to all other peers that have all content
+(a.k.a. seeders).
+
+
+6.4.1.3. Leaving a Swarm
+
+Peers can explicitly leave a swarm by closing the connection. This
+mechanism works for both graceful and ungraceful leaves (i.e., peer
+crashes or disconnects). When leaving gracefully, a peer should
+deregister from the tracker following the PPSP tracker protocol.
+
+
+6.4.1.4. Discussion
+
+As mentioned earlier, this design suffers from the fact that HTTP is a
+client/server protocol. A solution where a peer establishes two HTTP
+connections with every other peer may be more elegant, but is
+inefficient. The mapping of swift messages to headers remains the same:
+
+    HINT = Range
+.br
+    HAVE = Accept-Ranges
+.br
+    HASH = Content-Merkle
+.br
+    PEX  = e.g. an extended Content-Location
+.br
+
+The Content-Merkle header should include some parameters to indicate
+the hash function and chunk size (e.g. SHA1 and 1K) used to build the
+Merkle tree.
+
+
+.ti 0
+6.4.2. PPSP Requirements
+
+6.4.2.1. Basic Requirements
+
+- PPSP.REQ-1: The HTTP-based BitTorrent tracker protocol [BITTORRENT]
+can be used as the basis for a tracker protocol, to be discussed
+elsewhere.
+
+- PPSP.REQ-2: This draft preserves the properties of HTTP, but extra
+mechanisms may be necessary to protect against faulty or malicious
+peers.
+
+- PPSP.REQ-3: This draft does not place requirements on peer IDs;
+IP+port is sufficient.
+
+- PPSP.REQ-4: The content is identified by its root hash
+(video-on-demand) or a public key (live streaming).
+
+- PPSP.REQ-5: The content is partitioned into chunks by the streaming
+application (see 6.4.1.1).
+
+- PPSP.REQ-6: Each chunk is identified by a bin number (and its
+cryptographic hash).
+
+- PPSP.REQ-7: The protocol is carried over TCP because HTTP is.
+
+
+6.4.2.2. Peer Protocol Requirements
+
+- PPSP.PP.REQ-1: A HEAD request can be used to find out which chunks
+are available from a peer; it returns the new Accept-Ranges header.
+
+- PPSP.PP.REQ-2: The new Accept-Ranges header satisfies this.
+
+- PPSP.PP.REQ-3: A GET with a Request-URI requesting the peers of a
+resource (e.g. //peers) would have to be added to request known peers
+from a peer, if the proposed push-based PEX/~Content-Location
+mechanism is not sufficient.
Care should be taken with peer-address
+exchange in general, as the use of such hearsay is a risk for the
+protocol, as it may be exploited by malicious peers (as a DDoS attack
+mechanism). A secure tracking / peer sampling protocol like
+[PUPPETCAST] may be needed to make peer-address exchange safe.
+
+
+- PPSP.PP.REQ-4: HAVE/Accept-Ranges headers convey current
+availability.
+
+- PPSP.PP.REQ-5: Bin numbering enables a compact representation of
+chunk availability.
+
+- PPSP.PP.REQ-6: A new PPSP-specific Peer-Report header would have to
+be added.
+
+- PPSP.PP.REQ-7: Transmission and chunk requests are integrated in this
+protocol.
+
+
+6.4.2.3. Security Requirements
+
+- PPSP.SEC.REQ-1: An access control mechanism like Closed Swarms
+[CLOSED] would have to be added.
+
+- PPSP.SEC.REQ-2: As swift is carried over HTTP, HTTPS encryption can
+be used instead. Alternatively, just the body could be encrypted. The
+latter allows for caching servers that are part of the swarm but do not
+need access to the decryption keys (they need access only to the swift
+HASHes in the headers to verify the packet's integrity).
+
+- PPSP.SEC.REQ-3: HTTPS encryption or the content encryption facilities
+of HTTP can be used.
+
+- PPSP.SEC.REQ-4: The Merkle tree hashing scheme prevents the indirect
+spread of corrupt content, as peers will only forward content to others
+if its integrity checks out. Another protection mechanism is to not
+depend on hearsay (i.e., not to forward other peers' availability
+information), or to use it only when the information spread is
+self-certified by its subjects.
+
+Other attacks, such as a malicious peer claiming it has content but not
+replying, or wasting CPU and bandwidth at a receiving peer by sending
+packets where the body does not match the HASH/Content-Merkle headers,
+are still possible.
+
+
+- PPSP.SEC.REQ-5: The Merkle tree hashing scheme allows a receiving
+peer to detect a malicious or faulty sender, which it can subsequently
+close its connection to and ignore. Spreading this knowledge to other
+peers so that they know about this bad behavior would, however, be
+hearsay.
+
+
+- PPSP.SEC.REQ-6: A risk in peer-to-peer streaming systems is that
+malicious peers launch an Eclipse [ECLIPSE] attack on the initial
+injectors of the content (in particular in live streaming). The attack
+tries to let the injector upload to just malicious peers, which then do
+not forward the content to others, thus stopping the distribution. An
+Eclipse attack could also be launched on an individual peer. Letting
+these injectors use only trusted trackers that provide true random
+samples of the population, or using a secure peer sampling service
+[PUPPETCAST], can help negate such an attack.
+
+
+- PPSP.SEC.REQ-7: swift supports decentralized tracking via PEX or
+additional mechanisms such as DHTs [SECDHTS], but self-certification of
+addresses is needed. Self-certification means, for example, that each
+peer has a public/private key pair [PERMIDS] and creates self-certified
+address changes that include the swarm ID and a timestamp, which are
+then exchanged among peers or stored in DHTs. See also the discussion
+of PPSP.PP.REQ-3 above. Content distribution can continue as long as
+there are peers that have it available.
+
+- PPSP.SEC.REQ-8: The verification of data via hashes obtained from a
+trusted source is well established in the BitTorrent protocol
+[BITTORRENT]. The proposed Merkle tree scheme is a secure extension of
+this idea. Self-certification and not relying on hearsay are other
+lessons learned from existing distributed systems.
+
+- PPSP.SEC.REQ-9: Swift has built-in content integrity protection via
+self-certified naming of content, see SEC.REQ-5 and Sec. 3.5.1.
+
+
+.ti 0
+7. Security Considerations
+
+Like any other network protocol, swift faces a common set of security
+challenges. An implementation must consider the possibility of buffer
+overruns, DoS attacks and manipulation (e.g. reflection attacks). Any
+guarantee of privacy seems unlikely, as the user exposes its IP address
+to its peers. A probable exception is the case of a user hidden behind
+a public NAT or proxy.
+
+
+.ti 0
+8. Extensibility
+
+.ti 0
+8.1. 32 bit vs 64 bit
+
+.nf
+While in principle the protocol supports bigger (>1TB) files, all the
+counters mentioned so far are 32-bit. This is an optimization: using
+\%64-bit numbers on the wire may cost ~2% of practical overhead. The
+64-bit version of every message has a type id of 64+t, e.g. type id 68
+(0x44) for the \%64-bit hash message:
+.nf
+    44 000000000000000E 01234567890ABCDEF1234567890ABCDEF1234567
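+
+.fi
+For illustration only (Python; the function name is ours), the 64+t
+shift for the example above can be encoded as follows:
+.nf
+
+    import struct
+
+    def hash64(bin64, hash20):
+        """64-bit HASH message (Sec. 8.1): type id 64 + 4 = 0x44,
+        an 8-byte big-endian bin number, then the 20-byte hash."""
+        return struct.pack('>BQ', 64 + 0x04, bin64) + hash20
+
+    assert hash64(0x0E, bytes(20)).hex().startswith(
+        '44' '000000000000000e')
+.fi
+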
+
+.ti 0
+8.2. IPv6
+
+.fi
+IPv6 versions of the PEX messages use the same 64+t type id shift as
+just mentioned.
+
+
+.ti 0
+8.3. Congestion Control Algorithms
+
+.fi
+The congestion control algorithm is left to the implementation and may
+even vary from peer to peer. Congestion control is entirely implemented
+by the sending peer; the receiver only provides clues, such as hints,
+acknowledgments and timestamps. In general, it is expected that servers
+would use TCP-like congestion control schemes such as classic AIMD or
+CUBIC [CUBIC]. End-user peers are expected to use weaker-than-TCP
+(less-than-best-effort) congestion control, such as [LEDBAT], to
+minimize seeding counter-incentives.
+
+
+.ti 0
+8.4. Piece Picking Algorithms
+
+Piece picking depends entirely on the receiving peer. The sending peer
+is made aware of preferred pieces by means of HINT messages. In some
+scenarios it may be beneficial to allow the sender to ignore those
+hints and send unrequested data.
+
+
+.ti 0
+8.5. Reciprocity Algorithms
+
+Reciprocity algorithms are the sole responsibility of the sending peer.
+Reciprocal intentions of the sender are not manifested in separate
+messages (unlike BitTorrent's CHOKE/UNCHOKE), as such declarations do
+not guarantee anything anyway (the "snubbing" syndrome).
+
+
+.ti 0
+8.6. Different crypto/hashing schemes
+
+.fi
+Should a flavor of swift need to use a different crypto scheme
+(e.g., SHA-256), a message type should be allocated for that. As the
+root hash is supplied in the handshake message, the crypto scheme in
+use will be known from the very beginning. As the root hash is the
+content's identifier, different crypto schemes cannot be mixed in the
+same swarm; different swarms may, however, distribute the same content
+using different crypto.
+
+
+.ti 0
+9. Rationale
+
+Historically, the Internet was based on end-to-end unicast and,
+considering the failure of multicast, mass content dissemination was
+addressed by other technologies, which ultimately boiled down to
+maintaining and coordinating distributed replicas. On the one hand,
+downloading from a nearby well-provisioned replica is somewhat faster
+and/or cheaper; on the other hand, it requires coordinating multiple
+parties (the data source, mirrors/CDN sites/peers, consumers). As the
+Internet progresses to richer and richer content, the overhead of
+peer/replica coordination becomes dwarfed by the mass of the download
+itself. Thus, the niche for multiparty transfers expands.
Still, the
+relevant current technologies remain tightly coupled to a single use
+case, or even to the infrastructure of a particular corporation. The
+mission of our project is to create a generic content-centric
+multiparty transport protocol to allow seamless, effortless data
+dissemination on the Net.
+
+              TABLE 1. Use cases.
+
+.br
+      |  mirror-based    peer-assisted         peer-to-peer
+.br
+------+----------------------------------------------------
+.br
+data  |  SunSITE         CacheLogic VelociX    BitTorrent
+.br
+VoD   |  YouTube         Azureus(+seedboxes)   SwarmPlayer
+.br
+live  |  Akamai Str.     Octoshape, Joost      PPlive
+.br
+
+.fi
+The protocol must be designed for maximum genericity, thus focusing on
+the very core of the mission, and contain no magic constants and no
+hardwired policies. Effectively, it is a set of messages that allow
+data to be securely retrieved from whatever sources are available, in
+parallel. Ideally, the protocol should be able to run over IP as an
+independent transport protocol. Practically, it must run over UDP and
+TCP.
+
+
+.ti 0
+9.1. Design Goals
+
+.fi
+The technical focus of the swift protocol is to find the simplest
+solution involving the minimum set of primitives that is still
+sufficient to implement all the targeted use cases (see Table 1) and
+suitable for use in general-purpose software and hardware (i.e. a web
+browser or a set-top box). The five design goals for the protocol are:
+
+.nf
+1. Embeddable kernel-ready protocol.
+2. Embrace real-time streaming, in- and out-of-order download.
+3. Have short warm-up times.
+4. Traverse NATs transparently.
+5. Be extensible, allow for a multitude of implementations over
+   diverse media, allow for drop-in pluggability.
+
+The objectives are referenced as (1)-(5) below.
+
+.fi
+The goal of embedding (1) means that the protocol must be ready to
+function as a regular transport protocol inside a set-top box, a mobile
+device, a browser and/or in kernel space. Thus, the protocol must have
+a light footprint, preferably lighter than TCP's, in spite of the
+necessity to support numerous ongoing connections as well as to
+constantly probe the network for new possibilities. The practical
+overhead for TCP is estimated at 10KB per connection [HTTP1MLN]; we aim
+at <1KB per connected peer. Also, the amount of code necessary for a
+basic implementation must be limited to 10KLoC of C. Otherwise, besides
+the resource considerations, maintaining and auditing the code might
+become prohibitively expensive.
+
+The support for all three basic use cases of real-time streaming,
+\%in-order download and out-of-order download (2) is necessary for the
+manifested goal of THE multiparty transport protocol, as no single use
+case dominates over the others.
+
+The objective of short warm-up times (3) is a matter of end-user
+experience: playback must start as soon as possible. Thus any
+unnecessary initialization roundtrips and warm-up cycles must be
+eliminated from the transport layer.
+
+.fi
+Transparent NAT traversal (4) is absolutely necessary, as at least 60%
+of today's users are hidden behind NATs. NATs severely affect
+connection patterns in P2P networks, thus impacting performance and
+fairness [MOLNAT,LUCNAT].
+
+The protocol must define a common message set (5) to be used by
+implementations; it must not hardwire any magic constants, algorithms
+or schemes beyond that. For example, an implementation is free to use
+its own congestion control, connection rotation or reciprocity
+algorithms. Still, the protocol must enable such algorithms by
+supplying sufficient information.
For example,
+trackerless peer discovery needs peer exchange messages, and scavenger
+congestion control may need timestamped acknowledgments, etc.
+
+
+.ti 0
+9.2. Not TCP
+
+.fi
+To a large extent, swift's design is defined by the cornerstone
+decision to get rid of TCP and not to reinvent any TCP-like transports
+on top of UDP or otherwise. Requirements (1), (4) and (5) make TCP a
+bad choice due to its high per-connection footprint, its complex and
+less reliable NAT traversal and its fixed, predefined congestion
+control algorithms. Besides that, an important consideration is that no
+block of TCP functionality turns out to be useful for the general case
+of swarming downloads. Namely,
+.nf
+ 1. in-order delivery is less useful as peer-to-peer protocols
+    often employ out-of-order delivery themselves and, in either
+    case, \%out-of-order data can still be stored;
+ 2. reliable delivery/retransmissions are not useful because
+    the same data might be requested from different sources; as
+    in-order delivery is not required, packet losses might be
+    patched up lazily, without stopping the flow of data;
+ 3. flow control is not necessary as the receiver is much less
+    likely to be saturated with data, and even if it is, that
+    situation is perfectly detected by the congestion control;
+ 4. TCP congestion control is less useful as custom congestion
+    control is often needed [LEDBAT].
+In general, TCP is built and optimized for a different use case than
+we have with swarming downloads. The abstraction of a "data pipe"
+orderly delivering some stream of bytes from one peer to another turns
+out to be irrelevant. In even more general terms, TCP supports the
+abstraction of pairwise _conversations_, while we need a
+content-centric protocol built around the abstraction of a cloud of
+participants disseminating the same _data_ in any way and order that is
+convenient to them.
+
+.fi
+Thus, the choice is to design a protocol that runs on top of unreliable
+datagrams. Instead of reimplementing TCP, we create a \%datagram-based
+protocol, completely dropping the sequential data stream abstraction.
+Removing unnecessary features of TCP makes it easier both to implement
+the protocol and to verify it; numerous TCP vulnerabilities were caused
+by the complexity of the protocol's state machine. Still, we reserve
+the possibility of running swift on top of TCP or HTTP.
+
+Pursuing the maxim of making things as simple as possible but not
+simpler, we fit the protocol into the constraints of the transport
+layer by dropping all of a transmission's technical metadata except for
+the content's root hash (compare that to the metadata files used in
+BitTorrent). The elimination of technical metadata is achieved through
+the use of Merkle [MERKLE,ABMRKL] hash trees, exclusively single-file
+transfers and other techniques. As a result, a transfer is identified
+and bootstrapped by its root hash only.
+
+.fi
+To avoid the usual layering of positive/negative acknowledgment
+mechanisms, we introduce a scale-invariant acknowledgment system (see
+Sec. 4.4). The system allows for aggregation and a variable level of
+detail in requesting, announcing and acknowledging data, and serves
+\%in-order and out-of-order retrieval with equal ease. Besides the
+protocol's footprint, we also aim at lowering the size of a minimal
+useful interaction. Once a single datagram is received, it must be
+checked for data integrity, and then either dropped or accepted,
+consumed and relayed.
+
+
+
+.ti 0
+9.3.
Generic Acknowledgments + +.nf +Generic acknowledgments came out of the need to simplify the +data addressing/requesting/acknowledging mechanics, which tends +to become overly complex and multilayered with the conventional +approach. Take the BitTorrent+TCP tandem for example: + +1. The basic data unit is a byte of content in a file. +2. BitTorrent's highest-level unit is a "torrent", physically a +byte range resulting from concatenation of content files. +3. A torrent is divided into "pieces", typically about a thousand +of them. Pieces are used to communicate progress to other +peers. Pieces are also basic data integrity units, as the torrent's +metadata includes a SHA1 hash for every piece. +4. The actual data transfers are requested and made in 16KByte +units, named "blocks" or chunks. +5. Still, one layer lower, TCP also operates with bytes and byte +offsets which are totally different from the torrent's bytes and +offsets, as TCP considers cumulative byte offsets for all content +sent by a connection, be it data, metadata or commands. +6. Finally, another layer lower, IP transfers independent datagrams +(typically around 1.5 kilobyte), which TCP then reassembles into +continuous streams. + +Obviously, such addressing schemes need lots of mappings; from +piece number and block to file(s) and offset(s) to TCP sequence +numbers to the actual packets and the other way around. Lots of +complexity is introduced by mismatch of bounds: packet bounds are +different from file, block or hash/piece bounds. The picture is +typical for a codebase which was historically layered. + +To simplify this aspect, we employ a generic content addressing +scheme based on binary intervals, or "bins" for short. + + + +.ti 0 +Acknowledgements + +Arno Bakker and Victor Grishchenko are partially supported by the +P2P-Next project (http://www.p2p-next.org/), a research project +supported by the European Community under its 7th Framework Programme +(grant agreement no. 216217). The views and conclusions contained +herein are those of the authors and should not be interpreted as +necessarily representing the official policies or endorsements, +either expressed or implied, of the P2P-Next project or the European +Commission. + +.fi +The swift protocol was designed by Victor Grishchenko at Technische Universiteit Delft. +The authors would like to thank the following people for their +contributions to this draft: Mihai Capota, Raul Jiminez, Flutra Osmani, +Riccardo Petrocco, Johan Pouwelse, and Raynor Vliegendhart. + + +.ti 0 +References + +.nf +.in 0 +[RFC2119] Key words for use in RFCs to Indicate Requirement Levels +[HTTP1MLN] Richard Jones. "A Million-user Comet Application with + Mochiweb", Part 3. http://www.metabrew.com/article/ + \%a-million-user-comet-application-with-mochiweb-part-3 +[MOLNAT] J.J.D. Mol, J.A. Pouwelse, D.H.J. Epema and H.J. Sips: + \%"Free-riding, Fairness, and Firewalls in P2P File-Sharing" + Proc. Eighth International Conference on Peer-to-Peer Computing + (P2P '08), Aachen, Germany, 8-11 Sept. 2008, pp. 301 - 310. +[LUCNAT] L. D'Acunto and M. Meulpolder and R. Rahman and J.A. + Pouwelse and H.J. Sips. "Modeling and Analyzing the Effects + of Firewalls and NATs in P2P Swarming Systems". In Proc. of + IEEE IPDPS (HotP2P), Atlanta, USA, April 23, 2010. +[BINMAP] V. Grishchenko, J. Pouwelse: "Binmaps: hybridizing bitmaps + and binary trees". Technical Report PDS-2011-005, Parallel and + Distributed Systems Group, Fac. 
of Electrical Engineering, + Mathematics, and Computer Science, Delft University of Technology, + The Netherlands, April 2009. +[SNP] B. Ford, P. Srisuresh, D. Kegel: "Peer-to-Peer Communication + Across Network Address Translators", + http://www.brynosaurus.com/pub/net/p2pnat/ +[FIPS180-2] + Federal Information Processing Standards Publication 180-2: + "Secure Hash Standard" 2002 August 1. +[MERKLE] Merkle, R. "Secrecy, Authentication, and Public Key Systems", + Ph.D. thesis, Dept. of Electrical Engineering, Stanford University, + CA, USA, 1979. pp 40-45. +[ABMRKL] Arno Bakker: "Merkle hash torrent extension", BitTorrent + Enhancement Proposal 30, Mar 2009. + http://bittorrent.org/beps/bep_0030.html +[CUBIC] Injong Rhee, and Lisong Xu: "CUBIC: A New TCP-Friendly + \%High-Speed TCP Variant", Proc. Third International Workshop + on Protocols for Fast Long-Distance Networks (PFLDnet), Lyon, + France, Feb 2005. +[LEDBAT] S. Shalunov et al. "Low Extra Delay Background Transport + (LEDBAT)", IETF Internet-Draft draft-ietf-ledbat-congestion + (work in progress), Oct 2011. + \%http://datatracker.ietf.org/doc/draft-ietf-ledbat-congestion/ +[TIT4TAT] Bram Cohen: "Incentives Build Robustness in BitTorrent", + Proc. 1st Workshop on Economics of Peer-to-Peer Systems, Berkeley, + CA, USA, Jun 2003. +[BITTORRENT] B. Cohen, "The BitTorrent Protocol Specification", + February 2008, \%http://www.bittorrent.org/beps/bep_0003.html +[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. + Jacobson, "RTP: A Transport Protocol for Real-Time + Applications", STD 64, RFC 3550, July 2003. +[RFC3711] M. Baugher, D. McGrew, M. Naslund, E. Carrara, K. Norrman, + "The Secure Real-time Transport Protocol (SRTP), RFC 3711, March + 2004. +[RFC5389] Rosenberg, J., Mahy, R., Matthews, P., and D. Wing, + "Session Traversal Utilities for NAT (STUN)", RFC 5389, October 2008. +[RFC2616] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, + P. Leach, T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", + RFC2616, June 1999. +[I-D.ietf-ppsp-reqs] Zong, N., Zhang, Y., Pascual, V., Williams, C., + and L. Xiao, "P2P Streaming Protocol (PPSP) Requirements", + draft-ietf-ppsp-reqs-05 (work in progress), October 2011. +[PPSPCHART] M. Stiemerling et al. "Peer to Peer Streaming Protocol (ppsp) + Description of Working Group" + \%http://datatracker.ietf.org/wg/ppsp/charter/ +[PERMIDS] A. Bakker et al. "Next-Share Platform M8--Specification + Part", App. C. P2P-Next project deliverable D4.0.1 (revised), + June 2009. + \%http://www.p2p-next.org/download.php?id=E7750C654035D8C2E04D836243E6526E +[PUPPETCAST] A. Bakker and M. van Steen. "PuppetCast: A Secure Peer + Sampling Protocol". Proceedings 4th Annual European Conference on + Computer Network Defense (EC2ND'08), pp. 3-10, Dublin, Ireland, + 11-12 December 2008. +[CLOSED] N. Borch, K. Michell, I. Arntzen, and D. Gabrijelcic: "Access + control to BitTorrent swarms using closed swarms". In Proceedings + of the 2010 ACM workshop on Advanced video streaming techniques + for peer-to-peer networks and social networking (AVSTP2P '10). + ACM, New York, NY, USA, 25-30. + \%http://doi.acm.org/10.1145/1877891.1877898 +[ECLIPSE] E. Sit and R. Morris, "Security Considerations for + Peer-to-Peer Distributed Hash Tables", IPTPS '01: Revised Papers + from the First International Workshop on Peer-to-Peer Systems, pp. + 261-269, Springer-Verlag, London, UK, 2002. +[SECDHTS] G. Urdaneta, G. Pierre, M. van Steen, "A Survey of DHT + Security Techniques", ACM Computing Surveys, vol. 
43(2), June 2011. +[SWIFTIMPL] V. Grishchenko, et al. "Swift M40 reference implementation", + \%http://swarmplayer.p2p-next.org/download/Next-Share-M40.tar.bz2 + (subdirectory Next-Share/TUD/swift-trial-r2242/), July 2011. +[CCNWIKI] http://en.wikipedia.org/wiki/Content-centric_networking +[HAC01] A.J. Menezes, P.C. van Oorschot and S.A. Vanstone. "Handbook of + Applied Cryptography", CRC Press, October 1996 (Fifth Printing, + August 2001). +[JIM11] R. Jimenez, F. Osmani, and B. Knutsson. "Sub-Second Lookups on + a Large-Scale Kademlia-Based Overlay". 11th IEEE International + Conference on Peer-to-Peer Computing 2011, Kyoto, Japan, Aug. 2011 + + +.ti 0 +Authors' addresses + +.in 3 +A. Bakker +Technische Universiteit Delft +Department EWI/ST/PDS +Room HB 9.160 +Mekelweg 4 +2628CD Delft +The Netherlands + +Email: arno@cs.vu.nl + +.ce 0 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.txt tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.txt --- tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.txt 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/draft-ietf-ppsp-peer-protocol-00.txt 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,2240 @@ + + + + +PPSP A. Bakker +Internet-Draft TU Delft + +Expires: June 21, 2012 December 19, 2011 + + Peer-to-Peer Streaming Protocol (PPSP) + + +Abstract + + The Generic Multiparty Protocol (swift) is a peer-to-peer based + transport protocol for content dissemination. It can be used for + streaming on-demand and live video content, as well as conventional + downloading. In swift, the clients consuming the content participate + in the dissemination by forwarding the content to other clients via a + mesh-like structure. It is a generic protocol which can run directly + on top of UDP, TCP, HTTP or as a RTP profile. Features of swift are + short time-till-playback and extensibility. Hence, it can use + different mechanisms to prevent freeriding, and work with different + peer discovery schemes (centralized trackers or Distributed Hash + Tables). Depending on the underlying transport protocol, swift can + also use different congestion control algorithms, such as LEDBAT, and + offer transparent NAT traversal. Finally, swift maintains only a + small amount of state per peer and detects malicious modification of + content. This documents describes swift and how it satisfies the + requirements for the IETF Peer-to-Peer Streaming Protocol (PPSP) + Working Group's peer protocol. + + +Status of this memo + + This Internet-Draft is submitted to IETF in full conformance with the + provisions of BCP 78 and BCP 79. + + Internet-Drafts are working documents of the Internet Engineering + Task Force (IETF), its areas, and its working groups. Note that + other groups may also distribute working documents as Internet- + Drafts. + + Internet-Drafts are draft documents valid for a maximum of six months + and may be updated, replaced, or obsoleted by other documents at any + time. It is inappropriate to use Internet-Drafts as reference + material or to cite them other than as "work in progress." + + The list of current Internet-Drafts can be accessed at + http://www.ietf.org/ietf/1id-abstracts.txt. + + The list of Internet-Draft Shadow Directories can be accessed at + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 1] + +Internet-Draft swift December 19, 2011 + + + http://www.ietf.org/shadow.html. 
+ + + Copyright (c) 2011 IETF Trust and the persons identified as the + document authors. All rights reserved. + + This document is subject to BCP 78 and the IETF Trust's Legal + Provisions Relating to IETF Documents + (http://trustee.ietf.org/license-info) in effect on the date of + publication of this document. Please review these documents + carefully, as they describe your rights and restrictions with respect + to this document. Code Components extracted from this document must + include Simplified BSD License text as described in Section 4.e of + the Trust Legal Provisions and are provided without warranty as + described in the Simplified BSD License. + +Table of Contents + + 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 + 1.1. Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . 3 + 1.2. Conventions Used in This Document . . . . . . . . . . . . . 4 + 1.3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . 5 + 2. Overall Operation . . . . . . . . . . . . . . . . . . . . . . . 6 + 2.1. Joining a Swarm . . . . . . . . . . . . . . . . . . . . . . 6 + 2.2. Exchanging Chunks . . . . . . . . . . . . . . . . . . . . . 6 + 2.3. Leaving a Swarm . . . . . . . . . . . . . . . . . . . . . . 7 + 3. Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 + 3.1. HANDSHAKE . . . . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3. HAVE . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3.1. Bin Numbers . . . . . . . . . . . . . . . . . . . . . . 8 + 3.3.2. HAVE Message . . . . . . . . . . . . . . . . . . . . . 9 + 3.4. ACK . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 + 3.5. DATA and HASH . . . . . . . . . . . . . . . . . . . . . . . 10 + 3.5.1. Merkle Hash Tree . . . . . . . . . . . . . . . . . . . 10 + 3.5.2. Content Integrity Verification . . . . . . . . . . . . 11 + 3.5.3. The Atomic Datagram Principle . . . . . . . . . . . . . 11 + 3.5.4. DATA and HASH Messages . . . . . . . . . . . . . . . . 12 + 3.6. HINT . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 + 3.7. Peer Address Exchange and NAT Hole Punching . . . . . . . . 13 + 3.8. KEEPALIVE . . . . . . . . . . . . . . . . . . . . . . . . . 14 + 3.9. VERSION . . . . . . . . . . . . . . . . . . . . . . . . . . 14 + 3.10. Conveying Peer Capabilities . . . . . . . . . . . . . . . 14 + 3.11. Directory Lists . . . . . . . . . . . . . . . . . . . . . 14 + 4. Automatic Detection of Content Size . . . . . . . . . . . . . . 14 + 4.1. Peak Hashes . . . . . . . . . . . . . . . . . . . . . . . . 15 + 4.2. Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 16 + 5. Live streaming . . . . . . . . . . . . . . . . . . . . . . . . 17 + 6. Transport Protocols and Encapsulation . . . . . . . . . . . . . 17 + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 2] + +Internet-Draft swift December 19, 2011 + + + 6.1. UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 + 6.1.1. Chunk Size . . . . . . . . . . . . . . . . . . . . . . 17 + 6.1.2. Datagrams and Messages . . . . . . . . . . . . . . . . 18 + 6.1.3. Channels . . . . . . . . . . . . . . . . . . . . . . . 18 + 6.1.4. HANDSHAKE and VERSION . . . . . . . . . . . . . . . . . 19 + 6.1.5. HAVE . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.6. ACK . . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.7. HASH . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.8. DATA . . . . . . . . . . . . . . . . . . . . . . . . . 20 + 6.1.9. KEEPALIVE . . . . . . . . . . . . . . . . . . . . . . . 
20 + 6.1.10. Flow and Congestion Control . . . . . . . . . . . . . 21 + 6.2. TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 + 6.3. RTP Profile for PPSP . . . . . . . . . . . . . . . . . . . 21 + 6.3.1. Design . . . . . . . . . . . . . . . . . . . . . . . . 22 + 6.3.2. PPSP Requirements . . . . . . . . . . . . . . . . . . . 24 + 6.4. HTTP (as PPSP) . . . . . . . . . . . . . . . . . . . . . . 27 + 6.4.1. Design . . . . . . . . . . . . . . . . . . . . . . . . 27 + 6.4.2. PPSP Requirements . . . . . . . . . . . . . . . . . . . 29 + 7. Security Considerations . . . . . . . . . . . . . . . . . . . . 32 + 8. Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . 32 + 8.1. 32 bit vs 64 bit . . . . . . . . . . . . . . . . . . . . . 32 + 8.2. IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 + 8.3. Congestion Control Algorithms . . . . . . . . . . . . . . . 32 + 8.4. Piece Picking Algorithms . . . . . . . . . . . . . . . . . 33 + 8.5. Reciprocity Algorithms . . . . . . . . . . . . . . . . . . 33 + 8.6. Different crypto/hashing schemes . . . . . . . . . . . . . 33 + 9. Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 + 9.1. Design Goals . . . . . . . . . . . . . . . . . . . . . . . 34 + 9.2. Not TCP . . . . . . . . . . . . . . . . . . . . . . . . . 35 + 9.3. Generic Acknowledgments . . . . . . . . . . . . . . . . . 36 + Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . 37 + References . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 + Authors' addresses . . . . . . . . . . . . . . . . . . . . . . . . 39 + + + + +1. Introduction + +1.1. Purpose + + This document describes the Generic Multiparty Protocol (swift), + designed from the ground up for the task of disseminating the same + content to a group of interested parties. Swift supports streaming + on-demand and live video content, as well as conventional + downloading, thus covering today's three major use cases for content + distribution. To fulfil this task, clients consuming the content are + put on equal footing with the servers initially providing the content + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 3] + +Internet-Draft swift December 19, 2011 + + + to create a peer-to-peer system where everyone can provide data. Each + peer connects to a random set of other peers resulting in a mesh-like + structure. + + Swift uses a simple method of naming content based on self- + certification. In particular, content in swift is identified by a + single cryptographic hash that is the root hash in a Merkle hash tree + calculated recursively from the content [ABMRKL]. This self- + certifying hash tree allows every peer to directly detect when a + malicious peer tries to distribute fake content. It also ensures only + a small amount of information is needed to start a download (just the + root hash and some peer addresses). + + Swift uses a novel method of addressing chunks of content called "bin + numbers". Bin numbers allow the addressing of a binary interval of + data using a single integer. This reduces the amount of state that + needs to be recorded per peer and the space needed to denote + intervals on the wire, making the protocol light-weight. In general, + this numbering system allows swift to work with simpler data + structures, e.g. to use arrays instead of binary trees, thus reducing + complexity. + + Swift is a generic protocol which can run directly on top of UDP, + TCP, HTTP, or as a layer below RTP, similar to SRTP [RFC3711]. 
As
+   such, swift defines a common set of messages that make up the
+   protocol, which can have different representations on the wire
+   depending on the lower-level protocol used. When the lower-level
+   transport is UDP, swift can also use different congestion control
+   algorithms and facilitate NAT traversal.
+
+   In addition, swift is extensible in the mechanisms it uses to promote
+   client contribution and prevent freeriding, that is, how to deal with
+   peers that only download content but never upload to others.
+   Furthermore, it can work with different peer discovery schemes, such
+   as centralized trackers or fast Distributed Hash Tables [JIM11].
+
+   This document describes not only the swift protocol but also how it
+   satisfies the requirements for the IETF Peer-to-Peer Streaming
+   Protocol (PPSP) Working Group's peer protocol [PPSPCHART,I-D.ietf-
+   ppsp-reqs]. A reference implementation of swift over UDP is available
+   [SWIFTIMPL].
+
+
+1.2. Conventions Used in This Document
+
+   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+   document are to be interpreted as described in [RFC2119].
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 4]
+
+Internet-Draft                  swift                  December 19, 2011
+
+
+1.3. Terminology
+
+   message
+      The basic unit of swift communication. A message will have
+      different representations on the wire depending on the transport
+      protocol used. Messages are typically multiplexed into a
+      datagram for transmission.
+
+   datagram
+      A sequence of messages that is offered as a unit to the
+      underlying transport protocol (UDP, etc.). The datagram is
+      swift's Protocol Data Unit (PDU).
+
+   content
+      Either a live transmission, a pre-recorded multimedia asset, or
+      a file.
+
+   bin
+      A number denoting a specific binary interval of the content
+      (i.e., one or more consecutive chunks).
+
+   chunk
+      The basic unit in which the content is divided, e.g. a block of
+      N kilobytes.
+
+   hash
+      The result of applying a cryptographic hash function, more
+      specifically a modification detection code (MDC) [HAC01], such
+      as SHA1 [FIPS180-2], to a piece of data.
+
+   root hash
+      The root in a Merkle hash tree calculated recursively from the
+      content.
+
+   swarm
+      A group of peers participating in the distribution of the same
+      content.
+
+   swarm ID
+      Unique identifier for a swarm of peers, in swift the root hash
+      of the content (video-on-demand, download) or a public key (live
+      streaming).
+
+   tracker
+      An entity that records the addresses of peers participating in a
+      swarm, usually for a set of swarms, and makes this membership
+      information available to other peers on request.
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 5]
+
+Internet-Draft                  swift                  December 19, 2011
+
+
+   choking
+      When a peer A is choking peer B, it means that A is currently not
+      willing to accept requests for content from B.
+
+
+2. Overall Operation
+
+   The basic unit of communication in swift is the message. Multiple
+   messages are multiplexed into a single datagram for transmission. A
+   datagram (and hence the messages it contains) will have different
+   representations on the wire depending on the transport protocol used
+   (see Sec. 6).
+
+
+2.1. Joining a Swarm
+
+   Consider a peer A that wants to download a certain content asset. To
+   commence a swift download, peer A must have the swarm ID of the
+   content and a list of one or more tracker contact points (e.g.
+   host+port).
The list of trackers is optional in the presence of a
+   decentralized tracking mechanism. The swarm ID consists of the swift
+   root hash of the content (video-on-demand, downloading) or a public
+   key (live streaming).
+
+   Peer A now registers with the tracker following e.g. the PPSP tracker
+   protocol [I-D.ietf-ppsp-reqs] and receives the IP address and port of
+   peers already in the swarm, say B, C, and D. Peer A now sends a
+   datagram containing a HANDSHAKE message to B, C, and D. This message
+   serves as an end-to-end check that the peers are actually in the
+   correct swarm, and contains the root hash of the swarm. Peers B and C
+   respond with datagrams containing a HANDSHAKE message and one or more
+   HAVE messages. A HAVE message conveys (part of) the chunk
+   availability of a peer and thus contains a bin number that denotes
+   what chunks of the content peer B, resp. C have. Peer D sends a
+   datagram with just a HANDSHAKE and omits HAVE messages as a way of
+   choking A.
+
+2.2. Exchanging Chunks
+
+   In response to B and C, A sends new datagrams to B and C containing
+   HINT messages. A HINT or request message indicates the chunks that a
+   peer wants to download, and contains a bin number. The HINT messages
+   to B and C refer to disjunct sets of chunks. B and C respond with
+   datagrams containing HASH, HAVE and DATA messages. The HASH messages
+   contain all cryptographic hashes that peer A needs to verify the
+   integrity of the content chunk sent in the DATA message, using the
+   content's root hash as trusted anchor, see Sec. 3.5. Using these
+   hashes peer A verifies that the chunks received from B and C are
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                 [Page 6]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   correct. It also updates the chunk availability of B and C using the
+   information in the received HAVE messages.
+
+   After processing, A sends a datagram containing HAVE messages for the
+   chunks it just received to all its peers. In the datagram to B and C
+   it includes an ACK message acknowledging the receipt of the chunks,
+   and adds HINT messages for new chunks. ACK messages are not used when
+   a reliable transport protocol is used. When e.g. C finds that A
+   obtained a chunk (from B) that C did not yet have, C's next datagram
+   includes a HINT for that chunk.
+
+   Peer D does not send HAVE messages to A when it downloads chunks from
+   other peers, until D decides to unchoke peer A. In that case, it
+   sends a datagram with HAVE messages to inform A of its current
+   availability. If B or C decide to choke A they stop sending HAVE and
+   DATA messages and A should then rerequest from other peers. They may
+   continue to send HINT messages, or periodic KEEPALIVE messages such
+   that A keeps sending them HAVE messages.
+
+   Once peer A has received all content (video-on-demand use case) it
+   stops sending messages to all other peers that have all content
+   (a.k.a. seeders). Peer A MAY also contact the tracker or another
+   source again to obtain more peer addresses.
+
+
+2.3. Leaving a Swarm
+
+   Depending on the transport protocol used, peers should either use
+   explicit leave messages or implicitly leave a swarm by ceasing to
+   respond to messages. Peers that learn about the departure should
+   remove the departed peer from their current peer lists. The
+   implicit-leave mechanism works for both graceful and ungraceful
+   leaves (i.e., peer crashes or disconnects). When leaving gracefully,
+   a peer should deregister from the tracker following the (PPSP)
+   tracker protocol.
3. Messages
+
+   In general, no error codes or responses are used in the protocol;
+   absence of any response indicates an error. Invalid messages are
+   discarded.
+
+   For the sake of simplicity, one swarm of peers always deals with one
+   content asset (e.g. file) only. Retrieval of large collections of
+   files is done by retrieving a directory list file and then
+   recursively retrieving files, which might also turn out to be
+   directory lists, as described in Sec. 3.11.
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                 [Page 7]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+3.1. HANDSHAKE
+
+   As an end-to-end check that the peers are actually in the correct
+   swarm, the initiating peer and the addressed peer SHOULD send a
+   HANDSHAKE message in the first datagrams they exchange. The only
+   payload of the HANDSHAKE message is the root hash of the content.
+
+   After the handshakes are exchanged, the initiator knows that the peer
+   really responds. Hence, the second datagram the initiator sends MAY
+   already contain some heavy payload. To minimize the number of
+   initialization roundtrips, implementations MAY dispense with the
+   HANDSHAKE message. To the same end, the first two datagrams exchanged
+   MAY also contain some minor payload, e.g. HAVE messages to indicate
+   the current progress of a peer or a HINT (see Sec. 3.6).
+
+
+3.3. HAVE
+
+   The HAVE message is used to convey which chunks a peer has
+   available, expressed in a new content addressing scheme called "bin
+   numbers".
+
+3.3.1. Bin Numbers
+
+   Swift employs a generic content addressing scheme based on binary
+   intervals ("bins" in short). The smallest interval is a chunk (e.g.
+   an N-kilobyte block), the top interval is the complete 2**63 range.
+   A novel addition to the classical scheme are "bin numbers", a scheme
+   of numbering binary intervals which lays them out nicely into a
+   vector. Consider a chunk interval of width W. To derive the bin
+   numbers of the complete interval and the subintervals, a minimal
+   balanced binary tree is built that is at least W chunks wide at the
+   base. The leaves from left-to-right correspond to the chunks 0..W-1
+   in the interval, and have bin number I*2 where I is the index of the
+   chunk (counting beyond W-1 to balance the tree). The higher level
+   nodes P in the tree have bin number
+
+      binP = (binL + binR) / 2
+
+   where binL is the bin of node P's left-hand child and binR is the bin
+   of node P's right-hand child. Given that each node in the tree
+   represents a subinterval of the original interval, each such
+   subinterval is now addressable by a bin number, a single integer. The
+   bin number tree of an interval of width W=8 looks like this:
+
+
+
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                 [Page 8]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+                                7
+                              /   \
+                            /       \
+                          /           \
+                        /               \
+                       3                 11
+                     /   \             /    \
+                    /     \           /      \
+                   /       \         /        \
+                  1         5       9          13
+                 / \       / \     / \        /  \
+                0   2     4   6   8   10    12    14
+
+   So bin 7 represents the complete interval, 3 represents the interval
+   of chunks 0..3 and 1 represents the interval of chunks 0 and 1. The
+   special numbers 0xFFFFFFFF (32-bit) or 0xFFFFFFFFFFFFFFFF (64-bit)
+   stand for an empty interval, and 0x7FFF...FFF stands for
+   "everything".
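+
+   As a non-normative illustration, the bin arithmetic above can be
+   sketched in a few lines of Python (the helper names are ours, not
+   part of the protocol):
+
+      # A bin b at layer L covers 2**L chunks; leaves are at layer 0.
+      def layer(b):
+          # The layer equals the number of trailing one-bits of b.
+          return ((b + 1) & -(b + 1)).bit_length() - 1
+
+      def parent(b):
+          l = layer(b)
+          return (b & ~(1 << (l + 1))) | (1 << l)
+
+      def children(b):
+          l = layer(b)
+          assert l > 0, "leaf bins have no children"
+          return b - (1 << (l - 1)), b + (1 << (l - 1))
+
+      def chunk_range(b):
+          # First chunk covered by b, and the number of chunks covered.
+          l = layer(b)
+          return (b - ((1 << l) - 1)) // 2, 1 << l
+
+      assert parent(0) == parent(2) == 1
+      assert children(7) == (3, 11)
+      assert chunk_range(11) == (4, 4)   # bin 11 covers chunks 4..7
+
+   Note that no tree data structure is needed to navigate between a
+   bin, its parent and its children, which is what allows
+   implementations to use flat arrays instead of binary trees.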
3.3.2. HAVE Message
+
+   When a receiving peer has successfully checked the integrity of a
+   chunk or interval of chunks, it MUST send a HAVE message to all peers
+   it wants to interact with. The latter allows the HAVE message to be
+   used as a method of choking. The HAVE message MUST contain the bin
+   number of the biggest complete interval of all chunks the receiver
+   has received and checked so far that fully includes the interval of
+   chunks just received. So the bin number MUST denote at least the
+   interval received, but the receiver is supposed to aggregate and
+   acknowledge bigger bins, when possible.
+
+   As a result, every single chunk is acknowledged a logarithmic number
+   of times. That provides some necessary redundancy of acknowledgments
+   and sufficiently compensates for unreliable transport protocols.
+
+   To record which chunks a peer has in the state that an implementation
+   keeps for each peer, an implementation MAY use the "binmap" data
+   structure, which is a hybrid of a bitmap and a binary tree, discussed
+   in detail in [BINMAP].
+
+
+3.4. ACK
+
+   When swift is run over an unreliable transport protocol, an
+   implementation MAY choose to use ACK messages to acknowledge received
+   data. When a receiving peer has successfully checked the integrity of
+   a chunk or interval of chunks C it MUST send an ACK message containing
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                 [Page 9]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   the bin number of its biggest complete interval covering C to the
+   sending peer (see HAVE). To facilitate delay-based congestion
+   control, an ACK message contains a timestamp.
+
+
+3.5. DATA and HASH
+
+   The DATA message is used to transfer chunks of content. The
+   associated HASH message carries cryptographic hashes that are
+   necessary for a receiver to check the integrity of the chunk.
+   Swift's content integrity protection is based on a Merkle hash tree
+   and works as follows.
+
+3.5.1. Merkle Hash Tree
+
+   Swift uses a method of naming content based on self-certification. In
+   particular, content in swift is identified by a single cryptographic
+   hash that is the root hash in a Merkle hash tree calculated
+   recursively from the content [ABMRKL]. This self-certifying hash tree
+   allows every peer to directly detect when a malicious peer tries to
+   distribute fake content. It also ensures only a small amount of
+   information is needed to start a download (the root hash and some
+   peer addresses). For live streaming, public keys and dynamic trees
+   are used; see below.
+
+   The Merkle hash tree of a content asset that is divided into N chunks
+   is constructed as follows. Note the construction does not assume
+   chunks of content to be of fixed size. Given a cryptographic hash
+   function, more specifically a modification detection code (MDC)
+   [HAC01], such as SHA1, the hashes of all the chunks of the content
+   are calculated. Next, a binary tree of sufficient height is created.
+   Sufficient height means that the lowest level in the tree has enough
+   nodes to hold all chunk hashes in the set, as before, see HAVE
+   message. The figure below shows the tree for a content asset
+   consisting of 7 chunks. As before with the content addressing scheme,
+   the leaves of the tree correspond to a chunk and in this case are
+   assigned the hash of that chunk, starting at the left-most leaf. As
+   the base of the tree may be wider than the number of chunks, any
+   remaining leaves in the tree are assigned an empty hash value of all
+   zeros. Finally, the hash values of the higher levels in the tree are
+   calculated, by concatenating the hash values of the two children
+   (again left to right) and computing the hash of that aggregate. This
+   process ends in a hash value for the root node, which is called the
+   "root hash". Note the root hash only depends on the content, and any
+   modification of the content will result in a different root hash.
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 10]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+                                7 = root hash
+                              /   \
+                            /       \
+                          /           \
+                        /               \
+                       3*                11
+                     /   \             /    \
+                    /     \           /      \
+                   /       \         /        \
+                  1         5       9          13* = uncle hash
+                 / \       / \     / \        /  \
+                0   2     4   6   8   10*   12    14
+
+                C0  C1    C2  C3  C4  C5    C6    E
+                (Cx = hash of chunk x, E = empty hash)
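+
+   As a non-normative illustration, the construction just described can
+   be written in Python as follows; we take the text literally, so the
+   parent of two empty hashes is the hash of their concatenation (the
+   function name is ours):
+
+      import hashlib
+
+      EMPTY = b'\x00' * 20        # all-zeros empty hash (SHA1 width)
+
+      def root_hash(chunks):
+          # Leaves: the hash of each content chunk, left to right.
+          # Assumes at least one chunk.
+          level = [hashlib.sha1(c).digest() for c in chunks]
+          # Pad the base to the next power of two with empty hashes.
+          while len(level) & (len(level) - 1):
+              level.append(EMPTY)
+          # Hash concatenated child pairs until only the root remains.
+          while len(level) > 1:
+              level = [hashlib.sha1(l + r).digest()
+                       for l, r in zip(level[0::2], level[1::2])]
+          return level[0]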
3.5.2. Content Integrity Verification
+
+   Assuming a peer receives the root hash of the content it wants to
+   download from a trusted source, it can check the integrity of any
+   chunk of that content it receives as follows. It first calculates the
+   hash of the chunk it received, for example chunk C4 in the previous
+   figure. Along with this chunk it MUST receive the hashes required to
+   check the integrity of that chunk. In principle, these are the hash
+   of the chunk's sibling (C5) and those of its "uncles". A chunk's
+   uncles are the sibling Y of its parent X, and the uncle of that Y,
+   recursively until the root is reached. For chunk C4 its uncles are
+   bins 13 and 3, marked with * in the figure. Using this information
+   the peer recalculates the root hash of the tree, and compares it to
+   the root hash it received from the trusted source. If they match, the
+   chunk of content has been positively verified to be the requested
+   part of the content. Otherwise, the sending peer either sent the
+   wrong content or the wrong sibling or uncle hashes. For simplicity,
+   the set of sibling and uncle hashes is collectively referred to as
+   the "uncle hashes".
+
+   In the case of live streaming the tree of chunks grows dynamically
+   and content is identified with a public key instead of a root hash,
+   as the root hash is undefined or, more precisely, transient, as long
+   as new data is generated by the live source. Live streaming is
+   described in more detail below, but content verification works the
+   same for both live and predefined content.
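+
+   A non-normative sketch of collecting the uncle hashes for a chunk,
+   using the bin helpers sketched in Sec. 3.3.1 (names are ours):
+
+      def uncle_bins(chunk_index, root_bin):
+          # Bins whose hashes are needed to verify the given chunk:
+          # the chunk's sibling, then the sibling of each ancestor.
+          b, uncles = 2 * chunk_index, []
+          while b != root_bin:
+              p = parent(b)
+              left, right = children(p)
+              uncles.append(right if b == left else left)
+              b = p
+          return uncles
+
+      # For chunk C4 in the figure above (root bin 7): its sibling
+      # bin 10, then uncles 13 and 3, all marked with * there.
+      assert uncle_bins(4, 7) == [10, 13, 3]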
3.5.3. The Atomic Datagram Principle
+
+   As explained above, a datagram consists of a sequence of messages.
+   Ideally, every datagram sent must be independent of other datagrams,
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 11]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   so each datagram SHOULD be processed separately and a loss of one
+   datagram MUST NOT disrupt the flow. Thus, as a datagram carries zero
+   or more messages, neither messages nor message interdependencies
+   should span over multiple datagrams.
+
+   This principle implies that as any chunk is verified using its uncle
+   hashes, the necessary hashes MUST be put into the same datagram as
+   the chunk's data (Sec. 3.5.4). As a general rule, if some additional
+   data is still missing to process a message within a datagram, the
+   message SHOULD be dropped.
+
+   The hashes necessary to verify a chunk are in principle its sibling's
+   hash and all its uncle hashes, but the set of hashes to send can be
+   optimized. Before sending a packet of data to the receiver, the
+   sender inspects the receiver's previous acknowledgments (HAVE or ACK)
+   to derive which hashes the receiver already has for sure. Suppose the
+   receiver has acknowledged bin 1 (the first two chunks of the file);
+   then it must already have uncle hashes 5, 11 and so on. That is
+   because those hashes are necessary to check packets of bin 1 against
+   the root hash. Then, hashes 3, 7 and so on must also be known, as
+   they are calculated in the process of checking the uncle hash chain.
+   Hence, to send bin 12 (i.e. the 7th chunk of content), the sender
+   needs to include just the hashes for bins 14 and 9, which lets the
+   data be checked against hash 11, which is already known to the
+   receiver.
+
+   The sender MAY optimistically skip hashes which were sent out in
+   previous, still unacknowledged datagrams. This is a tradeoff between
+   redundant hash transmission and the possibility of collateral data
+   loss: if some necessary hashes are lost in the network, the delivered
+   data cannot be verified and thus has to be dropped. In either case,
+   the receiver builds the Merkle tree on-demand, incrementally,
+   starting from the root hash, and uses it for data validation.
+
+   In short, the sender MUST put into the datagram the missing hashes
+   necessary for the receiver to verify the chunk.
+
+3.5.4. DATA and HASH Messages
+
+   Concretely, a peer that wants to send a chunk of content creates a
+   datagram that MUST consist of one or more HASH messages and a DATA
+   message. The datagram MUST contain a HASH message for each hash the
+   receiver misses for integrity checking. A HASH message MUST contain
+   the bin number and hash data for each of those hashes. The DATA
+   message MUST contain the bin number of the chunk and the chunk
+   itself. A peer MAY send the required messages for multiple chunks in
+   the same datagram.
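+
+   Non-normatively, the rule of Sec. 3.5.3 and 3.5.4 for picking which
+   hashes to include can be sketched as follows, building on the
+   helpers from the Sec. 3.3.1 and 3.5.2 sketches (names are ours):
+
+      def covered(b, a):
+          # True if bin b lies within bin a's interval.
+          (fb, wb), (fa, wa) = chunk_range(b), chunk_range(a)
+          return fa <= fb and fb + wb <= fa + wa
+
+      def hashes_to_send(chunk_index, root_bin, acked_bins):
+          # The receiver can already derive: every hash inside an
+          # acknowledged bin, the uncles needed to verify it, and the
+          # ancestor hashes computed while checking those uncles.
+          derived = set()
+          for a in acked_bins:
+              b = a
+              while b != root_bin:
+                  p = parent(b)
+                  left, right = children(p)
+                  derived.add(right if b == left else left)
+                  derived.add(p)
+                  b = p
+          return [u for u in uncle_bins(chunk_index, root_bin)
+                  if u not in derived
+                  and not any(covered(u, a) for a in acked_bins)]
+
+      # Receiver acknowledged bin 1; to send chunk 6 (bin 12) only the
+      # hashes for bins 14 and 9 are needed, as in the text above.
+      assert hashes_to_send(6, 7, [1]) == [14, 9]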
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 12]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+3.6. HINT
+
+   While bulk download protocols normally do explicit requests for
+   certain ranges of data (i.e., use a pull model, for example,
+   BitTorrent [BITTORRENT]), live streaming protocols quite often use a
+   request-less push model to save round trips. Swift supports both
+   models of operation.
+
+   A peer MUST send a HINT message containing the bin of the chunk
+   interval it wants to download. A peer receiving a HINT message MAY
+   send out the requested pieces. When it receives multiple HINTs
+   (either in one datagram or in multiple), the peer SHOULD process the
+   HINTs sequentially. When live streaming, it may also send some other
+   chunks in case it runs out of requests or for some other reason. In
+   that case the only purpose of HINT messages is to coordinate peers
+   and to avoid unnecessary data retransmission, hence the name.
+
+
+3.7. Peer Address Exchange and NAT Hole Punching
+
+   Peer address exchange messages (or PEX messages for short) are common
+   for many peer-to-peer protocols. By exchanging peer addresses in
+   gossip fashion, peers relieve central coordinating entities (the
+   trackers) from unnecessary work. Swift optionally features two types
+   of PEX messages: PEX_REQ and PEX_ADD. A peer that wants to retrieve
+   some peer addresses MUST send a PEX_REQ message. The receiving peer
+   MAY respond with a PEX_ADD message containing the addresses of
+   several peers. The addresses MUST be of peers it has recently
+   exchanged messages with, to guarantee liveliness.
+
+   To unify peer exchange and NAT hole punching functionality, the
+   sending pattern of PEX messages is restricted. As the swift handshake
+   is able to do simple NAT hole punching [SNP] transparently, PEX
+   messages must be emitted in a way that facilitates that. Namely, once
+   peer A introduces peer B to peer C by sending a PEX_ADD message to C,
+   it SHOULD also send a message to B introducing C. The messages SHOULD
+   be sent within 2 seconds of each other, but MAY not be simultaneous,
+   instead leaving a gap of twice the "typical" RTT, i.e. 300-600ms. The
+   peers are supposed to initiate handshakes to each other, thus forming
+   a simple NAT hole punching pattern where the introducing peer
+   effectively acts as a STUN server [RFC5389]. Still, peers MAY ignore
+   PEX messages if uninterested in obtaining new peers, or because of
+   security considerations (rate limiting), or any other reason.
+
+   The PEX messages can be used to construct a dedicated tracker peer.
+
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 13]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+3.8. KEEPALIVE
+
+   A peer MUST send a datagram containing a KEEPALIVE message
+   periodically to each peer it wants to interact with in the future but
+   has no other messages to send to at present.
+
+
+3.9. VERSION
+
+   Peers MUST convey which version of the swift protocol they support
+   using a VERSION message. This message MUST be included in the initial
+   (handshake) datagrams and MUST indicate which version of the swift
+   protocol the sending peer supports.
+
+
+3.10. Conveying Peer Capabilities
+
+   Peers may support just a subset of the swift messages. For example,
+   peers running over TCP may not accept ACK messages, or peers used
+   with a centralized tracking infrastructure may not accept PEX
+   messages. For these reasons, peers SHOULD signal which subset of the
+   swift messages they support by means of the MSGTYPE_RCVD message.
+   This message SHOULD be included in the initial (handshake) datagrams
+   and MUST indicate which swift protocol messages the sending peer
+   supports.
+
+
+3.11. Directory Lists
+
+   Directory list files MUST start with magic bytes ".\n..\n". The rest
+   of the file is a newline-separated list of hashes and file names for
+   the content of the directory. An example:
+
+   .
+   ..
+   1234567890ABCDEF1234567890ABCDEF12345678  readme.txt
+   01234567890ABCDEF1234567890ABCDEF1234567  big_file.dat
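+
+   A non-normative sketch of parsing such a directory list, assuming a
+   single whitespace run separates the hash from the file name (the
+   exact separator is not specified above; the function name is ours):
+
+      def parse_dirlist(text):
+          lines = text.split('\n')
+          if lines[:2] != ['.', '..']:
+              raise ValueError('missing ".\\n..\\n" magic bytes')
+          # Remaining non-empty lines: "<hex root hash> <file name>".
+          return [tuple(l.split(None, 1)) for l in lines[2:] if l]
+
+   Each named file is then retrieved by its root hash, and may itself
+   turn out to be another directory list, as noted in Sec. 3.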
+
+
+4. Automatic Detection of Content Size
+
+   In swift, the root hash of a static content asset, such as a video
+   file, along with some peer addresses, is sufficient to start a
+   download. In addition, swift can reliably and automatically derive
+   the size of such content from information received from the network
+   when fixed-size chunks are used. As a result, it is not necessary to
+   include the size of the content asset as metadata of the content, in
+   addition to the root hash. Implementations of swift MAY use this
+   automatic detection feature.
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 14]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+4.1. Peak Hashes
+
+   The ability of a newcomer peer to detect the size of the content
+   depends heavily on the concept of peak hashes. Peak hashes, in
+   general, enable two cornerstone features of swift: reliable file size
+   detection and download/live streaming unification (see Sec. 5). The
+   concept of peak hashes depends on the concepts of filled and
+   incomplete bins. Recall that when constructing the binary trees for
+   content verification and addressing, the base of the tree may have
+   more leaves than the number of chunks in the content. In the Merkle
+   hash tree these leaves were assigned empty all-zero hashes to be able
+   to calculate the higher level hashes. A filled bin is now defined as
+   a bin number that addresses an interval of leaves that consists only
+   of hashes of content chunks, not empty hashes. Conversely, an
+   incomplete (not filled) bin addresses an interval that also contains
+   empty hashes, typically an interval that extends past the end of the
+   file. In the following figure, bins 7, 11, 13 and 14 are incomplete;
+   the rest are filled.
+
+   Formally, a peak hash is a hash in the Merkle tree defined over a
+   filled bin, whose sibling is defined over an incomplete bin.
+   Practically, suppose a file is 7162 bytes long and a chunk is 1
+   kilobyte. That file fits into 7 chunks, the tail chunk being 1018
+   bytes long. The Merkle tree for that file looks as follows. Following
+   the definition, the peak hashes of this file are in bins 3, 9 and 12,
+   denoted with a *. E denotes an empty hash.
+
+                                7
+                              /   \
+                            /       \
+                          /           \
+                        /               \
+                       3*                11
+                     /   \             /    \
+                    /     \           /      \
+                   /       \         /        \
+                  1         5       9*         13
+                 / \       / \     / \        /  \
+                0   2     4   6   8   10    12*   14
+
+                C0  C1    C2  C3  C4  C5    C6    E
+                (C6 = 1018 bytes, E = empty hash)
+
+   Peak hashes can be explained by the binary representation of the
+   number of chunks the file occupies. The binary representation for 7
+   is 111. Every "1" in the binary representation of the file's length
+   in chunks corresponds to a peak hash. For this particular file there
+   are indeed three peaks, bin numbers 3, 9, 12. The number of peak
+   hashes
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 15]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   for a file is therefore also at most logarithmic with its size.
+
+   A peer knowing which bins contain the peak hashes for the file can
+   therefore calculate the number of chunks it consists of, and thus get
+   an estimate of the file size (given that all chunks but the last are
+   of fixed size). Which bins are the peaks can be securely communicated
+   from one (untrusted) peer A to another B by letting A send the peak
+   hashes and their bin numbers to B. It can be shown that the root hash
+   that B obtained from a trusted source is sufficient to verify that
+   these are indeed the right peak hashes, as follows.
+
+   Lemma: Peak hashes can be checked against the root hash.
+
+   Proof: (a) Any peak hash is always a left sibling: if it were a right
+   sibling, its left neighbor/sibling would also be defined over a
+   filled bin, because of the way chunks are laid out in the leaves;
+   contradiction. (b) For the rightmost peak hash, its right sibling is
+   zero. (c) For any peak hash, its right sibling can be calculated
+   using peak hashes to the left and zeros for empty bins. (d) Once the
+   right sibling of the leftmost peak hash is calculated, its parent can
+   be calculated. (e) Once that parent is calculated, we can trivially
+   get to the root hash by concatenating the hash with zeros and hashing
+   it repeatedly.
+
+   Informally, the Lemma can be expressed as follows: peak hashes cover
+   all data, so the remaining hashes are either trivial (zeros) or can
+   be calculated from peak hashes and zero hashes.
+
+   Finally, once peer B has obtained the number of chunks in the content
+   it can determine the exact file size as follows. Given that all
+   chunks except the last are of fixed size, B just needs to know the
+   size of the last chunk. Knowing the number of chunks, B can calculate
+   the bin number of the last chunk and download it. As always, B
+   verifies the integrity of this chunk against the trusted root hash.
+   As there is only one chunk of data that leads to a successful
+   verification, the size of this chunk must be correct. B can then
+   determine the exact file size as
+
+      (number of chunks - 1) * fixed chunk size + size of last chunk
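+
+   Non-normatively, the mapping between the chunk count and the peak
+   bins, and the size formula above, can be sketched as follows (using
+   chunk_range() from the Sec. 3.3.1 sketch; names are ours):
+
+      def peak_bins(num_chunks):
+          # One peak per set bit of the chunk count, widest first.
+          peaks, first = [], 0
+          for bit in reversed(range(num_chunks.bit_length())):
+              if num_chunks & (1 << bit):
+                  width = 1 << bit           # chunks under this peak
+                  peaks.append(2 * first + width - 1)
+                  first += width
+          return peaks
+
+      assert peak_bins(7) == [3, 9, 12]      # the example file above
+
+      def chunk_count(peaks):
+          # The inverse: sum of the interval widths of the peaks.
+          return sum(chunk_range(p)[1] for p in peaks)
+
+      def content_size(peaks, chunk_size, last_chunk_size):
+          return (chunk_count(peaks) - 1) * chunk_size + last_chunk_size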
4.2. Procedure
+
+   A swift implementation that wants to use automatic size detection
+   MUST operate as follows. When a peer B sends a DATA message for the
+   first time to a peer A, B MUST include all the peak hashes for the
+   content in the same datagram, unless A has already signalled earlier
+   in the exchange that it knows the peak hashes by having acknowledged
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 16]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   any bin, even the empty one. The receiver A MUST check the peak
+   hashes against the root hash to determine the approximate content
+   size. To obtain the exact content size, peer A MUST download the
+   last chunk of the content from any peer that offers it.
+
+
+
+
+5. Live streaming
+
+   In the case of live streaming, a transfer is bootstrapped with a
+   public key instead of a root hash, as the root hash is undefined or,
+   more precisely, transient, as long as new data is being generated by
+   the live source. Live/download unification is achieved by sending
+   signed peak hashes on-demand, ahead of the actual data. As before,
+   the sender might use acknowledgements to derive which content range
+   the receiver has peak hashes for, and to prepend the data hashes with
+   the necessary (signed) peak hashes. Except for the fact that the set
+   of peak hashes changes with time, other parts of the algorithm work
+   as described in Sec. 3.
+
+   As with the static content assets of the previous section, in live
+   streaming the content length is not known in advance, but derived
+   on-the-go from the peak hashes. Suppose our 7 KB stream is extended
+   by another kilobyte. Now hash 7 becomes the only peak hash, subsuming
+   hashes 3, 9 and 12, so the source sends out a SIGNED_HASH message to
+   announce that fact.
+
+   The number of cryptographic operations is limited. For example,
+   consider a 25 frame/second video transmitted over UDP. When each
+   frame is transmitted in its own chunk, only 25 signature verification
+   operations per second are required at the receiver for bitrates up to
+   ~12.8 megabit/second. For higher bitrates multiple UDP packets per
+   frame are needed, and the number of verifications doubles.
+
+
+
+
+6. Transport Protocols and Encapsulation
+
+6.1. UDP
+
+6.1.1. Chunk Size
+
+   Currently, swift-over-UDP is the preferred deployment option.
+   Effectively, UDP allows the use of IP with minimal overhead and it
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 17]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   also allows userspace implementations. The default is to use chunks
+   of 1 kilobyte, such that a datagram fits in an Ethernet-sized IP
+   packet. The bin numbering allows swift to be used over Jumbo
+   frames/datagrams: both DATA and HAVE/ACK messages may use e.g.
+   8-kilobyte packets instead of the standard 1 KiB. The hashing scheme
+   stays the same. Using swift with 512- or 256-byte packets is
+   theoretically possible with 64-bit byte-precise bin numbers, but IP
+   fragmentation might be a better method to achieve the same result.
+
+
+6.1.2. Datagrams and Messages
+
+   When using UDP, the abstract datagram described above corresponds
+   directly to a UDP datagram. Each message within a datagram has a
+   fixed length, which depends on the type of the message. The first
+   byte of a message denotes its type.
The currently defined types are: + + HANDSHAKE = 0x00 + DATA = 0x01 + ACK = 0x02 + HAVE = 0x03 + HASH = 0x04 + PEX_ADD = 0x05 + PEX_REQ = 0x06 + SIGNED_HASH = 0x07 + HINT = 0x08 + MSGTYPE_RCVD = 0x09 + VERSION = 0x10 + + + Furthermore, integers are serialized in the network (big-endian) byte + order. So consider the example of an ACK message (Sec 3.4). It has + message type of 0x02 and a payload of a bin number, a four-byte + integer (say, 1); hence, its on the wire representation for UDP can + be written in hex as: "02 00000001". This hex-like two character-per- + byte notation is used to represent message formats in the rest of + this section. + +6.1.3. Channels + + As it is increasingly complex for peers to enable UDP communication + between each other due to NATs and firewalls, swift-over-UDP uses a + multiplexing scheme, called "channels", to allow multiple swarms to + use the same UDP port. Channels loosely correspond to TCP connections + and each channel belongs to a single swarm. When channels are used, + each datagram starts with four bytes corresponding to the receiving + channel number. + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 18] + +Internet-Draft swift December 19, 2011 + + +6.1.4. HANDSHAKE and VERSION + + A channel is established with a handshake. To start a handshake, the + initiating peer needs to know: + + (1) the IP address of a peer + (2) peer's UDP port and + (3) the root hash of the content (see Sec. 3.5.1). + + To do the handshake the initiating peer sends a datagram that MUST + start with an all 0-zeros channel number followed by a VERSION + message, then a HASH message whose payload is the root hash, and a + HANDSHAKE message, whose only payload is a locally unused channel + number. + + On the wire the datagram will look something like this: + 00000000 10 01 + 04 7FFFFFFF 1234123412341234123412341234123412341234 + 00 00000011 + (to unknown channel, handshake from channel 0x11 speaking protocol + version 0x01, initiating a transfer of a file with a root hash + 123...1234) + + The receiving peer MUST respond with a datagram that starts with the + channel number from the sender's HANDSHAKE message, followed by a + VERSION message, then a HANDSHAKE message, whose only payload is a + locally unused channel number, followed by any other messages it + wants to send. + + Peer's response datagram on the wire: + 00000011 10 01 + 00 00000022 03 00000003 + (peer to the initiator: use channel number 0x22 for this transfer and + proto version 0x01; I also have first 4 chunks of the file, see Sec. + 4.3) + + At this point, the initiator knows that the peer really responds; for + that purpose channel ids MUST be random enough to prevent easy + guessing. So, the third datagram of a handshake MAY already contain + some heavy payload. To minimize the number of initialization + roundtrips, the first two datagrams MAY also contain some minor + payload, e.g. a couple of HAVE messages roughly indicating the + current progress of a peer or a HINT (see Sec. 3.6). When receiving + the third datagram, both peers have the proof they really talk to + each other; three-way handshake is complete. + + A peer MAY explicit close a channel by sending a HANDSHAKE message + that MUST contain an all 0-zeros channel number. + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 19] + +Internet-Draft swift December 19, 2011 + + + On the wire: + 00 00000000 + + +6.1.5. 
HAVE + + A HAVE message (type 0x03) states that the sending peer has the + complete specified bin and successfully checked its integrity: + 03 00000003 + (got/checked first four kilobytes of a file/stream) + + + +6.1.6. ACK + + An ACK message (type 0x02) acknowledges data that was received from + its addressee; to facilitate delay-based congestion control, an + ACK message contains a timestamp, in particular, a 64-bit microsecond + time. + 02 00000002 12345678 + (got the second kilobyte of the file from you; my microsecond + timer was showing 0x12345678 at that moment) + + +6.1.7. HASH + + A HASH message (type 0x04) consists of a four-byte bin number and + the cryptographic hash (e.g. a 20-byte SHA1 hash) + 04 7FFFFFFF 1234123412341234123412341234123412341234 + + +6.1.8. DATA + + A DATA message (type 0x01) consists of a four-byte bin number and the + actual chunk. In case a datagram contains a DATA message, a sender + MUST always put the data message in the tail of the datagram. For + example: + 01 00000000 48656c6c6f20776f726c6421 + (This message accommodates an entire file: "Hello world!") + + +6.1.9. KEEPALIVE + + Keepalives do not have a message type on UDP. They are just simple + datagrams consisting of a 4-byte channel id only. + + On the wire: + 00000022 + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 20] + +Internet-Draft swift December 19, 2011 + + +6.1.10. Flow and Congestion Control + + Explicit flow control is not necessary in swift-over-UDP. In the case + of video-on-demand the receiver will request data explicitly from + peers and is therefore in control of how much data is coming towards + it. In the case of live streaming, where a push-model may be used, + the amount of data incoming is limited to the bitrate, which the + receiver must be able to process otherwise it cannot play the stream. + Should, for any reason, the receiver get saturated with data that + situation is perfectly detected by the congestion control. Swift- + over-UDP can support different congestion control algorithms, in + particular, it supports the new IETF Low Extra Delay Background + Transport (LEDBAT) congestion control algorithm that ensures that + peer-to-peer traffic yields to regular best-effort traffic [LEDBAT]. + + +6.2. TCP + + When run over TCP, swift becomes functionally equivalent to + BitTorrent. Namely, most swift messages have corresponding BitTorrent + messages and vice versa, except for BitTorrent's explicit interest + declarations and choking/unchoking, which serve the classic + implementation of the tit-for-tat algorithm [TIT4TAT]. However, TCP + is not well suited for multiparty communication, as argued in Sec. 9. + + +6.3. RTP Profile for PPSP + + In this section we sketch how swift can be integrated into RTP + [RFC3550] to form the Peer-to-Peer Streaming Protocol (PPSP) [I- + D.ietf-ppsp-reqs] running over UDP. The PPSP charter requires + existing media transfer protocols be used [PPSPCHART]. Hence, the + general idea is to define swift as a profile of RTP, in the same way + as the Secure Real-time Transport Protocol (SRTP) [RFC3711]. SRTP, + and therefore swift is considered ``a "bump in the stack" + implementation which resides between the RTP application and the + transport layer. [swift] intercepts RTP packets and then forwards an + equivalent [swift] packet on the sending side, and intercepts [swift] + packets and passes an equivalent RTP packet up the stack on the + receiving side.'' [RFC3711]. 
+ + In particular, to encode a swift datagram in an RTP packet all the + non-DATA messages of swift such as HINT and HAVE are postfixed to the + RTP packet using the UDP encoding and the content of DATA messages is + sent in the payload field. Implementations MAY omit the RTP header + for packets without payload. This construction allows the streaming + application to use of all RTP's current features, and with a + modification to the Merkle tree hashing scheme (see below) meets + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 21] + +Internet-Draft swift December 19, 2011 + + + swift's atomic datagram principle. The latter means that a receiving + peer can autonomously verify the RTP packet as being correct content, + thus preventing the spread of corrupt data (see requirement PPSP.SEC- + REQ-4). + + The use of ACK messages for reliability is left as a choice of the + application using PPSP. + + +6.3.1. Design + + 6.3.1.1. Joining a Swarm + + To commence a PPSP download a peer A must have the swarm ID of the + stream and a list of one or more tracker contact points (e.g. + host+port). The list of trackers is optional in the presence of a + decentralized tracking mechanism. The swarm ID consists of the swift + root hash of the content, which is divided into chunks (see + Discussion). + + Peer A now registers with the PPSP tracker following the tracker + protocol [I-D.ietf.ppsp-reqs] and receives the IP address and RTP + port of peers already in the swarm, say B, C, and D. Peer A now sends + an RTP packet containing a HANDSHAKE without channel information to + B, C, and D. This serves as an end-to-end check that the peers are + actually in the correct swarm. Optionally A could include a HINT + message in some RTP packets if it wants to start receiving content + immediately. B and C respond with a HANDSHAKE and HAVE messages. D + sends just a HANDSHAKE and omits HAVE messages as a way of choking A. + + + 6.3.1.2. Exchanging Chunks + + In response to B and C, A sends new RTP packets to B and C with HINTs + for disjunct sets of chunks. B and C respond with the requested + chunks in the payload and HAVE messages, updating their chunk + availability. Upon receipt, A sends HAVE for the chunks received and + new HINT messages to B and C. When e.g. C finds that A obtained a + chunk (from B) that C did not yet have, C's response includes a HINT + for that chunk. + + D does not send HAVE messages, instead if D decides to unchoke peer + A, it sends an RTP packet with HAVE messages to inform A of its + current availability. If B or C decide to choke A they stop sending + HAVE and DATA messages and A should then rerequest from other peers. + They may continue to send HINT messages, or exponentially slowing + KEEPALIVE messages such that A keeps sending them HAVE messages. + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 22] + +Internet-Draft swift December 19, 2011 + + + Once A has received all content (video-on-demand use case) it stops + sending messages to all other peers that have all content (a.k.a. + seeders). + + + 6.3.1.3. Leaving a Swarm + + Peers can implicitly leave a swarm by stopping to respond to + messages. Sending peers should remove these peers from the current + peer list. This mechanism works for both graceful and ungraceful + leaves (i.e., peer crashes or disconnects). When leaving gracefully, + a peer should deregister from the tracker following the PPSP tracker + protocol. + + More explicit graceful leaves could be implemented using RTCP. 
In + particular, a peer could send a RTCP BYE on the RTCP port that is + derivable from a peer's RTP port for all peers in its current peer + list. However, to prevent malicious peers from sending BYEs a form of + peer authentication is required (e.g. using public keys as peer IDs + [PERMIDS].) + + + 6.3.1.4. Discussion + + Using swift as an RTP profile requires a change to the content + integrity protection scheme (see Sec. 3.5). The fields in the RTP + header, such as the timestamp and PT fields, must be protected by the + Merkle tree hashing scheme to prevent malicious alterations. + Therefore, the Merkle tree is no longer constructed from pure content + chunks, but from the complete RTP packet for a chunk as it would be + transmitted (minus the non-DATA swift messages). In other words, the + hash of the leaves in the tree is the hash over the Authenticated + Portion of the RTP packet as defined by SRTP, illustrated in the + following figure (extended from [RFC3711]). There is no need for the + RTP packets to be fixed size, as the hashing scheme can deal with + variable-sized leaves. + + + + + + + + + + + + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 23] + +Internet-Draft swift December 19, 2011 + + + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+<+ + |V=2|P|X| CC |M| PT | sequence number | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | timestamp | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | synchronization source (SSRC) identifier | | + +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ | + | contributing source (CSRC) identifiers | | + | .... | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | RTP extension (OPTIONAL) | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | payload ... | | + | +-------------------------------+ | + | | RTP padding | RTP pad count | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+<+ + ~ swift non-DATA messages (REQUIRED) ~ | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | length of swift messages (REQUIRED) | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | + | + Authenticated Portion ---+ + + Figure: The format of an RTP-Swift packet. + + + As a downside, with variable-sized payloads the automatic content + size detection of Section 4 no longer works, so content length MUST + be explicit in the metadata. In addition, storage on disk is more + complex with out-of-order, variable-sized packets. On the upside, + carrying RTP over swift allow decryption-less caching. + + As with UDP, another matter is how much data is carried inside each + packet. An important swift-specific factor here is the resulting + number of hash calculations per second needed to verify chunks. + Experiments should be conducted to ensure they are not excessive for, + e.g., mobile hardware. + + At present, Peer IDs are not required in this design. + + +6.3.2. PPSP Requirements + + 6.3.2.1. Basic Requirements + + - PPSP.REQ-1: The swift PEX message can also be used as the basis for + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 24] + +Internet-Draft swift December 19, 2011 + + + a tracker protocol, to be discussed elsewhere. + + - PPSP.REQ-2: This draft preserves the properties of RTP. 
+ + - PPSP.REQ-3: This draft does not place requirements on peer IDs, + IP+port is sufficient. + + - PPSP.REQ-4: The content is identified by its root hash (video-on- + demand) or a public key (live streaming). + + - PPSP.REQ-5: The content is partitioned by the streaming + application. + + - PPSP.REQ-6: Each chunk is identified by a bin number (and its + cryptographic hash.) + + - PPSP.REQ-7: The protocol is carried over UDP because RTP is. + + - PPSP.REQ-8: The protocol has been designed to allow meaningful data + transfer between peers as soon as possible and to avoid unnecessary + round-trips. It supports small and variable chunk sizes, and its + content integrity protection enables wide scale caching. + + + 6.3.2.2. Peer Protocol Requirements + + - PPSP.PP.REQ-1: A GET_HAVE would have to be added to request which + chunks are available from a peer, if the proposed push-based HAVE + mechanism is not sufficient. + + - PPSP.PP.REQ-2: A set of HAVE messages satisfies this. + + - PPSP.PP.REQ-3: The PEX_REQ message satisfies this. Care should be + taken with peer address exchange in general, as the use of such + hearsay is a risk for the protocol as it may be exploited by + malicious peers (as a DDoS attack mechanism). A secure tracking / + peer sampling protocol like [PUPPETCAST] may be needed to make peer- + address exchange safe. + + - PPSP.PP.REQ-4: HAVE messages convey current availability via a push + model. + + - PPSP.PP.REQ-5: Bin numbering enables a compact representation of + chunk availability. + + - PPSP.PP.REQ-6: A new PPSP specific Peer Report message would have + to be added to RTCP. + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 25] + +Internet-Draft swift December 19, 2011 + + + - PPSP.PP.REQ-7: Transmission and chunk requests are integrated in + this protocol. + + + 6.3.2.3. Security Requirements + + - PPSP.SEC.REQ-1: An access control mechanism like Closed Swarms + [CLOSED] would have to be added. + + - PPSP.SEC.REQ-2: As RTP is carried verbatim over swift, RTP + encryption can be used. Note that just encrypting the RTP part will + allow for caching servers that are part of the swarm but do not need + access to the decryption keys. They just need access to the swift + HASHES in the postfix to verify the packet's integrity. + + - PPSP.SEC.REQ-3: RTP encryption or IPsec [RFC4303] can be used, if + the swift messages must also be encrypted. + + - PPSP.SEC.REQ-4: The Merkle tree hashing scheme prevents the + indirect spread of corrupt content, as peers will only forward chunks + to others if their integrity check out. Another protection mechanism + is to not depend on hearsay (i.e., do not forward other peers' + availability information), or to only use it when the information + spread is self-certified by its subjects. + + Other attacks, such as a malicious peer claiming it has content but + not replying, are still possible. Or wasting CPU and bandwidth at a + receiving peer by sending packets where the DATA doesn't match the + HASHes. + + + - PPSP.SEC.REQ-5: The Merkle tree hashing scheme allows a receiving + peer to detect a malicious or faulty sender, which it can + subsequently ignore. Spreading this knowledge to other peers such + that they know about this bad behavior is hearsay. + + + - PPSP.SEC.REQ-6: A risk in peer-to-peer streaming systems is that + malicious peers launch an Eclipse [ECLIPSE] attack on the initial + injectors of the content (in particular in live streaming). 
The + attack tries to let the injector upload to just malicious peers which + then do not forward the content to others, thus stopping the + distribution. An Eclipse attack could also be launched on an + individual peer. Letting these injectors only use trusted trackers + that provide true random samples of the population or using a secure + peer sampling service [PUPPETCAST] can help negate such an attack. + + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 26] + +Internet-Draft swift December 19, 2011 + + + - PPSP.SEC.REQ-7: swift supports decentralized tracking via PEX or + additional mechanisms such as DHTs [SECDHTS], but self-certification + of addresses is needed. Self-certification means For example, that + each peer has a public/private key pair [PERMIDS] and creates self- + certified address changes that include the swarm ID and a timestamp, + which are then exchanged among peers or stored in DHTs. See also + discussion of PPSP.PP.REQ-3 above. Content distribution can continue + as long as there are peers that have it available. + + - PPSP.SEC.REQ-8: The verification of data via hashes obtained from a + trusted source is well-established in the BitTorrent protocol + [BITTORRENT]. The proposed Merkle tree scheme is a secure extension + of this idea. Self-certification and not using hearsay are other + lessons learned from existing distributed systems. + + - PPSP.SEC.REQ-9: Swift has built-in content integrity protection via + self-certified naming of content, see SEC.REQ-5 and Sec. 3.5.1. + + +6.4. HTTP (as PPSP) + + In this section we sketch how swift can be carried over HTTP + [RFC2616] to form the PPSP running over TCP. The general idea is to + encode a swift datagram in HTTP GET and PUT requests and their + replies by transmitting all the non-DATA messages such as HINTs and + HAVEs as headers and send DATA messages in the body. This idea + follows the atomic datagram principle for each request and reply. So + a receiving peer can autonomously verify the message as carrying + correct data, thus preventing the spread of corrupt data (see + requirement PPSP.SEC-REQ-4). + + A problem with HTTP is that it is a client/server protocol. To + overcome this problem, a peer A uses a PUT request instead of a GET + request if the peer B has indicated in a reply that it wants to + retrieve a chunk from A. In cases where peer A is no longer + interested in receiving requests from B (described below) B may need + to establish a new HTTP connection to A to quickly download a chunk, + instead of waiting for a convenient time when A sends another + request. As an alternative design, two HTTP connections could be used + always., but this is inefficient. + +6.4.1. Design + + 6.4.1.1. Joining a Swarm + + To commence a PPSP download a peer A must have the swarm ID of the + stream and a list of one or more tracker contact points, as above. + The swarm ID as earlier also consists of the swift root hash of the + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 27] + +Internet-Draft swift December 19, 2011 + + + content, divided in chunks by the streaming application (e.g. fixed- + size chunks of 1 kilobyte for video-on-demand). + + Peer A now registers with the PPSP tracker following the tracker + protocol [I-D.ietf-ppsp-reqs] and receives the IP address and HTTP + port of peers already in the swarm, say B, C, and D. Peer A now + establishes persistent HTTP connections with B, C, D and sends GET + requests with the Request-URI set to /. 
Optionally + A could include a HINT message in some requests if it wants to start + receiving content immediately. A HINT is encoded as a Range header + with a new "bins" unit [RFC2616,$14.35]. + + B and C respond with a 200 OK reply with header-encoded HAVE + messages. A HAVE message is encoded as an extended Accept-Ranges: + header [RFC2616,$14.5] with the new bins unit and the possibility of + listing the set of accepted bins. If no HINT/Range header was present + in the request, the body of the reply is empty. D sends just a 200 OK + reply and omits the HAVE/Accept-Ranges header as a way of choking A. + + 6.4.1.2. Exchanging Chunks + + In response to B and C, A sends GET requests with Range headers, + requesting disjunct sets of chunks. B and C respond with 206 Partial + Content replies with the requested chunks in the body and Accept- + Ranges headers, updating their chunk availability. The HASHES for the + chunks are encoded in a new Content-Merkle header and the Content- + Range is set to identify the chunk [RFC2616,$14.16]. A new + "multipart-bin ranges" equivalent to the "multipart-bytes ranges" + media type may be used to transmit multiple chunks in one reply. + + Upon receipt, A sends a new GET request with a HAVE/Accept-Ranges + header for the chunks received and new HINT/Range headers to B and C. + Now when e.g. C finds that A obtained a chunk (from B) that C did not + yet have, C's response includes a HINT/Range for that chunk. In this + case, A's next request to C is not a GET request, but a PUT request + with the requested chunk sent in the body. + + Again, working around the fact that HTTP is a client/server protocol, + peer A periodically sends HEAD requests to peer D (which was + virtually choking A) that serve as keepalives and may contain + HAVE/Accept-Ranges headers. If D decides to unchoke peer A, it + includes an Accept-Ranges header in the "200 OK" reply to inform A of + its current chunk availability. + + If B or C decide to choke A they start responding with 204 No Content + replies without HAVE/Accept-Ranges headers and A should then re- + request from other peers. However, if their replies contain + HINT/Range headers A should keep on sending PUT requests with the + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 28] + +Internet-Draft swift December 19, 2011 + + + desired data (another client/server workaround). If not, A should + slowly send HEAD requests as keepalive and content availability + update. + + Once A has received all content (video-on-demand use case) it closes + the persistent connections to all other peers that have all content + (a.k.a. seeders). + + + 6.4.1.3. Leaving a Swarm + + Peers can explicitly leave a swarm by closing the connection. This + mechanism works for both graceful and ungraceful leaves (i.e., peer + crashes or disconnects). When leaving gracefully, a peer should + deregister from the tracker following the PPSP tracker protocol. + + + 6.4.1.4. Discussion + + As mentioned earlier, this design suffers from the fact that HTTP is + a client/server protocol. A solution where a peer establishes two + HTTP connections with every other peer may be more elegant, but + inefficient. The mapping of swift messages to headers remains the + same: + + HINT = Range + HAVE = Accept-Ranges + HASH = Content-Merkle + PEX = e.g. extended Content-Location + + The Content-Merkle header should include some parameters to indicate + the hash function and chunk size (e.g. SHA1 and 1K) used to build the + Merkle tree. + + +6.4.2. 
PPSP Requirements + + 6.4.2.1. Basic Requirements + + - PPSP.REQ-1: The HTTP-based BitTorrent tracker protocol [BITTORRENT] + can be used as the basis for a tracker protocol, to be discussed + elsewhere. + + - PPSP.REQ-2: This draft preserves the properties of HTTP, but extra + mechanisms may be necessary to protect against faulty or malicious + peers. + + - PPSP.REQ-3: This draft does not place requirements on peer IDs, + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 29] + +Internet-Draft swift December 19, 2011 + + + IP+port is sufficient. + + - PPSP.REQ-4: The content is identified by its root hash (video-on- + demand) or a public key (live streaming). + + - PPSP.REQ-5: The content is partitioned into chunks by the streaming + application (see 6.4.1.1.) + + - PPSP.REQ-6: Each chunk is identified by a bin number (and its + cryptographic hash.) + + - PPSP.REQ-7: The protocol is carried over TCP because HTTP is. + + + 6.4.2.2. Peer Protocol Requirements + + - PPSP.PP.REQ-1: A HEAD request can be used to find out which chunks + are available from a peer, which returns the new Accept-Ranges + header. + + - PPSP.PP.REQ-2: The new Accept-Ranges header satisfies this. + + - PPSP.PP.REQ-3: A GET with a request-URI requesting the peers of a + resource (e.g. //peers) would have to be added to + request known peers from a peer, if the proposed push-based + PEX/~Content-Location mechanism is not sufficient. Care should be + taken with peer address exchange in general, as the use of such + hearsay is a risk for the protocol as it may be exploited by + malicious peers (as a DDoS attack mechanism). A secure tracking / + peer sampling protocol like [PUPPETCAST] may be needed to make peer- + address exchange safe. + + + - PPSP.PP.REQ-4: HAVE/Accept-Ranges headers convey current + availability. + + - PPSP.PP.REQ-5: Bin numbering enables a compact representation of + chunk availability. + + - PPSP.PP.REQ-6: A new PPSP specific Peer-Report header would have to + be added. + + - PPSP.PP.REQ-7: Transmission and chunk requests are integrated in + this protocol. + + + + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 30] + +Internet-Draft swift December 19, 2011 + + + 6.4.2.3. Security Requirements + + - PPSP.SEC.REQ-1: An access control mechanism like Closed Swarms + [CLOSED] would have to be added. + + - PPSP.SEC.REQ-2: As swift is carried over HTTP, HTTPS encryption can + be used instead. Alternatively, just the body could be encrypted. The + latter allows for caching servers that are part of the swarm but do + not need access to the decryption keys (they need access to the swift + HASHES in the headers to verify the packet's integrity). + + - PPSP.SEC.REQ-3: HTTPS encryption or the content encryption + facilities of HTTP can be used. + + - PPSP.SEC.REQ-4: The Merkle tree hashing scheme prevents the + indirect spread of corrupt content, as peers will only forward + content to others if its integrity checks out. Another protection + mechanism is to not depend on hearsay (i.e., do not forward other + peers' availability information), or to only use it when the + information spread is self-certified by its subjects. + + Other attacks such as a malicious peer claiming it has content, but + not replying are still possible. Or wasting CPU and bandwidth at a + receiving peer by sending packets where the body doesn't match the + HASH/Content-Merkle headers. 
+
+
+   - PPSP.SEC.REQ-5: The Merkle tree hashing scheme allows a receiving
+   peer to detect a malicious or faulty sender, which it can
+   subsequently close its connection to and ignore. Spreading this
+   knowledge to other peers, such that they know about this bad
+   behavior, is hearsay.
+
+
+   - PPSP.SEC.REQ-6: A risk in peer-to-peer streaming systems is that
+   malicious peers launch an Eclipse [ECLIPSE] attack on the initial
+   injectors of the content (in particular in live streaming). The
+   attack tries to let the injector upload to just malicious peers,
+   which then do not forward the content to others, thus stopping the
+   distribution. An Eclipse attack could also be launched on an
+   individual peer. Letting these injectors only use trusted trackers
+   that provide true random samples of the population, or using a secure
+   peer sampling service [PUPPETCAST], can help negate such an attack.
+
+
+   - PPSP.SEC.REQ-7: swift supports decentralized tracking via PEX or
+   additional mechanisms such as DHTs [SECDHTS], but self-certification
+   of addresses is needed. Self-certification means, for example, that
+
+
+
+Grishchenko and Bakker    Expires June 21, 2012                [Page 31]
+
+Internet-Draft                   swift                 December 19, 2011
+
+
+   each peer has a public/private key pair [PERMIDS] and creates self-
+   certified address changes that include the swarm ID and a timestamp,
+   which are then exchanged among peers or stored in DHTs. See also the
+   discussion of PPSP.PP.REQ-3 above. Content distribution can continue
+   as long as there are peers that have it available.
+
+   - PPSP.SEC.REQ-8: The verification of data via hashes obtained from a
+   trusted source is well-established in the BitTorrent protocol
+   [BITTORRENT]. The proposed Merkle tree scheme is a secure extension
+   of this idea. Self-certification and not using hearsay are other
+   lessons learned from existing distributed systems.
+
+   - PPSP.SEC.REQ-9: Swift has built-in content integrity protection via
+   self-certified naming of content, see SEC.REQ-5 and Sec. 3.5.1.
+
+
+7. Security Considerations
+
+   Like any other network protocol, swift faces a common set of
+   security challenges. An implementation must consider the possibility
+   of buffer overruns, DoS attacks and manipulation (e.g. reflection
+   attacks). Any guarantee of privacy seems unlikely, as the user is
+   exposing its IP address to the peers. A probable exception is the
+   case of the user being hidden behind a public NAT or proxy.
+
+
+8. Extensibility
+
+8.1. 32 bit vs 64 bit
+
+   While in principle the protocol supports bigger (>1TB) files, all the
+   counters mentioned are 32-bit. This is an optimization, as using
+   64-bit numbers on the wire may cost ~2% of practical overhead. The
+   64-bit version of every message has a typeid of 64+t, e.g. typeid 68
+   (0x44) for the 64-bit hash message:
+
+      44 000000000000000E 01234567890ABCDEF1234567890ABCDEF1234567
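+
+   A non-normative sketch of the 32- vs 64-bit encoding of a HASH
+   message (the function name is ours):
+
+      import struct
+
+      def encode_hash(bin_nr, digest, wide=False):
+          # typeid 0x04 with a 32-bit bin number, or 64 + 4 = 68
+          # (0x44) with a 64-bit bin number; big-endian throughout.
+          if wide:
+              return struct.pack('>BQ', 0x44, bin_nr) + digest
+          return struct.pack('>BI', 0x04, bin_nr) + digest
+
+      # Reproduces the example above: typeid 68, 64-bit bin 14 (0x0E).
+      assert encode_hash(14, b'', wide=True).hex() == \
+          '44000000000000000e'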
End-user peers are expected to + use weaker-than-TCP (less-than-best-effort) congestion control, such + as [LEDBAT], to minimize seeding counter-incentives. + + +8.4. Piece Picking Algorithms + + Piece picking entirely depends on the receiving peer. The sender peer + is made aware of preferred pieces by means of HINT messages. In + some scenarios it may be beneficial to allow the sender to ignore + those hints and send unrequested data. + + +8.5. Reciprocity Algorithms + + Reciprocity algorithms are the sole responsibility of the sender + peer. Reciprocal intentions of the sender are not manifested by + separate messages (like BitTorrent's CHOKE/UNCHOKE), as such + signaling does not guarantee anything anyway (the "snubbing" syndrome). + + +8.6. Different crypto/hashing schemes + + Once a flavor of swift needs to use a different crypto scheme + (e.g., SHA-256), a message should be allocated for that. As the root + hash is supplied in the handshake message, the crypto scheme in use + will be known from the very beginning. As the root hash is the + content's identifier, different crypto schemes cannot be mixed in + the same swarm; different swarms may distribute the same content + using different crypto. + + +9. Rationale + + Historically, the Internet was based on end-to-end unicast and, + considering the failure of multicast, mass content delivery was + addressed by different technologies, which ultimately boiled down to + maintaining and coordinating distributed replicas. On one hand, + downloading from a nearby well-provisioned replica is somewhat faster + and/or cheaper; on the other hand, it requires coordinating multiple + parties (the data source, mirrors/CDN sites/peers, consumers). As the + Internet progresses to richer and richer content, the overhead of + peer/replica coordination becomes dwarfed by the mass of the download + itself. Thus, the niche for multiparty transfers expands. Still, + current, relevant technologies are tightly coupled to a single use + case or even to the infrastructure of a particular corporation. The + mission of our + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 33] + +Internet-Draft swift December 19, 2011 + + + project is to create a generic content-centric multiparty transport + protocol to allow seamless, effortless data dissemination on the Net. + + TABLE 1. Use cases. + + | mirror-based peer-assisted peer-to-peer + ------+---------------------------------------------------- + data | SunSITE CacheLogic VelociX BitTorrent + VoD | YouTube Azureus(+seedboxes) SwarmPlayer + live | Akamai Str. Octoshape, Joost PPlive + + The protocol must be designed for maximum genericity, thus focusing + on the very core of the mission, and must contain no magic constants + and no hardwired policies. Effectively, it is a set of messages that + allows data to be securely retrieved from whatever sources are + available, in parallel. Ideally, the protocol must be able to run + over IP as an independent transport protocol. Practically, it must + run over UDP and TCP. + + +9.1. Design Goals + + The technical focus of the swift protocol is to find the simplest + solution involving the minimum set of primitives, still being + sufficient to implement all the targeted usecases (see Table 1), + suitable for use in general-purpose software and hardware (e.g. a web + browser or a set-top box). The five design goals for the protocol + are: + + 1. Embeddable kernel-ready protocol. + 2. Embrace real-time streaming, in- and out-of-order download. + 3. Have short warm-up times. + 4. Traverse NATs transparently. + 5.
Be extensible, allow for a multitude of implementations over + diverse media, allow for drop-in pluggability. + + The objectives are referenced as (1)-(5). + + The goal of embedding (1) means that the protocol must be ready to + function as a regular transport protocol inside a set-top box, a mobile + device, a browser and/or in the kernel space. Thus, the protocol must + have a light footprint, preferably lighter than TCP's, in spite of the + necessity to support numerous ongoing connections as well as to + constantly probe the network for new possibilities. The practical + overhead for TCP is estimated at 10KB per connection [HTTP1MLN]. We + aim at <1KB per peer connected. Also, the amount of code necessary to + make a basic implementation must be limited to 10KLoC of C. + Otherwise, besides the resource considerations, maintaining and + auditing the code might become prohibitively expensive. + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 34] + +Internet-Draft swift December 19, 2011 + + + The support for all three basic usecases of real-time streaming, + in-order download and out-of-order download (2) is necessary for the + manifested goal of THE multiparty transport protocol, as no single + usecase dominates the others. + + The objective of short warm-up times (3) is a matter of end-user + experience; the playback must start as soon as possible. Thus any + unnecessary initialization roundtrips and warm-up cycles must be + eliminated from the transport layer. + + Transparent NAT traversal (4) is absolutely necessary, as at least 60% + of today's users are hidden behind NATs. NATs severely affect + connection patterns in P2P networks, thus impacting performance and + fairness [MOLNAT,LUCNAT]. + + The protocol must define a common message set (5) to be used by + implementations; it must not hardwire any magic constants, algorithms + or schemes beyond that. For example, an implementation is free to use + its own congestion control, connection rotation or reciprocity + algorithms. Still, the protocol must enable such algorithms by + supplying sufficient information. For example, trackerless peer + discovery needs peer exchange messages, scavenger congestion control + may need timestamped acknowledgments, etc. + + +9.2. Not TCP + + To a large extent, swift's design is defined by the cornerstone + decision to get rid of TCP and not to reinvent any TCP-like + transports on top of UDP or otherwise. The requirements (1), (4), (5) + make TCP a bad choice due to its high per-connection footprint, + complex and less reliable NAT traversal, and fixed predefined + congestion control algorithms. Besides that, an important + consideration is that no block of TCP functionality turns out to be + useful for the general case of swarming downloads. Namely, + 1. in-order delivery is less useful, as peer-to-peer protocols + often employ out-of-order delivery themselves and in either case + out-of-order data can still be stored; + 2. reliable delivery/retransmissions are not useful, because + the same data might be requested from different sources; as + in-order delivery is not required, packet losses might be + patched up lazily, without stopping the flow of data; + 3. flow control is not necessary, as the receiver is much less + likely to be saturated with data and, even if so, that + situation is perfectly detected by the congestion control; + 4. TCP congestion control is less useful, as custom congestion + control is often needed [LEDBAT].
+ In general, TCP is built and optimized for a different usecase than + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 35] + +Internet-Draft swift December 19, 2011 + + + we have with swarming downloads. The abstraction of a "data pipe" + delivering some stream of bytes in an orderly fashion from one peer + to another turned out to be irrelevant. In even more general terms, TCP + supports the abstraction of pairwise _conversations_, while we need + a content-centric protocol built around the abstraction of a cloud + of participants disseminating the same _data_ in any way and order + that is convenient to them. + + Thus, the choice is to design a protocol that runs on top of + unreliable datagrams. Instead of reimplementing TCP, we create a + datagram-based protocol, completely dropping the sequential data + stream abstraction. Removing unnecessary features of TCP makes it + easier both to implement the protocol and to verify it; numerous TCP + vulnerabilities were caused by the complexity of the protocol's state + machine. Still, we reserve the possibility to run swift on top of TCP + or HTTP. + + Pursuing the maxim of making things as simple as possible but not + simpler, we fit the protocol into the constraints of the transport + layer by dropping all the transmission's technical metadata except + for the content's root hash (compare that to metadata files used in + BitTorrent). Elimination of technical metadata is achieved through + the use of Merkle [MERKLE,ABMRKL] hash trees, exclusively single-file + transfers and other techniques. As a result, a transfer is identified + and bootstrapped by its root hash only. + + To avoid the usual layering of positive/negative acknowledgment + mechanisms we introduce a scale-invariant acknowledgment system (see + Sec. 4.4). The system allows for aggregation and a variable level of + detail in requesting, announcing and acknowledging data, and serves + in-order and out-of-order retrieval with equal ease. Besides the + protocol's footprint, we also aim at lowering the size of a minimal + useful interaction. Once a single datagram is received, it must be + checked for data integrity, and then either dropped or accepted, + consumed and relayed. + + + +9.3. Generic Acknowledgments + + Generic acknowledgments came out of the need to simplify the + data addressing/requesting/acknowledging mechanics, which tend + to become overly complex and multilayered with the conventional + approach. Take the BitTorrent+TCP tandem for example: + + 1. The basic data unit is a byte of content in a file. + 2. BitTorrent's highest-level unit is a "torrent", physically a + byte range resulting from the concatenation of content files. + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 36] + +Internet-Draft swift December 19, 2011 + + + 3. A torrent is divided into "pieces", typically about a thousand + of them. Pieces are used to communicate progress to other + peers. Pieces are also basic data integrity units, as the torrent's + metadata includes a SHA1 hash for every piece. + 4. The actual data transfers are requested and made in 16KByte + units, named "blocks" or "chunks". + 5. Still, one layer lower, TCP also operates with bytes and byte + offsets which are totally different from the torrent's bytes and + offsets, as TCP considers cumulative byte offsets for all content + sent by a connection, be it data, metadata or commands. + 6. Finally, another layer lower, IP transfers independent datagrams + (typically around 1.5 kilobyte), which TCP then reassembles into + continuous streams.
+ + Obviously, such addressing schemes need lots of mappings: from + piece number and block to file(s) and offset(s) to TCP sequence + numbers to the actual packets, and the other way around. Lots of + complexity is introduced by the mismatch of bounds: packet bounds are + different from file, block or hash/piece bounds. The picture is + typical for a codebase which was historically layered. + + To simplify this aspect, we employ a generic content addressing + scheme based on binary intervals, or "bins" for short. + + + +Acknowledgements + + Arno Bakker and Victor Grishchenko are partially supported by the + P2P-Next project (http://www.p2p-next.org/), a research project + supported by the European Community under its 7th Framework Programme + (grant agreement no. 216217). The views and conclusions contained + herein are those of the authors and should not be interpreted as + necessarily representing the official policies or endorsements, + either expressed or implied, of the P2P-Next project or the European + Commission. + + The swift protocol was designed by Victor Grishchenko at Technische + Universiteit Delft. The authors would like to thank the following + people for their contributions to this draft: Mihai Capota, Raul + Jimenez, Flutra Osmani, Riccardo Petrocco, Johan Pouwelse, and Raynor + Vliegendhart. + + +References + +[RFC2119] S. Bradner, "Key words for use in RFCs to Indicate + Requirement Levels", BCP 14, RFC 2119, March 1997. +[HTTP1MLN] Richard Jones. "A Million-user Comet Application with + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 37] + +Internet-Draft swift December 19, 2011 + + + Mochiweb", Part 3. http://www.metabrew.com/article/ + a-million-user-comet-application-with-mochiweb-part-3 +[MOLNAT] J.J.D. Mol, J.A. Pouwelse, D.H.J. Epema and H.J. Sips: + "Free-riding, Fairness, and Firewalls in P2P File-Sharing", + Proc. Eighth International Conference on Peer-to-Peer Computing + (P2P '08), Aachen, Germany, 8-11 Sept. 2008, pp. 301-310. +[LUCNAT] L. D'Acunto and M. Meulpolder and R. Rahman and J.A. + Pouwelse and H.J. Sips. "Modeling and Analyzing the Effects + of Firewalls and NATs in P2P Swarming Systems". In Proc. of + IEEE IPDPS (HotP2P), Atlanta, USA, April 23, 2010. +[BINMAP] V. Grishchenko, J. Pouwelse: "Binmaps: hybridizing bitmaps + and binary trees". Technical Report PDS-2011-005, Parallel and + Distributed Systems Group, Fac. of Electrical Engineering, + Mathematics, and Computer Science, Delft University of Technology, + The Netherlands, April 2009. +[SNP] B. Ford, P. Srisuresh, D. Kegel: "Peer-to-Peer Communication + Across Network Address Translators", + http://www.brynosaurus.com/pub/net/p2pnat/ +[FIPS180-2] + Federal Information Processing Standards Publication 180-2: + "Secure Hash Standard", August 1, 2002. +[MERKLE] Merkle, R. "Secrecy, Authentication, and Public Key Systems", + Ph.D. thesis, Dept. of Electrical Engineering, Stanford University, + CA, USA, 1979, pp. 40-45. +[ABMRKL] Arno Bakker: "Merkle hash torrent extension", BitTorrent + Enhancement Proposal 30, Mar 2009. + http://bittorrent.org/beps/bep_0030.html +[CUBIC] Injong Rhee and Lisong Xu: "CUBIC: A New TCP-Friendly + High-Speed TCP Variant", Proc. Third International Workshop + on Protocols for Fast Long-Distance Networks (PFLDnet), Lyon, + France, Feb 2005. +[LEDBAT] S. Shalunov et al. "Low Extra Delay Background Transport + (LEDBAT)", IETF Internet-Draft draft-ietf-ledbat-congestion + (work in progress), Oct 2011.
+ http://datatracker.ietf.org/doc/draft-ietf-ledbat-congestion/ +[TIT4TAT] Bram Cohen: "Incentives Build Robustness in BitTorrent", + Proc. 1st Workshop on Economics of Peer-to-Peer Systems, Berkeley, + CA, USA, Jun 2003. +[BITTORRENT] B. Cohen, "The BitTorrent Protocol Specification", + February 2008, http://www.bittorrent.org/beps/bep_0003.html +[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. + Jacobson, "RTP: A Transport Protocol for Real-Time + Applications", STD 64, RFC 3550, July 2003. +[RFC3711] M. Baugher, D. McGrew, M. Naslund, E. Carrara, K. Norrman, + "The Secure Real-time Transport Protocol (SRTP)", RFC 3711, March + 2004. +[RFC5389] Rosenberg, J., Mahy, R., Matthews, P., and D. Wing, + "Session Traversal Utilities for NAT (STUN)", RFC 5389, October 2008. + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 38] + +Internet-Draft swift December 19, 2011 + + +[RFC2616] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, + P. Leach, T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", + RFC 2616, June 1999. +[I-D.ietf-ppsp-reqs] Zong, N., Zhang, Y., Pascual, V., Williams, C., + and L. Xiao, "P2P Streaming Protocol (PPSP) Requirements", + draft-ietf-ppsp-reqs-05 (work in progress), October 2011. +[PPSPCHART] M. Stiemerling et al. "Peer to Peer Streaming Protocol (ppsp) + Description of Working Group", + http://datatracker.ietf.org/wg/ppsp/charter/ +[PERMIDS] A. Bakker et al. "Next-Share Platform M8--Specification + Part", App. C. P2P-Next project deliverable D4.0.1 (revised), + June 2009. + http://www.p2p-next.org/download.php?id=E7750C654035D8C2E04D836243E6526E +[PUPPETCAST] A. Bakker and M. van Steen. "PuppetCast: A Secure Peer + Sampling Protocol". Proceedings 4th Annual European Conference on + Computer Network Defense (EC2ND'08), pp. 3-10, Dublin, Ireland, + 11-12 December 2008. +[CLOSED] N. Borch, K. Michell, I. Arntzen, and D. Gabrijelcic: "Access + control to BitTorrent swarms using closed swarms". In Proceedings + of the 2010 ACM workshop on Advanced video streaming techniques + for peer-to-peer networks and social networking (AVSTP2P '10). + ACM, New York, NY, USA, pp. 25-30. + http://doi.acm.org/10.1145/1877891.1877898 +[ECLIPSE] E. Sit and R. Morris, "Security Considerations for + Peer-to-Peer Distributed Hash Tables", IPTPS '01: Revised Papers + from the First International Workshop on Peer-to-Peer Systems, pp. + 261-269, Springer-Verlag, London, UK, 2002. +[SECDHTS] G. Urdaneta, G. Pierre, M. van Steen, "A Survey of DHT + Security Techniques", ACM Computing Surveys, vol. 43(2), June 2011. +[SWIFTIMPL] V. Grishchenko, et al. "Swift M40 reference implementation", + http://swarmplayer.p2p-next.org/download/Next-Share-M40.tar.bz2 + (subdirectory Next-Share/TUD/swift-trial-r2242/), July 2011. +[CCNWIKI] http://en.wikipedia.org/wiki/Content-centric_networking +[HAC01] A.J. Menezes, P.C. van Oorschot and S.A. Vanstone. "Handbook of + Applied Cryptography", CRC Press, October 1996 (Fifth Printing, + August 2001). +[JIM11] R. Jimenez, F. Osmani, and B. Knutsson. "Sub-Second Lookups on + a Large-Scale Kademlia-Based Overlay". 11th IEEE International + Conference on Peer-to-Peer Computing 2011, Kyoto, Japan, Aug. 2011. + + +Authors' addresses + + A.
Bakker + Technische Universiteit Delft + Department EWI/ST/PDS + Room HB 9.160 + Mekelweg 4 + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 39] + +Internet-Draft swift December 19, 2011 + + + 2628CD Delft + The Netherlands + + Email: arno@cs.vu.nl + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +Grishchenko and Bakker Expires June 21, 2012 [Page 40] diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/index.html tribler-6.2.0/Tribler/SwiftEngine/doc/index.html --- tribler-6.2.0/Tribler/SwiftEngine/doc/index.html 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/index.html 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,164 @@ + + + + + + + swift: the multiparty transport protocol + + + + + + + +
+ + + + + + + +
+ +

Turn the Net into a single data cloud

+

Current Internet protocols are geared for 1:1 client/server communication. We expanded the TCP/IP protocol suite with swarming. Our protocol is designed to be capable of integration into browsers or operating systems and is able to serve 95% of current Internet traffic.

+

swift is a multiparty transport protocol. Its mission is to disseminate content among a swarm of peers. It might be understood as BitTorrent at the transport layer.

+

The TCP+BitTorrent stack consists of 60+90K lines of code (according to SLOCCount). With novel datastructures and refactoring we managed to implement swift in a mere 4,000 lines of cross-platform C++ code. The libswift library is licensed under LGPL; it runs on Mac OS X, Windows and a variety of Unices; it uses UDP with LEDBAT congestion control. Currently maximum throughput is 400Mbps, but we are working on that: our next target is 1 Gbps. The library is delivered as a part of P2P-Next, funded by the European Union Seventh Framework Programme.

+ +
+ + + +
+ +

Ideas

+ +

As wise people say, the Internet was initially built for remotely connecting scientists to expensive supercomputers (whose computing power was comparable to modern cell phones). Thus, its protocols supported the abstraction of conversation. Currently, however, the Internet is mostly used for disseminating content – and this mismatch definitely creates some problems.

+

The swift protocol is a content-centric multiparty transport protocol. Basically, it answers one and only one question: 'Here is a hash! Give me data for it!'. Ultimately swift aims at the abstraction of the Internet as a single big data cloud. Such entities as storage, servers and connections are abstracted away and are virtually invisible at the API layer. Given a hash, the data is received from whatever source available and data integrity is checked cryptographically with Merkle hash trees.

+
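To make that abstraction concrete, here is a minimal sketch of what such an API boils down to, in C++. The names and signatures are ours for illustration only; this is not the actual libswift interface:

    #include <cstdint>
    #include <string>
    #include <vector>

    // 40 hex digits of the Merkle root hash: the content's only name.
    using RootHash = std::string;

    // "Here is a hash! Give me data for it!" Where the bytes come from
    // (peers, caches, a local replica) is hidden below this single call.
    // Stubbed here; a real implementation would swarm the download and
    // return only data verified against 'root'.
    std::vector<uint8_t> fetch(const RootHash& root) {
        (void)root;
        return {};
    }

    int main() {
        RootHash root(40, '0');   // placeholder hash, not a real swarm
        std::vector<uint8_t> data = fetch(root);
        (void)data;
    }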

An old Unix adage says: 'free memory is wasted memory'. Once a computer is powered on, there is no benefit in keeping some memory unoccupied. We may extend this principle a bit further:

+
  • free bandwidth is wasted bandwidth
  • free storage is wasted storage.
+

Unless your power budget is really tight, there is no sense in conserving either. Thus, instead of emphasising reciprocity and incentives we focus on code with a lighter footprint, non-intrusive congestion control and automatic disk space management.

+

Currently, most parts of the protocol/library are implemented, pass basic testing and successfully transfer data on real networks. After more rigorous testing, the protocol and the library are expected to be ready for real-life use in December 2010.

+ + + +

Design of the protocol

+ +

Most features of the protocol are defined by its function as a content-centric multiparty transport protocol. It entirely drops TCP's abstraction of sequential reliable data stream delivery: for swift this is redundant, as out-of-order data can still be stored and the same piece of data can always be received from another peer. Being implemented over UDP, the protocol does its best to make every datagram self-contained. In general, pruning of unneeded functions and aggressive layer collapsing greatly simplifies the protocol compared to, for example, the BitTorrent+TCP stack.

+ +

Atomic datagrams, not data stream

+

To achieve per-datagram flexibility of data flow and also to adapt to the unreliable medium (UDP, and, ultimately, IP), the protocol was built around the abstraction of atomic datagrams. Ideally, once received, a datagram is either immediately discarded or permanently accepted, ready to be forwarded to other peers. For the sake of flexibility, most of the protocol's messages are optional. It also has no 'standard' header. Instead, each datagram is a concatenation of zero or more messages. No message ever spans two datagrams. Except for the data pieces themselves, no message is acknowledged or guaranteed to be delivered.

+ +
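To illustrate, a parse loop over one received datagram fits in a few lines of C++. The message ids and fixed layouts below are our reading of the draft's 32-bit message set (HANDSHAKE=0, DATA=1, ACK=2, HAVE=3, HASH=4) and should be taken as assumptions rather than the normative encoding; bounds checks are omitted for brevity:

    #include <cstddef>
    #include <cstdint>

    // Assumed message ids; verify against the protocol draft.
    enum MsgType : uint8_t { HANDSHAKE = 0, DATA = 1, ACK = 2,
                             HAVE = 3, HASH = 4 };

    // One datagram is a bare concatenation of messages: no datagram-wide
    // header, and no message ever spans two datagrams.
    void parse_datagram(const uint8_t* p, size_t len) {
        size_t i = 0;
        while (i < len) {
            switch (p[i++]) {
            case HANDSHAKE: i += 4;      break;  // peer's channel id
            case ACK:
            case HAVE:      i += 4;      break;  // a bin number
            case HASH:      i += 4 + 20; break;  // bin + SHA-1 uncle hash
            case DATA:      i = len;     break;  // bin + payload fill the rest
            default:        return;              // unknown: drop the datagram
            }
        }
    }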

Scale-independent unit system

+

To avoid a multilayered request/acknowledgement system, where every layer basically does the same but for bigger chunks of data – as is the case with BitTorrent+TCP packet-block-piece-file-torrent stacking – swift employs a scale-independent acknowledgement/request system, where data is measured by aligned power-of-2 intervals (so-called bins). All acknowledgements and requests are done in terms of bins.

+ +
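For illustration, the arithmetic behind bins fits in a few lines of C++. We use the in-order numbering from the protocol draft, where a bin at a given layer covers an aligned interval of 2^layer chunks; the function names are ours:

    #include <cstdint>

    // Bin covering chunks [offset * 2^layer, (offset + 1) * 2^layer):
    // leaves get the even numbers, and each subtree root sits between
    // its two children in the numbering.
    constexpr uint64_t bin(unsigned layer, uint64_t offset) {
        return (offset << (layer + 1)) + (1ULL << layer) - 1;
    }

    // Parent of the bin (layer, offset): one layer up, half the offset.
    constexpr uint64_t parent(unsigned layer, uint64_t offset) {
        return bin(layer + 1, offset / 2);
    }

    // One HAVE or ACK naming bin 1 covers chunks 0-1, bin 3 covers
    // chunks 0-3, and so on: the message format never changes with scale.
    static_assert(bin(0, 0) == 0 && bin(0, 1) == 2, "single chunks");
    static_assert(bin(1, 0) == 1 && bin(2, 0) == 3, "aligned pairs/quads");
    static_assert(parent(0, 3) == 5, "chunk 3 sits under bin 5");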

Datagram-level integrity checks

+

swift builds Merkle hash trees down to every single packet (1KB of data). Once data is transmitted, all uncle hashes necessary for verification are prepended to the same datagram. As the receiver constantly remembers old hashes, the average number of 'new' hashes which have to be transmitted is small: normally around one per packet of data.

+ +
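A sketch of the sender's side of this, reusing the bin arithmetic above (names ours): for the chunk at index c in a tree with 2^height leaves, the required 'uncles' are the siblings of every node on the path from that leaf to the root. A real sender skips uncles the receiver already holds, which is why the steady-state cost is about one hash per packet.

    #include <cstdint>
    #include <vector>

    // Bin number of the sibling subtree at 'layer' on chunk c's path.
    uint64_t uncle_bin(uint64_t c, unsigned layer) {
        uint64_t offset = (c >> layer) ^ 1;            // flip to the sibling
        return (offset << (layer + 1)) + (1ULL << layer) - 1;
    }

    // Every hash needed to climb from chunk c up to the root.
    std::vector<uint64_t> uncle_bins(uint64_t c, unsigned height) {
        std::vector<uint64_t> uncles;
        for (unsigned layer = 0; layer < height; ++layer)
            uncles.push_back(uncle_bin(c, layer));
        return uncles;
    }

    // With 4 chunks (height 2), chunk 3 (bin 6) needs bins 4 and 1:
    // hash(6)+hash(4) give hash(5); hash(5)+hash(1) give the root, bin 3.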

NAT traversal by design

+

The only method of peer discovery in swift is PEX: a third peer initiates a connection between two of its contacted peers. The protocol's handshake is engineered to perform simple NAT hole punching transparently if needed.

+ +
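Schematically (names ours, not actual libswift code): peer C, holding live channels to both A and B, advertises each one's public endpoint to the other, and both sides immediately send handshakes.

    #include <cstdio>

    struct Endpoint { const char* addr; };  // public IP:port as C sees it

    // Stand-in for "send a PEX message on the existing channel to dst,
    // advertising 'peer' as a candidate".
    void send_pex_add(Endpoint dst, Endpoint peer) {
        std::printf("PEX_ADD to %s: try %s\n", dst.addr, peer.addr);
    }

    // Each introduced side then fires a HANDSHAKE datagram at the
    // advertised address; the outgoing packet opens a mapping in the
    // sender's own NAT, so whichever handshake arrives second finds
    // the hole already punched.
    void introduce(Endpoint a, Endpoint b) {
        send_pex_add(a, b);
        send_pex_add(b, a);
    }

    int main() { introduce({"192.0.2.1:7777"}, {"198.51.100.2:7777"}); }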

Subsetting of the protocol

+

Different kinds of peers might implement different subsets of messages; a 'tracker', for example, uses the same protocol as every peer, except it only accepts the HANDSHAKE message and the HASH message (to let peers explain what content they are interested in), while returning only HANDSHAKE and PEX_ADD messages (to return the list of peers). Different subsets of accepted/emitted messages may correspond to push/pull peers, plain trackers, hash storing trackers, live streaming peers, etc.

+ +
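In code, such subsetting amounts to little more than a whitelist. The message ids here are the same assumptions as in the datagram-parsing sketch above, and PEX_ADD = 5 is likewise our guess, not normative:

    #include <cstdint>

    enum : uint8_t { MSG_HANDSHAKE = 0, MSG_HASH = 4, MSG_PEX_ADD = 5 };

    // A tracker speaks the common wire protocol but accepts and emits
    // only the few messages it understands; the rest it simply ignores.
    bool tracker_accepts(uint8_t msg) {      // what peers may send it
        return msg == MSG_HANDSHAKE || msg == MSG_HASH;
    }
    bool tracker_emits(uint8_t msg) {        // what it may send back
        return msg == MSG_HANDSHAKE || msg == MSG_PEX_ADD;
    }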

Push AND pull

+

The protocol allows both for PUSH (sender decides what to send) and PULL (receiver explicitly requests the data). PUSH is normally used as a fallback if PULL fails; also, the sender may ignore requests and send any data it finds convenient to send. Merkle hash trees allow this flexibility without causing security implications.

+ +
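A minimal sketch (ours, not libswift code) of a sender honoring this: serve outstanding HINTs first, and fall back to pushing a chunk of its own choosing when no requests are pending.

    #include <cstdint>
    #include <deque>

    // Returns the bin to transmit next: a hinted one if any request is
    // outstanding (PULL), otherwise the sender's own pick (PUSH).
    uint64_t next_bin_to_send(const std::deque<uint64_t>& hints,
                              uint64_t own_pick) {
        return hints.empty() ? own_pick : hints.front();
    }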

No transmission metadata

+

Unlike BitTorrent, swift employs no transmission metadata (the .torrent file). The only bootstrap information is the root hash; the file size is derived from the hash tree once the first packet is received, and the hash tree itself is reconstructed incrementally in the process of download.

+ + + +

Specifications and documentation

+ + + + + +

Downloads

+ + + + + +

Frequently asked questions

+ +

Well, why swift?

+

That name has served well for many other protocols; we hope it will serve well for ours. It may be thought of as a meta-joke. The working name for the protocol was 'VicTorrent'. We also insist on lowercase italic swift to keep the name formally unique (for some definition of unique).

+ +

How is it different from...

+ +

...TCP?

+

TCP emulates reliable in-order delivery ("data pipe") over chaotic unreliable multi-hop networks. TCP has no idea what data it is dealing with, as the data is passed from the userspace. In our case, the data is fixed in advance and many peers participate in distributing the same data. Order of delivery is of little importance and unreliability is naturally compensated for by redundancy. Thus, many functions of TCP turn out to be redundant. The only function of TCP that is also critical for swift is congestion control, but... we need our own custom congestion control! Thus, we did not use TCP.

+

That led both to hurdles and to some savings. As one example, every TCP connection needs to maintain buffers for the data that has left the sender's userspace but not yet arrived at the receiver's userspace. As we know that we are dealing with the same fixed data, we don't need to maintain per-connection buffers.

+ +

...UDP?

+

UDP, which is the thinnest wrapper around IP, is our choice of underlying protocol. From the standpoint of ideology, a transport protocol should be implemented over IP, but unfortunately that causes some chicken-and-egg problems, like a need to get into the kernel to get deployments, and a need to get deployments to be accepted into the kernel. UDP is also quite nice with regard to NAT penetration.

+ +

...BitTorrent?

+

BitTorrent is an application-level protocol and quite a heavy one. We focused on fitting our protocol into the restrictions of the transport layer, assuming that the protocol might eventually be included in operating system kernels. For example, we stripped the protocol of any transmission metadata (the .torrent file), leaving a file's root hash as the only parameter.

+ +

...µTorrent's µTP?

+

Historically, BitTorrent required lots of adaptations to its underlying transport. First and foremost, TCP is unable to prioritize traffic, so BitTorrent needed to coerce users somehow into tolerating the inconveniences of seeding. That caused tit-for-tat and, to a significant degree, rarest-first. Another example is the four-upload-slots limitation. (Apparently some architectural decisions in BitTorrent were dictated by the oddities of Windows 95, but... never mind.)

+

Eventually, BitTorrent developers came to the conclusion that not annoying the user in the first place was probably a better stimulus. So they came up with the LEDBAT congestion control algorithm (Low Extra Delay Background Transport). LEDBAT allows a peer to seed without interfering with regular traffic (in other words, without slowing down the browser). To integrate the novel congestion control algorithm into BitTorrent incrementally, BitTorrent Inc had to develop a TCP-alike transport named µTP. The swift project (then named VicTorrent) began by trying to understand what would happen if BitTorrent was stripped of any Win95-specific, TCP-specific or Python-specific workarounds. As it turned out, not much was left.

+ +

...Van Jacobson's CCN?

+

Van Jacobson's team in PARC is doing exploratory research on content-centric networking. While BitTorrent works at layer 5 (application), we go to layer 4 (transport). PARC people are bold enough to go to layer 3 and to propose a complete replacement for the entire TCP/IP world. That is certainly a compelling vision, but we focus on the near future (<10 years) while CCNx is a much more ambitious rework.

+ +

...DCCP?

+

This question arises quite frequently, as DCCP is a congestion-controlled datagram transport. The option of implementing swift over DCCP was considered, but the inconvenience of working with an esoteric transport was not compensated for by DCCP's added value, which amounts to a single readily implemented mode of congestion control. Architectural restrictions imposed by DCCP were also found to be a major inconvenience. Last but not least, currently only Linux supports DCCP at the kernel level.

+ +

...SCTP?

+

SCTP is a protocol that fixes some shortcomings of TCP, mostly in the context of telephony. As was the case with DCCP, the features of SCTP were of little interest to us, while the things we really needed were missing from SCTP. Still, we must admit that we employ quite a similar message-oriented model (as opposed to TCP's stream orientation).

+ +
+ + + +
+ +

Who we are

+ + +

Contacts & feedback

+ + + +
+ + + +
+ + + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.bst tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.bst --- tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.bst 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.bst 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,2425 @@ +%% +%% IEEEtran.bst +%% BibTeX Bibliography Style file for IEEE Journals and Conferences (unsorted) +%% Version 1.13 (2008/09/30) +%% +%% Copyright (c) 2003-2008 Michael Shell +%% +%% Original starting code base and algorithms obtained from the output of +%% Patrick W. Daly's makebst package as well as from prior versions of +%% IEEE BibTeX styles: +%% +%% 1. Howard Trickey and Oren Patashnik's ieeetr.bst (1985/1988) +%% 2. Silvano Balemi and Richard H. Roy's IEEEbib.bst (1993) +%% +%% Support sites: +%% http://www.michaelshell.org/tex/ieeetran/ +%% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/ +%% and/or +%% http://www.ieee.org/ +%% +%% For use with BibTeX version 0.99a or later +%% +%% This is a numerical citation style. +%% +%%************************************************************************* +%% Legal Notice: +%% This code is offered as-is without any warranty either expressed or +%% implied; without even the implied warranty of MERCHANTABILITY or +%% FITNESS FOR A PARTICULAR PURPOSE! +%% User assumes all risk. +%% In no event shall IEEE or any contributor to this code be liable for +%% any damages or losses, including, but not limited to, incidental, +%% consequential, or any other damages, resulting from the use or misuse +%% of any information contained here. +%% +%% All comments are the opinions of their respective authors and are not +%% necessarily endorsed by the IEEE. +%% +%% This work is distributed under the LaTeX Project Public License (LPPL) +%% ( http://www.latex-project.org/ ) version 1.3, and may be freely used, +%% distributed and modified. A copy of the LPPL, version 1.3, is included +%% in the base LaTeX documentation of all distributions of LaTeX released +%% 2003/12/01 or later. +%% Retain all contribution notices and credits. +%% ** Modified files should be clearly indicated as such, including ** +%% ** renaming them and changing author support contact information. ** +%% +%% File list of work: IEEEabrv.bib, IEEEfull.bib, IEEEexample.bib, +%% IEEEtran.bst, IEEEtranS.bst, IEEEtranSA.bst, +%% IEEEtranN.bst, IEEEtranSN.bst, IEEEtran_bst_HOWTO.pdf +%%************************************************************************* +% +% +% Changelog: +% +% 1.00 (2002/08/13) Initial release +% +% 1.10 (2002/09/27) +% 1. Corrected minor bug for improperly formed warning message when a +% book was not given a title. Thanks to Ming Kin Lai for reporting this. +% 2. Added support for CTLname_format_string and CTLname_latex_cmd fields +% in the BST control entry type. +% +% 1.11 (2003/04/02) +% 1. Fixed bug with URLs containing underscores when using url.sty. Thanks +% to Ming Kin Lai for reporting this. +% +% 1.12 (2007/01/11) +% 1. Fixed bug with unwanted comma before "et al." when an entry contained +% more than two author names. Thanks to Pallav Gupta for reporting this. +% 2. Fixed bug with anomalous closing quote in tech reports that have a +% type, but without a number or address. Thanks to Mehrdad Mirreza for +% reporting this. +% 3. Use braces in \providecommand in begin.bib to better support +% latex2html. 
TeX style length assignments OK with recent versions +% of latex2html - 1.71 (2002/2/1) or later is strongly recommended. +% Use of the language field still causes trouble with latex2html. +% Thanks to Federico Beffa for reporting this. +% 4. Added IEEEtran.bst ID and version comment string to .bbl output. +% 5. Provide a \BIBdecl hook that allows the user to execute commands +% just prior to the first entry. +% 6. Use default urlstyle (is using url.sty) of "same" rather than rm to +% better work with a wider variety of bibliography styles. +% 7. Changed month abbreviations from Sept., July and June to Sep., Jul., +% and Jun., respectively, as IEEE now does. Thanks to Moritz Borgmann +% for reporting this. +% 8. Control entry types should not be considered when calculating longest +% label width. +% 9. Added alias www for electronic/online. +% 10. Added CTLname_url_prefix control entry type. +% +% 1.13 (2008/09/30) +% 1. Fixed bug with edition number to ordinal conversion. Thanks to +% Michael Roland for reporting this and correcting the algorithm. + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%% DEFAULTS FOR THE CONTROLS OF THE BST STYLE %% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +% These are the defaults for the user adjustable controls. The values used +% here can be overridden by the user via IEEEtranBSTCTL entry type. + +% NOTE: The recommended LaTeX command to invoke a control entry type is: +% +%\makeatletter +%\def\bstctlcite{\@ifnextchar[{\@bstctlcite}{\@bstctlcite[@auxout]}} +%\def\@bstctlcite[#1]#2{\@bsphack +% \@for\@citeb:=#2\do{% +% \edef\@citeb{\expandafter\@firstofone\@citeb}% +% \if@filesw\immediate\write\csname #1\endcsname{\string\citation{\@citeb}}\fi}% +% \@esphack} +%\makeatother +% +% It is called at the start of the document, before the first \cite, like: +% \bstctlcite{IEEEexample:BSTcontrol} +% +% IEEEtran.cls V1.6 and later does provide this command. + + + +% #0 turns off the display of the number for articles. +% #1 enables +FUNCTION {default.is.use.number.for.article} { #1 } + + +% #0 turns off the display of the paper and type fields in @inproceedings. +% #1 enables +FUNCTION {default.is.use.paper} { #1 } + + +% #0 turns off the forced use of "et al." +% #1 enables +FUNCTION {default.is.forced.et.al} { #0 } + +% The maximum number of names that can be present beyond which an "et al." +% usage is forced. Be sure that num.names.shown.with.forced.et.al (below) +% is not greater than this value! +% Note: There are many instances of references in IEEE journals which have +% a very large number of authors as well as instances in which "et al." is +% used profusely. +FUNCTION {default.max.num.names.before.forced.et.al} { #10 } + +% The number of names that will be shown with a forced "et al.". +% Must be less than or equal to max.num.names.before.forced.et.al +FUNCTION {default.num.names.shown.with.forced.et.al} { #1 } + + +% #0 turns off the alternate interword spacing for entries with URLs. +% #1 enables +FUNCTION {default.is.use.alt.interword.spacing} { #1 } + +% If alternate interword spacing for entries with URLs is enabled, this is +% the interword spacing stretch factor that will be used. For example, the +% default "4" here means that the interword spacing in entries with URLs can +% stretch to four times normal. Does not have to be an integer. 
Note that +% the value specified here can be overridden by the user in their LaTeX +% code via a command such as: +% "\providecommand\BIBentryALTinterwordstretchfactor{1.5}" in addition to +% that via the IEEEtranBSTCTL entry type. +FUNCTION {default.ALTinterwordstretchfactor} { "4" } + + +% #0 turns off the "dashification" of repeated (i.e., identical to those +% of the previous entry) names. IEEE normally does this. +% #1 enables +FUNCTION {default.is.dash.repeated.names} { #1 } + + +% The default name format control string. +FUNCTION {default.name.format.string}{ "{f.~}{vv~}{ll}{, jj}" } + + +% The default LaTeX font command for the names. +FUNCTION {default.name.latex.cmd}{ "" } + + +% The default URL prefix. +FUNCTION {default.name.url.prefix}{ "[Online]. Available:" } + + +% Other controls that cannot be accessed via IEEEtranBSTCTL entry type. + +% #0 turns off the terminal startup banner/completed message so as to +% operate more quietly. +% #1 enables +FUNCTION {is.print.banners.to.terminal} { #1 } + + + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%% FILE VERSION AND BANNER %% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +FUNCTION{bst.file.version} { "1.13" } +FUNCTION{bst.file.date} { "2008/09/30" } +FUNCTION{bst.file.website} { "http://www.michaelshell.org/tex/ieeetran/bibtex/" } + +FUNCTION {banner.message} +{ is.print.banners.to.terminal + { "-- IEEEtran.bst version" " " * bst.file.version * + " (" * bst.file.date * ") " * "by Michael Shell." * + top$ + "-- " bst.file.website * + top$ + "-- See the " quote$ * "IEEEtran_bst_HOWTO.pdf" * quote$ * " manual for usage information." * + top$ + } + { skip$ } + if$ +} + +FUNCTION {completed.message} +{ is.print.banners.to.terminal + { "" + top$ + "Done." + top$ + } + { skip$ } + if$ +} + + + + +%%%%%%%%%%%%%%%%%%%%%% +%% STRING CONSTANTS %% +%%%%%%%%%%%%%%%%%%%%%% + +FUNCTION {bbl.and}{ "and" } +FUNCTION {bbl.etal}{ "et~al." } +FUNCTION {bbl.editors}{ "eds." } +FUNCTION {bbl.editor}{ "ed." } +FUNCTION {bbl.edition}{ "ed." } +FUNCTION {bbl.volume}{ "vol." } +FUNCTION {bbl.of}{ "of" } +FUNCTION {bbl.number}{ "no." } +FUNCTION {bbl.in}{ "in" } +FUNCTION {bbl.pages}{ "pp." } +FUNCTION {bbl.page}{ "p." } +FUNCTION {bbl.chapter}{ "ch." } +FUNCTION {bbl.paper}{ "paper" } +FUNCTION {bbl.part}{ "pt." } +FUNCTION {bbl.patent}{ "Patent" } +FUNCTION {bbl.patentUS}{ "U.S." } +FUNCTION {bbl.revision}{ "Rev." } +FUNCTION {bbl.series}{ "ser." } +FUNCTION {bbl.standard}{ "Std." } +FUNCTION {bbl.techrep}{ "Tech. Rep." } +FUNCTION {bbl.mthesis}{ "Master's thesis" } +FUNCTION {bbl.phdthesis}{ "Ph.D. dissertation" } +FUNCTION {bbl.st}{ "st" } +FUNCTION {bbl.nd}{ "nd" } +FUNCTION {bbl.rd}{ "rd" } +FUNCTION {bbl.th}{ "th" } + + +% This is the LaTeX spacer that is used when a larger than normal space +% is called for (such as just before the address:publisher). +FUNCTION {large.space} { "\hskip 1em plus 0.5em minus 0.4em\relax " } + +% The LaTeX code for dashes that are used to represent repeated names. +% Note: Some older IEEE journals used something like +% "\rule{0.275in}{0.5pt}\," which is fairly thick and runs right along +% the baseline. However, IEEE now uses a thinner, above baseline, +% six dash long sequence. 
+FUNCTION {repeated.name.dashes} { "------" } + + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%% PREDEFINED STRING MACROS %% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +MACRO {jan} {"Jan."} +MACRO {feb} {"Feb."} +MACRO {mar} {"Mar."} +MACRO {apr} {"Apr."} +MACRO {may} {"May"} +MACRO {jun} {"Jun."} +MACRO {jul} {"Jul."} +MACRO {aug} {"Aug."} +MACRO {sep} {"Sep."} +MACRO {oct} {"Oct."} +MACRO {nov} {"Nov."} +MACRO {dec} {"Dec."} + + + +%%%%%%%%%%%%%%%%%% +%% ENTRY FIELDS %% +%%%%%%%%%%%%%%%%%% + +ENTRY + { address + assignee + author + booktitle + chapter + day + dayfiled + edition + editor + howpublished + institution + intype + journal + key + language + month + monthfiled + nationality + note + number + organization + pages + paper + publisher + school + series + revision + title + type + url + volume + year + yearfiled + CTLuse_article_number + CTLuse_paper + CTLuse_forced_etal + CTLmax_names_forced_etal + CTLnames_show_etal + CTLuse_alt_spacing + CTLalt_stretch_factor + CTLdash_repeated_names + CTLname_format_string + CTLname_latex_cmd + CTLname_url_prefix + } + {} + { label } + + + + +%%%%%%%%%%%%%%%%%%%%%%% +%% INTEGER VARIABLES %% +%%%%%%%%%%%%%%%%%%%%%%% + +INTEGERS { prev.status.punct this.status.punct punct.std + punct.no punct.comma punct.period + prev.status.space this.status.space space.std + space.no space.normal space.large + prev.status.quote this.status.quote quote.std + quote.no quote.close + prev.status.nline this.status.nline nline.std + nline.no nline.newblock + status.cap cap.std + cap.no cap.yes} + +INTEGERS { longest.label.width multiresult nameptr namesleft number.label numnames } + +INTEGERS { is.use.number.for.article + is.use.paper + is.forced.et.al + max.num.names.before.forced.et.al + num.names.shown.with.forced.et.al + is.use.alt.interword.spacing + is.dash.repeated.names} + + +%%%%%%%%%%%%%%%%%%%%%% +%% STRING VARIABLES %% +%%%%%%%%%%%%%%%%%%%%%% + +STRINGS { bibinfo + longest.label + oldname + s + t + ALTinterwordstretchfactor + name.format.string + name.latex.cmd + name.url.prefix} + + + + +%%%%%%%%%%%%%%%%%%%%%%%%% +%% LOW LEVEL FUNCTIONS %% +%%%%%%%%%%%%%%%%%%%%%%%%% + +FUNCTION {initialize.controls} +{ default.is.use.number.for.article 'is.use.number.for.article := + default.is.use.paper 'is.use.paper := + default.is.forced.et.al 'is.forced.et.al := + default.max.num.names.before.forced.et.al 'max.num.names.before.forced.et.al := + default.num.names.shown.with.forced.et.al 'num.names.shown.with.forced.et.al := + default.is.use.alt.interword.spacing 'is.use.alt.interword.spacing := + default.is.dash.repeated.names 'is.dash.repeated.names := + default.ALTinterwordstretchfactor 'ALTinterwordstretchfactor := + default.name.format.string 'name.format.string := + default.name.latex.cmd 'name.latex.cmd := + default.name.url.prefix 'name.url.prefix := +} + + +% This IEEEtran.bst features a very powerful and flexible mechanism for +% controlling the capitalization, punctuation, spacing, quotation, and +% newlines of the formatted entry fields. (Note: IEEEtran.bst does not need +% or use the newline/newblock feature, but it has been implemented for +% possible future use.) The output states of IEEEtran.bst consist of +% multiple independent attributes and, as such, can be thought of as being +% vectors, rather than the simple scalar values ("before.all", +% "mid.sentence", etc.) used in most other .bst files. +% +% The more flexible and complex design used here was motivated in part by +% IEEE's rather unusual bibliography style. 
For example, IEEE ends the +% previous field item with a period and large space prior to the publisher +% address; the @electronic entry types use periods as inter-item punctuation +% rather than the commas used by the other entry types; and URLs are never +% followed by periods even though they are the last item in the entry. +% Although it is possible to accommodate these features with the conventional +% output state system, the seemingly endless exceptions make for convoluted, +% unreliable and difficult to maintain code. +% +% IEEEtran.bst's output state system can be easily understood via a simple +% illustration of two most recently formatted entry fields (on the stack): +% +% CURRENT_ITEM +% "PREVIOUS_ITEM +% +% which, in this example, is to eventually appear in the bibliography as: +% +% "PREVIOUS_ITEM," CURRENT_ITEM +% +% It is the job of the output routine to take the previous item off of the +% stack (while leaving the current item at the top of the stack), apply its +% trailing punctuation (including closing quote marks) and spacing, and then +% to write the result to BibTeX's output buffer: +% +% "PREVIOUS_ITEM," +% +% Punctuation (and spacing) between items is often determined by both of the +% items rather than just the first one. The presence of quotation marks +% further complicates the situation because, in standard English, trailing +% punctuation marks are supposed to be contained within the quotes. +% +% IEEEtran.bst maintains two output state (aka "status") vectors which +% correspond to the previous and current (aka "this") items. Each vector +% consists of several independent attributes which track punctuation, +% spacing, quotation, and newlines. Capitalization status is handled by a +% separate scalar because the format routines, not the output routine, +% handle capitalization and, therefore, there is no need to maintain the +% capitalization attribute for both the "previous" and "this" items. +% +% When a format routine adds a new item, it copies the current output status +% vector to the previous output status vector and (usually) resets the +% current (this) output status vector to a "standard status" vector. Using a +% "standard status" vector in this way allows us to redefine what we mean by +% "standard status" at the start of each entry handler and reuse the same +% format routines under the various inter-item separation schemes. For +% example, the standard status vector for the @book entry type may use +% commas for item separators, while the @electronic type may use periods, +% yet both entry handlers exploit many of the exact same format routines. +% +% Because format routines have write access to the output status vector of +% the previous item, they can override the punctuation choices of the +% previous format routine! Therefore, it becomes trivial to implement rules +% such as "Always use a period and a large space before the publisher." By +% pushing the generation of the closing quote mark to the output routine, we +% avoid all the problems caused by having to close a quote before having all +% the information required to determine what the punctuation should be. +% +% The IEEEtran.bst output state system can easily be expanded if needed. +% For instance, it is easy to add a "space.tie" attribute value if the +% bibliography rules mandate that two items have to be joined with an +% unbreakable space. 
+ +FUNCTION {initialize.status.constants} +{ #0 'punct.no := + #1 'punct.comma := + #2 'punct.period := + #0 'space.no := + #1 'space.normal := + #2 'space.large := + #0 'quote.no := + #1 'quote.close := + #0 'cap.no := + #1 'cap.yes := + #0 'nline.no := + #1 'nline.newblock := +} + +FUNCTION {std.status.using.comma} +{ punct.comma 'punct.std := + space.normal 'space.std := + quote.no 'quote.std := + nline.no 'nline.std := + cap.no 'cap.std := +} + +FUNCTION {std.status.using.period} +{ punct.period 'punct.std := + space.normal 'space.std := + quote.no 'quote.std := + nline.no 'nline.std := + cap.yes 'cap.std := +} + +FUNCTION {initialize.prev.this.status} +{ punct.no 'prev.status.punct := + space.no 'prev.status.space := + quote.no 'prev.status.quote := + nline.no 'prev.status.nline := + punct.no 'this.status.punct := + space.no 'this.status.space := + quote.no 'this.status.quote := + nline.no 'this.status.nline := + cap.yes 'status.cap := +} + +FUNCTION {this.status.std} +{ punct.std 'this.status.punct := + space.std 'this.status.space := + quote.std 'this.status.quote := + nline.std 'this.status.nline := +} + +FUNCTION {cap.status.std}{ cap.std 'status.cap := } + +FUNCTION {this.to.prev.status} +{ this.status.punct 'prev.status.punct := + this.status.space 'prev.status.space := + this.status.quote 'prev.status.quote := + this.status.nline 'prev.status.nline := +} + + +FUNCTION {not} +{ { #0 } + { #1 } + if$ +} + +FUNCTION {and} +{ { skip$ } + { pop$ #0 } + if$ +} + +FUNCTION {or} +{ { pop$ #1 } + { skip$ } + if$ +} + + +% convert the strings "yes" or "no" to #1 or #0 respectively +FUNCTION {yes.no.to.int} +{ "l" change.case$ duplicate$ + "yes" = + { pop$ #1 } + { duplicate$ "no" = + { pop$ #0 } + { "unknown boolean " quote$ * swap$ * quote$ * + " in " * cite$ * warning$ + #0 + } + if$ + } + if$ +} + + +% pushes true if the single char string on the stack is in the +% range of "0" to "9" +FUNCTION {is.num} +{ chr.to.int$ + duplicate$ "0" chr.to.int$ < not + swap$ "9" chr.to.int$ > not and +} + +% multiplies the integer on the stack by a factor of 10 +FUNCTION {bump.int.mag} +{ #0 'multiresult := + { duplicate$ #0 > } + { #1 - + multiresult #10 + + 'multiresult := + } + while$ +pop$ +multiresult +} + +% converts a single character string on the stack to an integer +FUNCTION {char.to.integer} +{ duplicate$ + is.num + { chr.to.int$ "0" chr.to.int$ - } + {"noninteger character " quote$ * swap$ * quote$ * + " in integer field of " * cite$ * warning$ + #0 + } + if$ +} + +% converts a string on the stack to an integer +FUNCTION {string.to.integer} +{ duplicate$ text.length$ 'namesleft := + #1 'nameptr := + #0 'numnames := + { nameptr namesleft > not } + { duplicate$ nameptr #1 substring$ + char.to.integer numnames bump.int.mag + + 'numnames := + nameptr #1 + + 'nameptr := + } + while$ +pop$ +numnames +} + + + + +% The output routines write out the *next* to the top (previous) item on the +% stack, adding punctuation and such as needed. Since IEEEtran.bst maintains +% the output status for the top two items on the stack, these output +% routines have to consider the previous output status (which corresponds to +% the item that is being output). Full independent control of punctuation, +% closing quote marks, spacing, and newblock is provided. +% +% "output.nonnull" does not check for the presence of a previous empty +% item. +% +% "output" does check for the presence of a previous empty item and will +% remove an empty item rather than outputing it. 
+% +% "output.warn" is like "output", but will issue a warning if it detects +% an empty item. + +FUNCTION {output.nonnull} +{ swap$ + prev.status.punct punct.comma = + { "," * } + { skip$ } + if$ + prev.status.punct punct.period = + { add.period$ } + { skip$ } + if$ + prev.status.quote quote.close = + { "''" * } + { skip$ } + if$ + prev.status.space space.normal = + { " " * } + { skip$ } + if$ + prev.status.space space.large = + { large.space * } + { skip$ } + if$ + write$ + prev.status.nline nline.newblock = + { newline$ "\newblock " write$ } + { skip$ } + if$ +} + +FUNCTION {output} +{ duplicate$ empty$ + 'pop$ + 'output.nonnull + if$ +} + +FUNCTION {output.warn} +{ 't := + duplicate$ empty$ + { pop$ "empty " t * " in " * cite$ * warning$ } + 'output.nonnull + if$ +} + +% "fin.entry" is the output routine that handles the last item of the entry +% (which will be on the top of the stack when "fin.entry" is called). + +FUNCTION {fin.entry} +{ this.status.punct punct.no = + { skip$ } + { add.period$ } + if$ + this.status.quote quote.close = + { "''" * } + { skip$ } + if$ +write$ +newline$ +} + + +FUNCTION {is.last.char.not.punct} +{ duplicate$ + "}" * add.period$ + #-1 #1 substring$ "." = +} + +FUNCTION {is.multiple.pages} +{ 't := + #0 'multiresult := + { multiresult not + t empty$ not + and + } + { t #1 #1 substring$ + duplicate$ "-" = + swap$ duplicate$ "," = + swap$ "+" = + or or + { #1 'multiresult := } + { t #2 global.max$ substring$ 't := } + if$ + } + while$ + multiresult +} + +FUNCTION {capitalize}{ "u" change.case$ "t" change.case$ } + +FUNCTION {emphasize} +{ duplicate$ empty$ + { pop$ "" } + { "\emph{" swap$ * "}" * } + if$ +} + +FUNCTION {do.name.latex.cmd} +{ name.latex.cmd + empty$ + { skip$ } + { name.latex.cmd "{" * swap$ * "}" * } + if$ +} + +% IEEEtran.bst uses its own \BIBforeignlanguage command which directly +% invokes the TeX hyphenation patterns without the need of the Babel +% package. Babel does a lot more than switch hyphenation patterns and +% its loading can cause unintended effects in many class files (such as +% IEEEtran.cls). +FUNCTION {select.language} +{ duplicate$ empty$ 'pop$ + { language empty$ 'skip$ + { "\BIBforeignlanguage{" language * "}{" * swap$ * "}" * } + if$ + } + if$ +} + +FUNCTION {tie.or.space.prefix} +{ duplicate$ text.length$ #3 < + { "~" } + { " " } + if$ + swap$ +} + +FUNCTION {get.bbl.editor} +{ editor num.names$ #1 > 'bbl.editors 'bbl.editor if$ } + +FUNCTION {space.word}{ " " swap$ * " " * } + + +% Field Conditioners, Converters, Checkers and External Interfaces + +FUNCTION {empty.field.to.null.string} +{ duplicate$ empty$ + { pop$ "" } + { skip$ } + if$ +} + +FUNCTION {either.or.check} +{ empty$ + { pop$ } + { "can't use both " swap$ * " fields in " * cite$ * warning$ } + if$ +} + +FUNCTION {empty.entry.warn} +{ author empty$ title empty$ howpublished empty$ + month empty$ year empty$ note empty$ url empty$ + and and and and and and + { "all relevant fields are empty in " cite$ * warning$ } + 'skip$ + if$ +} + + +% The bibinfo system provides a way for the electronic parsing/acquisition +% of a bibliography's contents as is done by ReVTeX. For example, a field +% could be entered into the bibliography as: +% \bibinfo{volume}{2} +% Only the "2" would show up in the document, but the LaTeX \bibinfo command +% could do additional things with the information. IEEEtran.bst does provide +% a \bibinfo command via "\providecommand{\bibinfo}[2]{#2}". 
However, it is +% currently not used as the bogus bibinfo functions defined here output the +% entry values directly without the \bibinfo wrapper. The bibinfo functions +% themselves (and the calls to them) are retained for possible future use. +% +% bibinfo.check avoids acting on missing fields while bibinfo.warn will +% issue a warning message if a missing field is detected. Prior to calling +% the bibinfo functions, the user should push the field value and then its +% name string, in that order. + +FUNCTION {bibinfo.check} +{ swap$ duplicate$ missing$ + { pop$ pop$ "" } + { duplicate$ empty$ + { swap$ pop$ } + { swap$ pop$ } + if$ + } + if$ +} + +FUNCTION {bibinfo.warn} +{ swap$ duplicate$ missing$ + { swap$ "missing " swap$ * " in " * cite$ * warning$ pop$ "" } + { duplicate$ empty$ + { swap$ "empty " swap$ * " in " * cite$ * warning$ } + { swap$ pop$ } + if$ + } + if$ +} + + +% IEEE separates large numbers with more than 4 digits into groups of +% three. IEEE uses a small space to separate these number groups. +% Typical applications include patent and page numbers. + +% number of consecutive digits required to trigger the group separation. +FUNCTION {large.number.trigger}{ #5 } + +% For numbers longer than the trigger, this is the blocksize of the groups. +% The blocksize must be less than the trigger threshold, and 2 * blocksize +% must be greater than the trigger threshold (can't do more than one +% separation on the initial trigger). +FUNCTION {large.number.blocksize}{ #3 } + +% What is actually inserted between the number groups. +FUNCTION {large.number.separator}{ "\," } + +% So as to save on integer variables by reusing existing ones, numnames +% holds the current number of consecutive digits read and nameptr holds +% the number that will trigger an inserted space. +FUNCTION {large.number.separate} +{ 't := + "" + #0 'numnames := + large.number.trigger 'nameptr := + { t empty$ not } + { t #-1 #1 substring$ is.num + { numnames #1 + 'numnames := } + { #0 'numnames := + large.number.trigger 'nameptr := + } + if$ + t #-1 #1 substring$ swap$ * + t #-2 global.max$ substring$ 't := + numnames nameptr = + { duplicate$ #1 nameptr large.number.blocksize - substring$ swap$ + nameptr large.number.blocksize - #1 + global.max$ substring$ + large.number.separator swap$ * * + nameptr large.number.blocksize - 'numnames := + large.number.blocksize #1 + 'nameptr := + } + { skip$ } + if$ + } + while$ +} + +% Converts all single dashes "-" to double dashes "--". +FUNCTION {n.dashify} +{ large.number.separate + 't := + "" + { t empty$ not } + { t #1 #1 substring$ "-" = + { t #1 #2 substring$ "--" = not + { "--" * + t #2 global.max$ substring$ 't := + } + { { t #1 #1 substring$ "-" = } + { "-" * + t #2 global.max$ substring$ 't := + } + while$ + } + if$ + } + { t #1 #1 substring$ * + t #2 global.max$ substring$ 't := + } + if$ + } + while$ +} + + +% This function detects entries with names that are identical to that of +% the previous entry and replaces the repeated names with dashes (if the +% "is.dash.repeated.names" user control is nonzero). +FUNCTION {name.or.dash} +{ 's := + oldname empty$ + { s 'oldname := s } + { s oldname = + { is.dash.repeated.names + { repeated.name.dashes } + { s 'oldname := s } + if$ + } + { s 'oldname := s } + if$ + } + if$ +} + +% Converts the number string on the top of the stack to +% "numerical ordinal form" (e.g., "7" to "7th"). There is +% no artificial limit to the upper bound of the numbers as the +% two least significant digits determine the ordinal form. 
+FUNCTION {num.to.ordinal} +{ duplicate$ #-2 #1 substring$ "1" = + { bbl.th * } + { duplicate$ #-1 #1 substring$ "1" = + { bbl.st * } + { duplicate$ #-1 #1 substring$ "2" = + { bbl.nd * } + { duplicate$ #-1 #1 substring$ "3" = + { bbl.rd * } + { bbl.th * } + if$ + } + if$ + } + if$ + } + if$ +} + +% If the string on the top of the stack begins with a number, +% (e.g., 11th) then replace the string with the leading number +% it contains. Otherwise retain the string as-is. s holds the +% extracted number, t holds the part of the string that remains +% to be scanned. +FUNCTION {extract.num} +{ duplicate$ 't := + "" 's := + { t empty$ not } + { t #1 #1 substring$ + t #2 global.max$ substring$ 't := + duplicate$ is.num + { s swap$ * 's := } + { pop$ "" 't := } + if$ + } + while$ + s empty$ + 'skip$ + { pop$ s } + if$ +} + +% Converts the word number string on the top of the stack to +% Arabic string form. Will be successful up to "tenth". +FUNCTION {word.to.num} +{ duplicate$ "l" change.case$ 's := + s "first" = + { pop$ "1" } + { skip$ } + if$ + s "second" = + { pop$ "2" } + { skip$ } + if$ + s "third" = + { pop$ "3" } + { skip$ } + if$ + s "fourth" = + { pop$ "4" } + { skip$ } + if$ + s "fifth" = + { pop$ "5" } + { skip$ } + if$ + s "sixth" = + { pop$ "6" } + { skip$ } + if$ + s "seventh" = + { pop$ "7" } + { skip$ } + if$ + s "eighth" = + { pop$ "8" } + { skip$ } + if$ + s "ninth" = + { pop$ "9" } + { skip$ } + if$ + s "tenth" = + { pop$ "10" } + { skip$ } + if$ +} + + +% Converts the string on the top of the stack to numerical +% ordinal (e.g., "11th") form. +FUNCTION {convert.edition} +{ duplicate$ empty$ 'skip$ + { duplicate$ #1 #1 substring$ is.num + { extract.num + num.to.ordinal + } + { word.to.num + duplicate$ #1 #1 substring$ is.num + { num.to.ordinal } + { "edition ordinal word " quote$ * edition * quote$ * + " may be too high (or improper) for conversion" * " in " * cite$ * warning$ + } + if$ + } + if$ + } + if$ +} + + + + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%% LATEX BIBLIOGRAPHY CODE %% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +FUNCTION {start.entry} +{ newline$ + "\bibitem{" write$ + cite$ write$ + "}" write$ + newline$ + "" + initialize.prev.this.status +} + +% Here we write out all the LaTeX code that we will need. The most involved +% code sequences are those that control the alternate interword spacing and +% foreign language hyphenation patterns. The heavy use of \providecommand +% gives users a way to override the defaults. Special thanks to Javier Bezos, +% Johannes Braams, Robin Fairbairns, Heiko Oberdiek, Donald Arseneau and all +% the other gurus on comp.text.tex for their help and advice on the topic of +% \selectlanguage, Babel and BibTeX. 
+FUNCTION {begin.bib} +{ "% Generated by IEEEtran.bst, version: " bst.file.version * " (" * bst.file.date * ")" * + write$ newline$ + preamble$ empty$ 'skip$ + { preamble$ write$ newline$ } + if$ + "\begin{thebibliography}{" longest.label * "}" * + write$ newline$ + "\providecommand{\url}[1]{#1}" + write$ newline$ + "\csname url@samestyle\endcsname" + write$ newline$ + "\providecommand{\newblock}{\relax}" + write$ newline$ + "\providecommand{\bibinfo}[2]{#2}" + write$ newline$ + "\providecommand{\BIBentrySTDinterwordspacing}{\spaceskip=0pt\relax}" + write$ newline$ + "\providecommand{\BIBentryALTinterwordstretchfactor}{" + ALTinterwordstretchfactor * "}" * + write$ newline$ + "\providecommand{\BIBentryALTinterwordspacing}{\spaceskip=\fontdimen2\font plus " + write$ newline$ + "\BIBentryALTinterwordstretchfactor\fontdimen3\font minus \fontdimen4\font\relax}" + write$ newline$ + "\providecommand{\BIBforeignlanguage}[2]{{%" + write$ newline$ + "\expandafter\ifx\csname l@#1\endcsname\relax" + write$ newline$ + "\typeout{** WARNING: IEEEtran.bst: No hyphenation pattern has been}%" + write$ newline$ + "\typeout{** loaded for the language `#1'. Using the pattern for}%" + write$ newline$ + "\typeout{** the default language instead.}%" + write$ newline$ + "\else" + write$ newline$ + "\language=\csname l@#1\endcsname" + write$ newline$ + "\fi" + write$ newline$ + "#2}}" + write$ newline$ + "\providecommand{\BIBdecl}{\relax}" + write$ newline$ + "\BIBdecl" + write$ newline$ +} + +FUNCTION {end.bib} +{ newline$ "\end{thebibliography}" write$ newline$ } + +FUNCTION {if.url.alt.interword.spacing} +{ is.use.alt.interword.spacing + {url empty$ 'skip$ {"\BIBentryALTinterwordspacing" write$ newline$} if$} + { skip$ } + if$ +} + +FUNCTION {if.url.std.interword.spacing} +{ is.use.alt.interword.spacing + {url empty$ 'skip$ {"\BIBentrySTDinterwordspacing" write$ newline$} if$} + { skip$ } + if$ +} + + + + +%%%%%%%%%%%%%%%%%%%%%%%% +%% LONGEST LABEL PASS %% +%%%%%%%%%%%%%%%%%%%%%%%% + +FUNCTION {initialize.longest.label} +{ "" 'longest.label := + #1 'number.label := + #0 'longest.label.width := +} + +FUNCTION {longest.label.pass} +{ type$ "ieeetranbstctl" = + { skip$ } + { number.label int.to.str$ 'label := + number.label #1 + 'number.label := + label width$ longest.label.width > + { label 'longest.label := + label width$ 'longest.label.width := + } + { skip$ } + if$ + } + if$ +} + + + + +%%%%%%%%%%%%%%%%%%%%% +%% FORMAT HANDLERS %% +%%%%%%%%%%%%%%%%%%%%% + +%% Lower Level Formats (used by higher level formats) + +FUNCTION {format.address.org.or.pub.date} +{ 't := + "" + year empty$ + { "empty year in " cite$ * warning$ } + { skip$ } + if$ + address empty$ t empty$ and + year empty$ and month empty$ and + { skip$ } + { this.to.prev.status + this.status.std + cap.status.std + address "address" bibinfo.check * + t empty$ + { skip$ } + { punct.period 'prev.status.punct := + space.large 'prev.status.space := + address empty$ + { skip$ } + { ": " * } + if$ + t * + } + if$ + year empty$ month empty$ and + { skip$ } + { t empty$ address empty$ and + { skip$ } + { ", " * } + if$ + month empty$ + { year empty$ + { skip$ } + { year "year" bibinfo.check * } + if$ + } + { month "month" bibinfo.check * + year empty$ + { skip$ } + { " " * year "year" bibinfo.check * } + if$ + } + if$ + } + if$ + } + if$ +} + + +FUNCTION {format.names} +{ 'bibinfo := + duplicate$ empty$ 'skip$ { + this.to.prev.status + this.status.std + 's := + "" 't := + #1 'nameptr := + s num.names$ 'numnames := + numnames 'namesleft := + { namesleft #0 > } 
+ { s nameptr + name.format.string + format.name$ + bibinfo bibinfo.check + 't := + nameptr #1 > + { nameptr num.names.shown.with.forced.et.al #1 + = + numnames max.num.names.before.forced.et.al > + is.forced.et.al and and + { "others" 't := + #1 'namesleft := + } + { skip$ } + if$ + namesleft #1 > + { ", " * t do.name.latex.cmd * } + { s nameptr "{ll}" format.name$ duplicate$ "others" = + { 't := } + { pop$ } + if$ + t "others" = + { " " * bbl.etal emphasize * } + { numnames #2 > + { "," * } + { skip$ } + if$ + bbl.and + space.word * t do.name.latex.cmd * + } + if$ + } + if$ + } + { t do.name.latex.cmd } + if$ + nameptr #1 + 'nameptr := + namesleft #1 - 'namesleft := + } + while$ + cap.status.std + } if$ +} + + + + +%% Higher Level Formats + +%% addresses/locations + +FUNCTION {format.address} +{ address duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + } + if$ +} + + + +%% author/editor names + +FUNCTION {format.authors}{ author "author" format.names } + +FUNCTION {format.editors} +{ editor "editor" format.names duplicate$ empty$ 'skip$ + { ", " * + get.bbl.editor + capitalize + * + } + if$ +} + + + +%% date + +FUNCTION {format.date} +{ + month "month" bibinfo.check duplicate$ empty$ + year "year" bibinfo.check duplicate$ empty$ + { swap$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + "there's a month but no year in " cite$ * warning$ } + if$ + * + } + { this.to.prev.status + this.status.std + cap.status.std + swap$ 'skip$ + { + swap$ + " " * swap$ + } + if$ + * + } + if$ +} + +FUNCTION {format.date.electronic} +{ month "month" bibinfo.check duplicate$ empty$ + year "year" bibinfo.check duplicate$ empty$ + { swap$ + { pop$ } + { "there's a month but no year in " cite$ * warning$ + pop$ ")" * "(" swap$ * + this.to.prev.status + punct.no 'this.status.punct := + space.normal 'this.status.space := + quote.no 'this.status.quote := + cap.yes 'status.cap := + } + if$ + } + { swap$ + { swap$ pop$ ")" * "(" swap$ * } + { "(" swap$ * ", " * swap$ * ")" * } + if$ + this.to.prev.status + punct.no 'this.status.punct := + space.normal 'this.status.space := + quote.no 'this.status.quote := + cap.yes 'status.cap := + } + if$ +} + + + +%% edition/title + +% Note: IEEE considers the edition to be closely associated with +% the title of a book. So, in IEEEtran.bst the edition is normally handled +% within the formatting of the title. The format.edition function is +% retained here for possible future use. +FUNCTION {format.edition} +{ edition duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + convert.edition + status.cap + { "t" } + { "l" } + if$ change.case$ + "edition" bibinfo.check + "~" * bbl.edition * + cap.status.std + } + if$ +} + +% This is used to format the booktitle of a conference proceedings. +% Here we use the "intype" field to provide the user a way to +% override the word "in" (e.g., with things like "presented at") +% Use of intype stops the emphasis of the booktitle to indicate that +% we no longer mean the written conference proceedings, but the +% conference itself. +FUNCTION {format.in.booktitle} +{ booktitle "booktitle" bibinfo.check duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + select.language + intype missing$ + { emphasize + bbl.in " " * + } + { intype " " * } + if$ + swap$ * + cap.status.std + } + if$ +} + +% This is used to format the booktitle of collection. +% Here the "intype" field is not supported, but "edition" is. 
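+% For instance (field values illustrative; the exact output depends on
+% the bbl.in and bbl.edition strings defined earlier in this file), an
+% entry with booktitle = "Handbook of Widgets" and edition = "Second"
+% comes out roughly as "in Handbook of Widgets, 2nd ed.".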
+FUNCTION {format.in.booktitle.edition} +{ booktitle "booktitle" bibinfo.check duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + select.language + emphasize + edition empty$ 'skip$ + { ", " * + edition + convert.edition + "l" change.case$ + * "~" * bbl.edition * + } + if$ + bbl.in " " * swap$ * + cap.status.std + } + if$ +} + +FUNCTION {format.article.title} +{ title duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + "t" change.case$ + } + if$ + "title" bibinfo.check + duplicate$ empty$ 'skip$ + { quote.close 'this.status.quote := + is.last.char.not.punct + { punct.std 'this.status.punct := } + { punct.no 'this.status.punct := } + if$ + select.language + "``" swap$ * + cap.status.std + } + if$ +} + +FUNCTION {format.article.title.electronic} +{ title duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + "t" change.case$ + } + if$ + "title" bibinfo.check + duplicate$ empty$ + { skip$ } + { select.language } + if$ +} + +FUNCTION {format.book.title.edition} +{ title "title" bibinfo.check + duplicate$ empty$ + { "empty title in " cite$ * warning$ } + { this.to.prev.status + this.status.std + select.language + emphasize + edition empty$ 'skip$ + { ", " * + edition + convert.edition + status.cap + { "t" } + { "l" } + if$ + change.case$ + * "~" * bbl.edition * + } + if$ + cap.status.std + } + if$ +} + +FUNCTION {format.book.title} +{ title "title" bibinfo.check + duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + select.language + emphasize + } + if$ +} + + + +%% journal + +FUNCTION {format.journal} +{ journal duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + select.language + emphasize + } + if$ +} + + + +%% how published + +FUNCTION {format.howpublished} +{ howpublished duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + } + if$ +} + + + +%% institutions/organization/publishers/school + +FUNCTION {format.institution} +{ institution duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + } + if$ +} + +FUNCTION {format.organization} +{ organization duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + } + if$ +} + +FUNCTION {format.address.publisher.date} +{ publisher "publisher" bibinfo.warn format.address.org.or.pub.date } + +FUNCTION {format.address.publisher.date.nowarn} +{ publisher "publisher" bibinfo.check format.address.org.or.pub.date } + +FUNCTION {format.address.organization.date} +{ organization "organization" bibinfo.check format.address.org.or.pub.date } + +FUNCTION {format.school} +{ school duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + cap.status.std + } + if$ +} + + + +%% volume/number/series/chapter/pages + +FUNCTION {format.volume} +{ volume empty.field.to.null.string + duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + bbl.volume + status.cap + { capitalize } + { skip$ } + if$ + swap$ tie.or.space.prefix + "volume" bibinfo.check + * * + cap.status.std + } + if$ +} + +FUNCTION {format.number} +{ number empty.field.to.null.string + duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + status.cap + { bbl.number capitalize } + { bbl.number } + if$ + swap$ tie.or.space.prefix + "number" bibinfo.check + * * + cap.status.std + } + if$ +} + +FUNCTION {format.number.if.use.for.article} +{ is.use.number.for.article + { format.number } + { "" } + if$ +} + +% IEEE does not seem to tie the series so 
closely with the volume +% and number as is done in other bibliography styles. Instead the +% series is treated somewhat like an extension of the title. +FUNCTION {format.series} +{ series empty$ + { "" } + { this.to.prev.status + this.status.std + bbl.series " " * + series "series" bibinfo.check * + cap.status.std + } + if$ +} + + +FUNCTION {format.chapter} +{ chapter empty$ + { "" } + { this.to.prev.status + this.status.std + type empty$ + { bbl.chapter } + { type "l" change.case$ + "type" bibinfo.check + } + if$ + chapter tie.or.space.prefix + "chapter" bibinfo.check + * * + cap.status.std + } + if$ +} + + +% The intended use of format.paper is for paper numbers of inproceedings. +% The paper type can be overridden via the type field. +% We allow the type to be displayed even if the paper number is absent +% for things like "postdeadline paper" +FUNCTION {format.paper} +{ is.use.paper + { paper empty$ + { type empty$ + { "" } + { this.to.prev.status + this.status.std + type "type" bibinfo.check + cap.status.std + } + if$ + } + { this.to.prev.status + this.status.std + type empty$ + { bbl.paper } + { type "type" bibinfo.check } + if$ + " " * paper + "paper" bibinfo.check + * + cap.status.std + } + if$ + } + { "" } + if$ +} + + +FUNCTION {format.pages} +{ pages duplicate$ empty$ 'skip$ + { this.to.prev.status + this.status.std + duplicate$ is.multiple.pages + { + bbl.pages swap$ + n.dashify + } + { + bbl.page swap$ + } + if$ + tie.or.space.prefix + "pages" bibinfo.check + * * + cap.status.std + } + if$ +} + + + +%% technical report number + +FUNCTION {format.tech.report.number} +{ number "number" bibinfo.check + this.to.prev.status + this.status.std + cap.status.std + type duplicate$ empty$ + { pop$ + bbl.techrep + } + { skip$ } + if$ + "type" bibinfo.check + swap$ duplicate$ empty$ + { pop$ } + { tie.or.space.prefix * * } + if$ +} + + + +%% note + +FUNCTION {format.note} +{ note empty$ + { "" } + { this.to.prev.status + this.status.std + punct.period 'this.status.punct := + note #1 #1 substring$ + duplicate$ "{" = + { skip$ } + { status.cap + { "u" } + { "l" } + if$ + change.case$ + } + if$ + note #2 global.max$ substring$ * "note" bibinfo.check + cap.yes 'status.cap := + } + if$ +} + + + +%% patent + +FUNCTION {format.patent.date} +{ this.to.prev.status + this.status.std + year empty$ + { monthfiled duplicate$ empty$ + { "monthfiled" bibinfo.check pop$ "" } + { "monthfiled" bibinfo.check } + if$ + dayfiled duplicate$ empty$ + { "dayfiled" bibinfo.check pop$ "" * } + { "dayfiled" bibinfo.check + monthfiled empty$ + { "dayfiled without a monthfiled in " cite$ * warning$ + * + } + { " " swap$ * * } + if$ + } + if$ + yearfiled empty$ + { "no year or yearfiled in " cite$ * warning$ } + { yearfiled "yearfiled" bibinfo.check + swap$ + duplicate$ empty$ + { pop$ } + { ", " * swap$ * } + if$ + } + if$ + } + { month duplicate$ empty$ + { "month" bibinfo.check pop$ "" } + { "month" bibinfo.check } + if$ + day duplicate$ empty$ + { "day" bibinfo.check pop$ "" * } + { "day" bibinfo.check + month empty$ + { "day without a month in " cite$ * warning$ + * + } + { " " swap$ * * } + if$ + } + if$ + year "year" bibinfo.check + swap$ + duplicate$ empty$ + { pop$ } + { ", " * swap$ * } + if$ + } + if$ + cap.status.std +} + +FUNCTION {format.patent.nationality.type.number} +{ this.to.prev.status + this.status.std + nationality duplicate$ empty$ + { "nationality" bibinfo.warn pop$ "" } + { "nationality" bibinfo.check + duplicate$ "l" change.case$ "united states" = + { pop$ bbl.patentUS } + { skip$ } + if$ + " 
" * + } + if$ + type empty$ + { bbl.patent "type" bibinfo.check } + { type "type" bibinfo.check } + if$ + * + number duplicate$ empty$ + { "number" bibinfo.warn pop$ } + { "number" bibinfo.check + large.number.separate + swap$ " " * swap$ * + } + if$ + cap.status.std +} + + + +%% standard + +FUNCTION {format.organization.institution.standard.type.number} +{ this.to.prev.status + this.status.std + organization duplicate$ empty$ + { pop$ + institution duplicate$ empty$ + { "institution" bibinfo.warn } + { "institution" bibinfo.warn " " * } + if$ + } + { "organization" bibinfo.warn " " * } + if$ + type empty$ + { bbl.standard "type" bibinfo.check } + { type "type" bibinfo.check } + if$ + * + number duplicate$ empty$ + { "number" bibinfo.check pop$ } + { "number" bibinfo.check + large.number.separate + swap$ " " * swap$ * + } + if$ + cap.status.std +} + +FUNCTION {format.revision} +{ revision empty$ + { "" } + { this.to.prev.status + this.status.std + bbl.revision + revision tie.or.space.prefix + "revision" bibinfo.check + * * + cap.status.std + } + if$ +} + + +%% thesis + +FUNCTION {format.master.thesis.type} +{ this.to.prev.status + this.status.std + type empty$ + { + bbl.mthesis + } + { + type "type" bibinfo.check + } + if$ +cap.status.std +} + +FUNCTION {format.phd.thesis.type} +{ this.to.prev.status + this.status.std + type empty$ + { + bbl.phdthesis + } + { + type "type" bibinfo.check + } + if$ +cap.status.std +} + + + +%% URL + +FUNCTION {format.url} +{ url empty$ + { "" } + { this.to.prev.status + this.status.std + cap.yes 'status.cap := + name.url.prefix " " * + "\url{" * url * "}" * + punct.no 'this.status.punct := + punct.period 'prev.status.punct := + space.normal 'this.status.space := + space.normal 'prev.status.space := + quote.no 'this.status.quote := + } + if$ +} + + + + +%%%%%%%%%%%%%%%%%%%% +%% ENTRY HANDLERS %% +%%%%%%%%%%%%%%%%%%%% + + +% Note: In many journals, IEEE (or the authors) tend not to show the number +% for articles, so the display of the number is controlled here by the +% switch "is.use.number.for.article" +FUNCTION {article} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.journal "journal" bibinfo.check "journal" output.warn + format.volume output + format.number.if.use.for.article output + format.pages output + format.date "year" output.warn + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {book} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + author empty$ + { format.editors "author and editor" output.warn } + { format.authors output.nonnull } + if$ + name.or.dash + format.book.title.edition output + format.series output + author empty$ + { skip$ } + { format.editors output } + if$ + format.address.publisher.date output + format.volume output + format.number output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {booklet} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors output + name.or.dash + format.article.title "title" output.warn + format.howpublished "howpublished" bibinfo.check output + format.organization "organization" bibinfo.check output + format.address "address" bibinfo.check output + format.date output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {electronic} +{ std.status.using.period + start.entry + 
if.url.alt.interword.spacing + format.authors output + name.or.dash + format.date.electronic output + format.article.title.electronic output + format.howpublished "howpublished" bibinfo.check output + format.organization "organization" bibinfo.check output + format.address "address" bibinfo.check output + format.note output + format.url output + fin.entry + empty.entry.warn + if.url.std.interword.spacing +} + +FUNCTION {inbook} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + author empty$ + { format.editors "author and editor" output.warn } + { format.authors output.nonnull } + if$ + name.or.dash + format.book.title.edition output + format.series output + format.address.publisher.date output + format.volume output + format.number output + format.chapter output + format.pages output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {incollection} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.in.booktitle.edition "booktitle" output.warn + format.series output + format.editors output + format.address.publisher.date.nowarn output + format.volume output + format.number output + format.chapter output + format.pages output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {inproceedings} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.in.booktitle "booktitle" output.warn + format.series output + format.editors output + format.volume output + format.number output + publisher empty$ + { format.address.organization.date output } + { format.organization "organization" bibinfo.check output + format.address.publisher.date output + } + if$ + format.paper output + format.pages output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {manual} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors output + name.or.dash + format.book.title.edition "title" output.warn + format.howpublished "howpublished" bibinfo.check output + format.organization "organization" bibinfo.check output + format.address "address" bibinfo.check output + format.date output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {mastersthesis} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.master.thesis.type output.nonnull + format.school "school" bibinfo.warn output + format.address "address" bibinfo.check output + format.date "year" output.warn + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {misc} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors output + name.or.dash + format.article.title output + format.howpublished "howpublished" bibinfo.check output + format.organization "organization" bibinfo.check output + format.address "address" bibinfo.check output + format.pages output + format.date output + format.note output + format.url output + fin.entry + empty.entry.warn + if.url.std.interword.spacing +} + +FUNCTION {patent} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors output 
+ name.or.dash + format.article.title output + format.patent.nationality.type.number output + format.patent.date output + format.note output + format.url output + fin.entry + empty.entry.warn + if.url.std.interword.spacing +} + +FUNCTION {periodical} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.editors output + name.or.dash + format.book.title "title" output.warn + format.series output + format.volume output + format.number output + format.organization "organization" bibinfo.check output + format.date "year" output.warn + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {phdthesis} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.phd.thesis.type output.nonnull + format.school "school" bibinfo.warn output + format.address "address" bibinfo.check output + format.date "year" output.warn + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {proceedings} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.editors output + name.or.dash + format.book.title "title" output.warn + format.series output + format.volume output + format.number output + publisher empty$ + { format.address.organization.date output } + { format.organization "organization" bibinfo.check output + format.address.publisher.date output + } + if$ + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {standard} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors output + name.or.dash + format.book.title "title" output.warn + format.howpublished "howpublished" bibinfo.check output + format.organization.institution.standard.type.number output + format.revision output + format.date output + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {techreport} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.howpublished "howpublished" bibinfo.check output + format.institution "institution" bibinfo.warn output + format.address "address" bibinfo.check output + format.tech.report.number output.nonnull + format.date "year" output.warn + format.note output + format.url output + fin.entry + if.url.std.interword.spacing +} + +FUNCTION {unpublished} +{ std.status.using.comma + start.entry + if.url.alt.interword.spacing + format.authors "author" output.warn + name.or.dash + format.article.title "title" output.warn + format.date output + format.note "note" output.warn + format.url output + fin.entry + if.url.std.interword.spacing +} + + +% The special entry type which provides the user interface to the +% BST controls +FUNCTION {IEEEtranBSTCTL} +{ is.print.banners.to.terminal + { "** IEEEtran BST control entry " quote$ * cite$ * quote$ * " detected." 
* + top$ + } + { skip$ } + if$ + CTLuse_article_number + empty$ + { skip$ } + { CTLuse_article_number + yes.no.to.int + 'is.use.number.for.article := + } + if$ + CTLuse_paper + empty$ + { skip$ } + { CTLuse_paper + yes.no.to.int + 'is.use.paper := + } + if$ + CTLuse_forced_etal + empty$ + { skip$ } + { CTLuse_forced_etal + yes.no.to.int + 'is.forced.et.al := + } + if$ + CTLmax_names_forced_etal + empty$ + { skip$ } + { CTLmax_names_forced_etal + string.to.integer + 'max.num.names.before.forced.et.al := + } + if$ + CTLnames_show_etal + empty$ + { skip$ } + { CTLnames_show_etal + string.to.integer + 'num.names.shown.with.forced.et.al := + } + if$ + CTLuse_alt_spacing + empty$ + { skip$ } + { CTLuse_alt_spacing + yes.no.to.int + 'is.use.alt.interword.spacing := + } + if$ + CTLalt_stretch_factor + empty$ + { skip$ } + { CTLalt_stretch_factor + 'ALTinterwordstretchfactor := + "\renewcommand{\BIBentryALTinterwordstretchfactor}{" + ALTinterwordstretchfactor * "}" * + write$ newline$ + } + if$ + CTLdash_repeated_names + empty$ + { skip$ } + { CTLdash_repeated_names + yes.no.to.int + 'is.dash.repeated.names := + } + if$ + CTLname_format_string + empty$ + { skip$ } + { CTLname_format_string + 'name.format.string := + } + if$ + CTLname_latex_cmd + empty$ + { skip$ } + { CTLname_latex_cmd + 'name.latex.cmd := + } + if$ + CTLname_url_prefix + missing$ + { skip$ } + { CTLname_url_prefix + 'name.url.prefix := + } + if$ + + + num.names.shown.with.forced.et.al max.num.names.before.forced.et.al > + { "CTLnames_show_etal cannot be greater than CTLmax_names_forced_etal in " cite$ * warning$ + max.num.names.before.forced.et.al 'num.names.shown.with.forced.et.al := + } + { skip$ } + if$ +} + + +%%%%%%%%%%%%%%%%%%% +%% ENTRY ALIASES %% +%%%%%%%%%%%%%%%%%%% +FUNCTION {conference}{inproceedings} +FUNCTION {online}{electronic} +FUNCTION {internet}{electronic} +FUNCTION {webpage}{electronic} +FUNCTION {www}{electronic} +FUNCTION {default.type}{misc} + + + +%%%%%%%%%%%%%%%%%% +%% MAIN PROGRAM %% +%%%%%%%%%%%%%%%%%% + +READ + +EXECUTE {initialize.controls} +EXECUTE {initialize.status.constants} +EXECUTE {banner.message} + +EXECUTE {initialize.longest.label} +ITERATE {longest.label.pass} + +EXECUTE {begin.bib} +ITERATE {call.type$} +EXECUTE {end.bib} + +EXECUTE{completed.message} + + +%% That's all folks, mds. diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.cls tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.cls --- tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.cls 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/IEEEtran.cls 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,4702 @@ +%% +%% IEEEtran.cls 2007/03/05 version V1.7a +%% +%% +%% This is the official IEEE LaTeX class for authors of the Institute of +%% Electrical and Electronics Engineers (IEEE) Transactions journals and +%% conferences. +%% +%% Support sites: +%% http://www.michaelshell.org/tex/ieeetran/ +%% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/ +%% and +%% http://www.ieee.org/ +%% +%% Based on the original 1993 IEEEtran.cls, but with many bug fixes +%% and enhancements (from both JVH and MDS) over the 1996/7 version. 
+%% +%% +%% Contributors: +%% Gerry Murray (1993), Silvano Balemi (1993), +%% Jon Dixon (1996), Peter N"uchter (1996), +%% Juergen von Hagen (2000), and Michael Shell (2001-2007) +%% +%% +%% Copyright (c) 1993-2000 by Gerry Murray, Silvano Balemi, +%% Jon Dixon, Peter N"uchter, +%% Juergen von Hagen +%% and +%% Copyright (c) 2001-2007 by Michael Shell +%% +%% Current maintainer (V1.3 to V1.7): Michael Shell +%% See: +%% http://www.michaelshell.org/ +%% for current contact information. +%% +%% Special thanks to Peter Wilson (CUA) and Donald Arseneau +%% for allowing the inclusion of the \@ifmtarg command +%% from their ifmtarg LaTeX package. +%% +%%************************************************************************* +%% Legal Notice: +%% This code is offered as-is without any warranty either expressed or +%% implied; without even the implied warranty of MERCHANTABILITY or +%% FITNESS FOR A PARTICULAR PURPOSE! +%% User assumes all risk. +%% In no event shall IEEE or any contributor to this code be liable for +%% any damages or losses, including, but not limited to, incidental, +%% consequential, or any other damages, resulting from the use or misuse +%% of any information contained here. +%% +%% All comments are the opinions of their respective authors and are not +%% necessarily endorsed by the IEEE. +%% +%% This work is distributed under the LaTeX Project Public License (LPPL) +%% ( http://www.latex-project.org/ ) version 1.3, and may be freely used, +%% distributed and modified. A copy of the LPPL, version 1.3, is included +%% in the base LaTeX documentation of all distributions of LaTeX released +%% 2003/12/01 or later. +%% Retain all contribution notices and credits. +%% ** Modified files should be clearly indicated as such, including ** +%% ** renaming them and changing author support contact information. ** +%% +%% File list of work: IEEEtran.cls, IEEEtran_HOWTO.pdf, bare_adv.tex, +%% bare_conf.tex, bare_jrnl.tex, bare_jrnl_compsoc.tex +%% +%% Major changes to the user interface should be indicated by an +%% increase in the version numbers. If a version is a beta, it will +%% be indicated with a BETA suffix, i.e., 1.4 BETA. +%% Small changes can be indicated by appending letters to the version +%% such as "IEEEtran_v14a.cls". +%% In all cases, \Providesclass, any \typeout messages to the user, +%% \IEEEtransversionmajor and \IEEEtransversionminor must reflect the +%% correct version information. +%% The changes should also be documented via source comments. +%%************************************************************************* +%% +% +% Available class options +% e.g., \documentclass[10pt,conference]{IEEEtran} +% +% *** choose only one from each category *** +% +% 9pt, 10pt, 11pt, 12pt +% Sets normal font size. The default is 10pt. +% +% conference, journal, technote, peerreview, peerreviewca +% determines format mode - conference papers, journal papers, +% correspondence papers (technotes), or peer review papers. The user +% should also select 9pt when using technote. peerreview is like +% journal mode, but provides for a single-column "cover" title page for +% anonymous peer review. The paper title (without the author names) is +% repeated at the top of the page after the cover page. For peer review +% papers, the \IEEEpeerreviewmaketitle command must be executed (will +% automatically be ignored for non-peerreview modes) at the place the +% cover page is to end, usually just after the abstract (keywords are +% not normally used with peer review papers). 
peerreviewca is like
+% peerreview, but allows the author names to be entered and formatted
+% as with conference mode so that author affiliation and contact
+% information can be easily seen on the cover page.
+% The default is journal.
+%
+% draft, draftcls, draftclsnofoot, final
+% determines if paper is formatted as a widely spaced draft (for
+% handwritten editor comments) or as a properly typeset final version.
+% draftcls restricts draft mode to the class file while all other LaTeX
+% packages (e.g., \usepackage{graphicx}) will behave as final - allows
+% for a draft paper with visible figures, etc. draftclsnofoot is like
+% draftcls, but does not display the date and the word "DRAFT" at the foot
+% of the pages. If using one of the draft modes, the user will probably
+% also want to select onecolumn.
+% The default is final.
+%
+% letterpaper, a4paper
+% determines paper size: 8.5in X 11in or 210mm X 297mm. CHANGING THE PAPER
+% SIZE WILL NOT ALTER THE TYPESETTING OF THE DOCUMENT - ONLY THE MARGINS
+% WILL BE AFFECTED. In particular, documents using the a4paper option will
+% have reduced side margins (A4 is narrower than US letter) and a longer
+% bottom margin (A4 is longer than US letter). For both cases, the top
+% margins will be the same and the text will be horizontally centered.
+% For final submission to IEEE, authors should use US letter (8.5 X 11in)
+% paper. Note that authors should ensure that all post-processing
+% (ps, pdf, etc.) uses the same paper specification as the .tex document.
+% Problems here are by far the number one reason for incorrect margins.
+% IEEEtran will automatically set the default paper size under pdflatex
+% (without requiring a change to pdftex.cfg), so this issue is more
+% important to dvips users. Fix config.ps, config.pdf, or ~/.dvipsrc for
+% dvips, or use the dvips -t papersize option instead as needed. See the
+% testflow documentation
+% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/testflow
+% for more details on dvips paper size configuration.
+% The default is letterpaper.
+%
+% oneside, twoside
+% determines if layout follows single sided or two sided (duplex)
+% printing. The only notable change is with the headings at the top of
+% the pages.
+% The default is oneside.
+%
+% onecolumn, twocolumn
+% determines if text is organized into one or two columns per page. One
+% column mode is usually used only with draft papers.
+% The default is twocolumn.
+%
+% compsoc
+% Use the format of the IEEE Computer Society.
+%
+% romanappendices
+% Use the "Appendix I" convention when numbering appendices. IEEEtran.cls
+% now defaults to Alpha "Appendix A" convention - the opposite of what
+% v1.6b and earlier did.
+%
+% captionsoff
+% disables the display of the figure/table captions. Some IEEE journals
+% request that captions be removed and figures/tables be put on pages
+% of their own at the end of an initial paper submission. The endfloat
+% package can be used with this class option to achieve this format.
+%
+% nofonttune
+% turns off tuning of the font interword spacing. May be useful to those
+% not using the standard Times fonts or for those who have already "tuned"
+% their fonts.
+% The default is to enable IEEEtran to tune font parameters.
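+%
+% For example (option choices illustrative, not prescriptive), a
+% conference paper could be started with
+%
+%   \documentclass[10pt,conference]{IEEEtran}
+%
+% while a widely spaced, single column draft for editor markup could use
+%
+%   \documentclass[11pt,draftcls,onecolumn]{IEEEtran}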
+%
+%
+%----------
+% Available CLASSINPUTs provided (all are macros unless otherwise noted):
+% \CLASSINPUTbaselinestretch
+% \CLASSINPUTinnersidemargin
+% \CLASSINPUToutersidemargin
+% \CLASSINPUTtoptextmargin
+% \CLASSINPUTbottomtextmargin
+%
+% Available CLASSINFOs provided:
+% \ifCLASSINFOpdf (TeX if conditional)
+% \CLASSINFOpaperwidth (macro)
+% \CLASSINFOpaperheight (macro)
+% \CLASSINFOnormalsizebaselineskip (length)
+% \CLASSINFOnormalsizeunitybaselineskip (length)
+%
+% Available CLASSOPTIONs provided:
+% all class option flags (TeX if conditionals) unless otherwise noted,
+% e.g., \ifCLASSOPTIONcaptionsoff
+% point size options provided as a single macro:
+% \CLASSOPTIONpt
+% which will be defined as 9, 10, 11, or 12 depending on the document's
+% normalsize point size.
+% also, class option peerreviewca implies the use of class option peerreview
+% and class option draft implies the use of class option draftcls
+
+
+
+
+
+\ProvidesClass{IEEEtran}[2007/03/05 V1.7a by Michael Shell]
+\typeout{-- See the "IEEEtran_HOWTO" manual for usage information.}
+\typeout{-- http://www.michaelshell.org/tex/ieeetran/}
+\NeedsTeXFormat{LaTeX2e}
+
+% IEEEtran.cls version numbers, provided as of V1.3
+% These values serve as a way a .tex file can
+% determine if the new features are provided.
+% The version number of this IEEEtran.cls can be obtained from
+% these values, e.g., V1.4
+% KEEP THESE AS INTEGERS! i.e., NO {4a} or anything like that-
+% (no need to enumerate "a" minor changes here)
+\def\IEEEtransversionmajor{1}
+\def\IEEEtransversionminor{7}
+
+% These do nothing, but provide them like in article.cls
+\newif\if@restonecol
+\newif\if@titlepage
+
+
+% class option conditionals
+\newif\ifCLASSOPTIONonecolumn       \CLASSOPTIONonecolumnfalse
+\newif\ifCLASSOPTIONtwocolumn       \CLASSOPTIONtwocolumntrue
+
+\newif\ifCLASSOPTIONoneside         \CLASSOPTIONonesidetrue
+\newif\ifCLASSOPTIONtwoside         \CLASSOPTIONtwosidefalse
+
+\newif\ifCLASSOPTIONfinal           \CLASSOPTIONfinaltrue
+\newif\ifCLASSOPTIONdraft           \CLASSOPTIONdraftfalse
+\newif\ifCLASSOPTIONdraftcls        \CLASSOPTIONdraftclsfalse
+\newif\ifCLASSOPTIONdraftclsnofoot  \CLASSOPTIONdraftclsnofootfalse
+
+\newif\ifCLASSOPTIONpeerreview      \CLASSOPTIONpeerreviewfalse
+\newif\ifCLASSOPTIONpeerreviewca    \CLASSOPTIONpeerreviewcafalse
+
+\newif\ifCLASSOPTIONjournal         \CLASSOPTIONjournaltrue
+\newif\ifCLASSOPTIONconference      \CLASSOPTIONconferencefalse
+\newif\ifCLASSOPTIONtechnote        \CLASSOPTIONtechnotefalse
+
+\newif\ifCLASSOPTIONnofonttune      \CLASSOPTIONnofonttunefalse
+
+\newif\ifCLASSOPTIONcaptionsoff     \CLASSOPTIONcaptionsofffalse
+
+\newif\ifCLASSOPTIONcompsoc         \CLASSOPTIONcompsocfalse
+
+\newif\ifCLASSOPTIONromanappendices \CLASSOPTIONromanappendicesfalse
+
+
+% class info conditionals
+
+% indicates if pdf (via pdflatex) output
+\newif\ifCLASSINFOpdf               \CLASSINFOpdffalse
+
+
+% V1.6b internal flag to show if using a4paper
+\newif\if@IEEEusingAfourpaper       \@IEEEusingAfourpaperfalse
+
+
+
+% IEEEtran class scratch pad registers
+% dimen
+\newdimen\@IEEEtrantmpdimenA
+\newdimen\@IEEEtrantmpdimenB
+% count
+\newcount\@IEEEtrantmpcountA
+\newcount\@IEEEtrantmpcountB
+% token list
+\newtoks\@IEEEtrantmptoksA
+
+% we use \CLASSOPTIONpt so that we can ID the point size (even for 9pt docs)
+% as well as LaTeX's \@ptsize to retain some compatibility with some
+% external packages
+\def\@ptsize{0}
+% LaTeX does not support 9pt, so we set \@ptsize to 0 - same as that of 10pt
+\DeclareOption{9pt}{\def\CLASSOPTIONpt{9}\def\@ptsize{0}}
+\DeclareOption{10pt}{\def\CLASSOPTIONpt{10}\def\@ptsize{0}}
+\DeclareOption{11pt}{\def\CLASSOPTIONpt{11}\def\@ptsize{1}}
+\DeclareOption{12pt}{\def\CLASSOPTIONpt{12}\def\@ptsize{2}}
+
+
+
+\DeclareOption{letterpaper}{\setlength{\paperheight}{11in}%
+                            \setlength{\paperwidth}{8.5in}%
+                            \@IEEEusingAfourpaperfalse
+                            \def\CLASSOPTIONpaper{letter}%
+                            \def\CLASSINFOpaperwidth{8.5in}%
+                            \def\CLASSINFOpaperheight{11in}}
+
+
+\DeclareOption{a4paper}{\setlength{\paperheight}{297mm}%
+                        \setlength{\paperwidth}{210mm}%
+                        \@IEEEusingAfourpapertrue
+                        \def\CLASSOPTIONpaper{a4}%
+                        \def\CLASSINFOpaperwidth{210mm}%
+                        \def\CLASSINFOpaperheight{297mm}}
+
+\DeclareOption{oneside}{\@twosidefalse\@mparswitchfalse
+                        \CLASSOPTIONonesidetrue\CLASSOPTIONtwosidefalse}
+\DeclareOption{twoside}{\@twosidetrue\@mparswitchtrue
+                        \CLASSOPTIONtwosidetrue\CLASSOPTIONonesidefalse}
+
+\DeclareOption{onecolumn}{\CLASSOPTIONonecolumntrue\CLASSOPTIONtwocolumnfalse}
+\DeclareOption{twocolumn}{\CLASSOPTIONtwocolumntrue\CLASSOPTIONonecolumnfalse}
+
+% If the user selects draft, then this class AND any packages
+% will go into draft mode.
+\DeclareOption{draft}{\CLASSOPTIONdrafttrue\CLASSOPTIONdraftclstrue
+                      \CLASSOPTIONdraftclsnofootfalse}
+% draftcls is for a draft mode which will not affect any packages
+% used by the document.
+\DeclareOption{draftcls}{\CLASSOPTIONdraftfalse\CLASSOPTIONdraftclstrue
+                         \CLASSOPTIONdraftclsnofootfalse}
+% draftclsnofoot is like draftcls, but without the footer.
+\DeclareOption{draftclsnofoot}{\CLASSOPTIONdraftfalse\CLASSOPTIONdraftclstrue
+                               \CLASSOPTIONdraftclsnofoottrue}
+\DeclareOption{final}{\CLASSOPTIONdraftfalse\CLASSOPTIONdraftclsfalse
+                      \CLASSOPTIONdraftclsnofootfalse}
+
+\DeclareOption{journal}{\CLASSOPTIONpeerreviewfalse\CLASSOPTIONpeerreviewcafalse
+                        \CLASSOPTIONjournaltrue\CLASSOPTIONconferencefalse\CLASSOPTIONtechnotefalse}
+
+\DeclareOption{conference}{\CLASSOPTIONpeerreviewfalse\CLASSOPTIONpeerreviewcafalse
+                           \CLASSOPTIONjournalfalse\CLASSOPTIONconferencetrue\CLASSOPTIONtechnotefalse}
+
+\DeclareOption{technote}{\CLASSOPTIONpeerreviewfalse\CLASSOPTIONpeerreviewcafalse
+                         \CLASSOPTIONjournalfalse\CLASSOPTIONconferencefalse\CLASSOPTIONtechnotetrue}
+
+\DeclareOption{peerreview}{\CLASSOPTIONpeerreviewtrue\CLASSOPTIONpeerreviewcafalse
+                           \CLASSOPTIONjournalfalse\CLASSOPTIONconferencefalse\CLASSOPTIONtechnotefalse}
+
+\DeclareOption{peerreviewca}{\CLASSOPTIONpeerreviewtrue\CLASSOPTIONpeerreviewcatrue
+                             \CLASSOPTIONjournalfalse\CLASSOPTIONconferencefalse\CLASSOPTIONtechnotefalse}
+
+\DeclareOption{nofonttune}{\CLASSOPTIONnofonttunetrue}
+
+\DeclareOption{captionsoff}{\CLASSOPTIONcaptionsofftrue}
+
+\DeclareOption{compsoc}{\CLASSOPTIONcompsoctrue}
+
+\DeclareOption{romanappendices}{\CLASSOPTIONromanappendicestrue}
+
+
+% default to US letter paper, 10pt, twocolumn, one sided, final, journal
+\ExecuteOptions{letterpaper,10pt,twocolumn,oneside,final,journal}
+% override these defaults per user requests
+\ProcessOptions
+
+
+
+% Computer Society conditional execution command
+\long\def\@IEEEcompsoconly#1{\relax\ifCLASSOPTIONcompsoc\relax#1\relax\fi\relax}
+% inverse
+\long\def\@IEEEnotcompsoconly#1{\relax\ifCLASSOPTIONcompsoc\else\relax#1\relax\fi\relax}
+% compsoc conference
+\long\def\@IEEEcompsocconfonly#1{\relax\ifCLASSOPTIONcompsoc\ifCLASSOPTIONconference\relax#1\relax\fi\fi\relax}
+% compsoc not conference
+\long\def\@IEEEcompsocnotconfonly#1{\relax\ifCLASSOPTIONcompsoc\ifCLASSOPTIONconference\else\relax#1\relax\fi\fi\relax}
+
+
+% IEEE uses Times Roman font, so we'll default to Times.
+% These three commands make up the entire times.sty package.
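+% (For reference, in the PSNFSS naming scheme used below, ptm is Times
+% Roman, phv is Helvetica and pcr is Courier.)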
+\renewcommand{\sfdefault}{phv} +\renewcommand{\rmdefault}{ptm} +\renewcommand{\ttdefault}{pcr} + +\@IEEEcompsoconly{\typeout{-- Using IEEE Computer Society mode.}} + +% V1.7 compsoc nonconference papers, use Palatino/Palladio as the main text font, +% not Times Roman. +\@IEEEcompsocnotconfonly{\renewcommand{\rmdefault}{ppl}} + +% enable Times/Palatino main text font +\normalfont\selectfont + + + + + +% V1.7 conference notice message hook +\def\@IEEEconsolenoticeconference{\typeout{}% +\typeout{** Conference Paper **}% +\typeout{Before submitting the final camera ready copy, remember to:}% +\typeout{}% +\typeout{ 1. Manually equalize the lengths of two columns on the last page}% +\typeout{ of your paper;}% +\typeout{}% +\typeout{ 2. Ensure that any PostScript and/or PDF output post-processing}% +\typeout{ uses only Type 1 fonts and that every step in the generation}% +\typeout{ process uses the appropriate paper size.}% +\typeout{}} + + +% we can send console reminder messages to the user here +\AtEndDocument{\ifCLASSOPTIONconference\@IEEEconsolenoticeconference\fi} + + +% warn about the use of single column other than for draft mode +\ifCLASSOPTIONtwocolumn\else% + \ifCLASSOPTIONdraftcls\else% + \typeout{** ATTENTION: Single column mode is not typically used with IEEE publications.}% + \fi% +\fi + + +% V1.7 improved paper size setting code. +% Set pdfpage and dvips paper sizes. Conditional tests are similar to that +% of ifpdf.sty. Retain within {} to ensure tested macros are never altered, +% even if only effect is to set them to \relax. +% if \pdfoutput is undefined or equal to relax, output a dvips special +{\@ifundefined{pdfoutput}{\AtBeginDvi{\special{papersize=\CLASSINFOpaperwidth,\CLASSINFOpaperheight}}}{% +% pdfoutput is defined and not equal to \relax +% check for pdfpageheight existence just in case someone sets pdfoutput +% under non-pdflatex. If exists, set them regardless of value of \pdfoutput. +\@ifundefined{pdfpageheight}{\relax}{\global\pdfpagewidth\paperwidth +\global\pdfpageheight\paperheight}% +% if using \pdfoutput=0 under pdflatex, send dvips papersize special +\ifcase\pdfoutput +\AtBeginDvi{\special{papersize=\CLASSINFOpaperwidth,\CLASSINFOpaperheight}}% +\else +% we are using pdf output, set CLASSINFOpdf flag +\global\CLASSINFOpdftrue +\fi}} + +% let the user know the selected papersize +\typeout{-- Using \CLASSINFOpaperwidth\space x \CLASSINFOpaperheight\space +(\CLASSOPTIONpaper)\space paper.} + +\ifCLASSINFOpdf +\typeout{-- Using PDF output.} +\else +\typeout{-- Using DVI output.} +\fi + + +% The idea hinted here is for LaTeX to generate markleft{} and markright{} +% automatically for you after you enter \author{}, \journal{}, +% \journaldate{}, journalvol{}, \journalnum{}, etc. +% However, there may be some backward compatibility issues here as +% well as some special applications for IEEEtran.cls and special issues +% that may require the flexible \markleft{}, \markright{} and/or \markboth{}. +% We'll leave this as an open future suggestion. 
+%\newcommand{\journal}[1]{\def\@journal{#1}} +%\def\@journal{} + + + +% pointsize values +% used with ifx to determine the document's normal size +\def\@IEEEptsizenine{9} +\def\@IEEEptsizeten{10} +\def\@IEEEptsizeeleven{11} +\def\@IEEEptsizetwelve{12} + + + +% FONT DEFINITIONS (No sizexx.clo file needed) +% V1.6 revised font sizes, displayskip values and +% revised normalsize baselineskip to reduce underfull vbox problems +% on the 58pc = 696pt = 9.5in text height we want +% normalsize #lines/column baselineskip (aka leading) +% 9pt 63 11.0476pt (truncated down) +% 10pt 58 12pt (exact) +% 11pt 52 13.3846pt (truncated down) +% 12pt 50 13.92pt (exact) +% + +% we need to store the nominal baselineskip for the given font size +% in case baselinestretch ever changes. +% this is a dimen, so it will not hold stretch or shrink +\newdimen\@IEEEnormalsizeunitybaselineskip +\@IEEEnormalsizeunitybaselineskip\baselineskip + +\ifx\CLASSOPTIONpt\@IEEEptsizenine +\typeout{-- This is a 9 point document.} +\def\normalsize{\@setfontsize{\normalsize}{9}{11.0476pt}}% +\setlength{\@IEEEnormalsizeunitybaselineskip}{11.0476pt}% +\normalsize +\abovedisplayskip 1.5ex plus3pt minus1pt% +\belowdisplayskip \abovedisplayskip% +\abovedisplayshortskip 0pt plus3pt% +\belowdisplayshortskip 1.5ex plus3pt minus1pt +\def\small{\@setfontsize{\small}{8.5}{10pt}} +\def\footnotesize{\@setfontsize{\footnotesize}{8}{9pt}} +\def\scriptsize{\@setfontsize{\scriptsize}{7}{8pt}} +\def\tiny{\@setfontsize{\tiny}{5}{6pt}} +% sublargesize is the same as large - 10pt +\def\sublargesize{\@setfontsize{\sublargesize}{10}{12pt}} +\def\large{\@setfontsize{\large}{10}{12pt}} +\def\Large{\@setfontsize{\Large}{12}{14pt}} +\def\LARGE{\@setfontsize{\LARGE}{14}{17pt}} +\def\huge{\@setfontsize{\huge}{17}{20pt}} +\def\Huge{\@setfontsize{\Huge}{20}{24pt}} +\fi + + +% Check if we have selected 10 points +\ifx\CLASSOPTIONpt\@IEEEptsizeten +\typeout{-- This is a 10 point document.} +\def\normalsize{\@setfontsize{\normalsize}{10}{12.00pt}}% +\setlength{\@IEEEnormalsizeunitybaselineskip}{12pt}% +\normalsize +\abovedisplayskip 1.5ex plus4pt minus2pt% +\belowdisplayskip \abovedisplayskip% +\abovedisplayshortskip 0pt plus4pt% +\belowdisplayshortskip 1.5ex plus4pt minus2pt +\def\small{\@setfontsize{\small}{9}{10pt}} +\def\footnotesize{\@setfontsize{\footnotesize}{8}{9pt}} +\def\scriptsize{\@setfontsize{\scriptsize}{7}{8pt}} +\def\tiny{\@setfontsize{\tiny}{5}{6pt}} +% sublargesize is a tad smaller than large - 11pt +\def\sublargesize{\@setfontsize{\sublargesize}{11}{13.4pt}} +\def\large{\@setfontsize{\large}{12}{14pt}} +\def\Large{\@setfontsize{\Large}{14}{17pt}} +\def\LARGE{\@setfontsize{\LARGE}{17}{20pt}} +\def\huge{\@setfontsize{\huge}{20}{24pt}} +\def\Huge{\@setfontsize{\Huge}{24}{28pt}} +\fi + + +% Check if we have selected 11 points +\ifx\CLASSOPTIONpt\@IEEEptsizeeleven +\typeout{-- This is an 11 point document.} +\def\normalsize{\@setfontsize{\normalsize}{11}{13.3846pt}}% +\setlength{\@IEEEnormalsizeunitybaselineskip}{13.3846pt}% +\normalsize +\abovedisplayskip 1.5ex plus5pt minus3pt% +\belowdisplayskip \abovedisplayskip% +\abovedisplayshortskip 0pt plus5pt% +\belowdisplayshortskip 1.5ex plus5pt minus3pt +\def\small{\@setfontsize{\small}{10}{12pt}} +\def\footnotesize{\@setfontsize{\footnotesize}{9}{10.5pt}} +\def\scriptsize{\@setfontsize{\scriptsize}{8}{9pt}} +\def\tiny{\@setfontsize{\tiny}{6}{7pt}} +% sublargesize is the same as large - 12pt +\def\sublargesize{\@setfontsize{\sublargesize}{12}{14pt}} +\def\large{\@setfontsize{\large}{12}{14pt}} 
+\def\Large{\@setfontsize{\Large}{17}{20pt}}
+\def\LARGE{\@setfontsize{\LARGE}{20}{24pt}}
+\def\huge{\@setfontsize{\huge}{22}{26pt}}
+\def\Huge{\@setfontsize{\Huge}{24}{28pt}}
+\fi
+
+
+% Check if we have selected 12 points
+\ifx\CLASSOPTIONpt\@IEEEptsizetwelve
+\typeout{-- This is a 12 point document.}
+\def\normalsize{\@setfontsize{\normalsize}{12}{13.92pt}}%
+\setlength{\@IEEEnormalsizeunitybaselineskip}{13.92pt}%
+\normalsize
+\abovedisplayskip 1.5ex plus6pt minus4pt%
+\belowdisplayskip \abovedisplayskip%
+\abovedisplayshortskip 0pt plus6pt%
+\belowdisplayshortskip 1.5ex plus6pt minus4pt
+\def\small{\@setfontsize{\small}{10}{12pt}}
+\def\footnotesize{\@setfontsize{\footnotesize}{9}{10.5pt}}
+\def\scriptsize{\@setfontsize{\scriptsize}{8}{9pt}}
+\def\tiny{\@setfontsize{\tiny}{6}{7pt}}
+% sublargesize is the same as large - 14pt
+\def\sublargesize{\@setfontsize{\sublargesize}{14}{17pt}}
+\def\large{\@setfontsize{\large}{14}{17pt}}
+\def\Large{\@setfontsize{\Large}{17}{20pt}}
+\def\LARGE{\@setfontsize{\LARGE}{20}{24pt}}
+\def\huge{\@setfontsize{\huge}{22}{26pt}}
+\def\Huge{\@setfontsize{\Huge}{24}{28pt}}
+\fi
+
+
+% V1.6 The Computer Modern Fonts will issue a substitution warning for
+% 24pt titles (24.88pt is used instead); increase the substitution
+% tolerance to turn off this warning
+\def\fontsubfuzz{.9pt}
+% However, the default (and correct) Times font will scale exactly as needed.
+
+
+% warn the user in case they forget to use the 9pt option with
+% technote
+\ifCLASSOPTIONtechnote%
+ \ifx\CLASSOPTIONpt\@IEEEptsizenine\else%
+  \typeout{** ATTENTION: Technotes are normally 9pt documents.}%
+ \fi%
+\fi
+
+
+% V1.7
+% Improved \textunderscore to provide a much better fake _ when used with
+% OT1 encoding. Under OT1, detect use of pcr or cmtt \ttfamily and use
+% available true _ glyph for those two typewriter fonts.
+\def\@IEEEstringptm{ptm} % Times Roman family
+\def\@IEEEstringppl{ppl} % Palatino Roman family
+\def\@IEEEstringphv{phv} % Helvetica Sans Serif family
+\def\@IEEEstringpcr{pcr} % Courier typewriter family
+\def\@IEEEstringcmtt{cmtt} % Computer Modern typewriter family
+\DeclareTextCommandDefault{\textunderscore}{\leavevmode
+\ifx\f@family\@IEEEstringpcr\string_\else
+\ifx\f@family\@IEEEstringcmtt\string_\else
+\ifx\f@family\@IEEEstringptm\kern 0em\vbox{\hrule\@width 0.5em\@height 0.5pt\kern -0.3ex}\else
+\ifx\f@family\@IEEEstringppl\kern 0em\vbox{\hrule\@width 0.5em\@height 0.5pt\kern -0.3ex}\else
+\ifx\f@family\@IEEEstringphv\kern -0.03em\vbox{\hrule\@width 0.62em\@height 0.52pt\kern -0.33ex}\kern -0.03em\else
+\kern 0.09em\vbox{\hrule\@width 0.6em\@height 0.44pt\kern -0.63pt\kern -0.42ex}\kern 0.09em\fi\fi\fi\fi\fi\relax}
+
+
+
+
+% set the default \baselinestretch
+\def\baselinestretch{1}
+\ifCLASSOPTIONdraftcls
+  \def\baselinestretch{1.5}% default baselinestretch for draft modes
+\fi
+
+
+% process CLASSINPUT baselinestretch
+\ifx\CLASSINPUTbaselinestretch\@IEEEundefined
+\else
+  \edef\baselinestretch{\CLASSINPUTbaselinestretch} % user CLASSINPUT override
+  \typeout{** ATTENTION: Overriding \string\baselinestretch\space to
+           \baselinestretch\space via \string\CLASSINPUT.}
+\fi
+
+\normalsize % make \baselinestretch take effect
+
+
+
+
+% store the normalsize baselineskip
+\newdimen\CLASSINFOnormalsizebaselineskip
+\CLASSINFOnormalsizebaselineskip=\baselineskip\relax
+% and the normalsize unity (baselinestretch=1) baselineskip
+% we could save a register by giving the user access to
+% \@IEEEnormalsizeunitybaselineskip.
However, let's protect +% its read only internal status +\newdimen\CLASSINFOnormalsizeunitybaselineskip +\CLASSINFOnormalsizeunitybaselineskip=\@IEEEnormalsizeunitybaselineskip\relax +% store the nominal value of jot +\newdimen\IEEEnormaljot +\IEEEnormaljot=0.25\baselineskip\relax + +% set \jot +\jot=\IEEEnormaljot\relax + + + + +% V1.6, we are now going to fine tune the interword spacing +% The default interword glue for Times under TeX appears to use a +% nominal interword spacing of 25% (relative to the font size, i.e., 1em) +% a maximum of 40% and a minimum of 19%. +% For example, 10pt text uses an interword glue of: +% +% 2.5pt plus 1.49998pt minus 0.59998pt +% +% However, IEEE allows for a more generous range which reduces the need +% for hyphenation, especially for two column text. Furthermore, IEEE +% tends to use a little bit more nominal space between the words. +% IEEE's interword spacing percentages appear to be: +% 35% nominal +% 23% minimum +% 50% maximum +% (They may even be using a tad more for the largest fonts such as 24pt.) +% +% for bold text, IEEE increases the spacing a little more: +% 37.5% nominal +% 23% minimum +% 55% maximum + +% here are the interword spacing ratios we'll use +% for medium (normal weight) +\def\@IEEEinterspaceratioM{0.35} +\def\@IEEEinterspaceMINratioM{0.23} +\def\@IEEEinterspaceMAXratioM{0.50} + +% for bold +\def\@IEEEinterspaceratioB{0.375} +\def\@IEEEinterspaceMINratioB{0.23} +\def\@IEEEinterspaceMAXratioB{0.55} + + +% command to revise the interword spacing for the current font under TeX: +% \fontdimen2 = nominal interword space +% \fontdimen3 = interword stretch +% \fontdimen4 = interword shrink +% since all changes to the \fontdimen are global, we can enclose these commands +% in braces to confine any font attribute or length changes +\def\@@@IEEEsetfontdimens#1#2#3{{% +\setlength{\@IEEEtrantmpdimenB}{\f@size pt}% grab the font size in pt, could use 1em instead. +\setlength{\@IEEEtrantmpdimenA}{#1\@IEEEtrantmpdimenB}% +\fontdimen2\font=\@IEEEtrantmpdimenA\relax +\addtolength{\@IEEEtrantmpdimenA}{-#2\@IEEEtrantmpdimenB}% +\fontdimen3\font=-\@IEEEtrantmpdimenA\relax +\setlength{\@IEEEtrantmpdimenA}{#1\@IEEEtrantmpdimenB}% +\addtolength{\@IEEEtrantmpdimenA}{-#3\@IEEEtrantmpdimenB}% +\fontdimen4\font=\@IEEEtrantmpdimenA\relax}} + +% revise the interword spacing for each font weight +\def\@@IEEEsetfontdimens{{% +\mdseries +\@@@IEEEsetfontdimens{\@IEEEinterspaceratioM}{\@IEEEinterspaceMAXratioM}{\@IEEEinterspaceMINratioM}% +\bfseries +\@@@IEEEsetfontdimens{\@IEEEinterspaceratioB}{\@IEEEinterspaceMAXratioB}{\@IEEEinterspaceMINratioB}% +}} + +% revise the interword spacing for each font shape +% \slshape is not often used for IEEE work and is not altered here. The \scshape caps are +% already a tad too large in the free LaTeX fonts (as compared to what IEEE uses) so we +% won't alter these either. +\def\@IEEEsetfontdimens{{% +\normalfont +\@@IEEEsetfontdimens +\normalfont\itshape +\@@IEEEsetfontdimens +}} + +% command to revise the interword spacing for each font size (and shape +% and weight). Only the \rmfamily is done here as \ttfamily uses a +% fixed spacing and \sffamily is not used as the main text of IEEE papers. 
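+% As a worked example of the ratios above: for 10pt medium weight text,
+% the code below sets the interword glue to
+%   0.35 x 10pt = 3.5pt, plus (0.50 - 0.35) x 10pt = 1.5pt,
+%   minus (0.35 - 0.23) x 10pt = 1.2pt,
+% i.e., "3.5pt plus 1.5pt minus 1.2pt".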
+\def\@IEEEtunefonts{{\selectfont\rmfamily
+\tiny\@IEEEsetfontdimens
+\scriptsize\@IEEEsetfontdimens
+\footnotesize\@IEEEsetfontdimens
+\small\@IEEEsetfontdimens
+\normalsize\@IEEEsetfontdimens
+\sublargesize\@IEEEsetfontdimens
+\large\@IEEEsetfontdimens
+\LARGE\@IEEEsetfontdimens
+\huge\@IEEEsetfontdimens
+\Huge\@IEEEsetfontdimens}}
+
+% if the nofonttune class option is not given, revise the interword spacing
+% now - in case IEEEtran makes any default length measurements, and make
+% sure all the default fonts are loaded
+\ifCLASSOPTIONnofonttune\else
+\@IEEEtunefonts
+\fi
+
+% and again at the start of the document in case the user loaded different fonts
+\AtBeginDocument{\ifCLASSOPTIONnofonttune\else\@IEEEtunefonts\fi}
+
+
+
+% V1.6
+% LaTeX is a little too quick to use hyphenation.
+% So, we increase the penalty for its use and raise
+% the badness level that triggers an underfull hbox
+% warning. The author may still have to tweak things,
+% but the appearance will be much better "right out
+% of the box" than that under V1.5 and prior.
+% TeX default is 50
+\hyphenpenalty=750
+% If we didn't adjust the interword spacing, 2200 might be better.
+% The TeX default is 1000
+\hbadness=1350
+% IEEE does not use extra spacing after punctuation
+\frenchspacing
+
+% V1.7 increase this a tad to discourage equation breaks
+\binoppenalty=1000 % default 700
+\relpenalty=800    % default 500
+
+
+% margin note stuff
+\marginparsep 10pt
+\marginparwidth 20pt
+\marginparpush 25pt
+
+
+% if things get too close, go ahead and let them touch
+\lineskip 0pt
+\normallineskip 0pt
+\lineskiplimit 0pt
+\normallineskiplimit 0pt
+
+% The distance from the lower edge of the text body to the
+% footline
+\footskip 0.4in
+
+% normally zero, should be relative to font height.
+% put in a little rubber to help stop some bad breaks (underfull vboxes)
+\parskip 0ex plus 0.2ex minus 0.1ex
+
+\parindent 1.0em
+
+\topmargin -49.0pt
+\headheight 12pt
+\headsep 0.25in
+
+% use the normal font baselineskip
+% so that \topskip is unaffected by changes in \baselinestretch
+\topskip=\@IEEEnormalsizeunitybaselineskip
+\textheight 58pc % 9.63in, 696pt
+% Tweak textheight to a perfect integer number of lines/page.
+% The normal baselineskip for each document point size is used
+% to determine these values.
+\ifx\CLASSOPTIONpt\@IEEEptsizenine\textheight=63\@IEEEnormalsizeunitybaselineskip\fi   % 63 lines/page
+\ifx\CLASSOPTIONpt\@IEEEptsizeten\textheight=58\@IEEEnormalsizeunitybaselineskip\fi    % 58 lines/page
+\ifx\CLASSOPTIONpt\@IEEEptsizeeleven\textheight=52\@IEEEnormalsizeunitybaselineskip\fi % 52 lines/page
+\ifx\CLASSOPTIONpt\@IEEEptsizetwelve\textheight=50\@IEEEnormalsizeunitybaselineskip\fi % 50 lines/page
+
+
+\columnsep 1pc
+\textwidth 43pc % 2 x 21pc + 1pc = 43pc
+
+
+% the default side margins are equal
+\if@IEEEusingAfourpaper
+\oddsidemargin 14.32mm
+\evensidemargin 14.32mm
+\else
+\oddsidemargin 0.680in
+\evensidemargin 0.680in
+\fi
+% compensate for LaTeX's 1in offset
+\addtolength{\oddsidemargin}{-1in}
+\addtolength{\evensidemargin}{-1in}
+
+
+
+% adjust margins for conference mode
+\ifCLASSOPTIONconference
+ \topmargin -0.25in
+ % we retain the reserved, but unused space for headers
+ \addtolength{\topmargin}{-\headheight}
+ \addtolength{\topmargin}{-\headsep}
+ \textheight 9.25in % The standard for conferences (668.4975pt)
+ % Tweak textheight to a perfect integer number of lines/page.
+ \ifx\CLASSOPTIONpt\@IEEEptsizenine\textheight=61\@IEEEnormalsizeunitybaselineskip\fi % 61 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizeten\textheight=56\@IEEEnormalsizeunitybaselineskip\fi % 56 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizeeleven\textheight=50\@IEEEnormalsizeunitybaselineskip\fi % 50 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizetwelve\textheight=48\@IEEEnormalsizeunitybaselineskip\fi % 48 lines/page
+\fi
+
+
+% compsoc conference
+\ifCLASSOPTIONcompsoc
+\ifCLASSOPTIONconference
+ % compsoc conferences use a larger value for columnsep
+ \columnsep 0.375in
+ % compsoc conferences want 1in top margin, 1.125in bottom margin
+ \topmargin 0in
+ \addtolength{\topmargin}{-6pt}% we tweak this a tad to better comply with top of line stuff
+ % we retain the reserved, but unused space for headers
+ \addtolength{\topmargin}{-\headheight}
+ \addtolength{\topmargin}{-\headsep}
+ \textheight 8.875in % (641.39625pt)
+ % Tweak textheight to a perfect integer number of lines/page.
+ \ifx\CLASSOPTIONpt\@IEEEptsizenine\textheight=58\@IEEEnormalsizeunitybaselineskip\fi % 58 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizeten\textheight=53\@IEEEnormalsizeunitybaselineskip\fi % 53 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizeeleven\textheight=48\@IEEEnormalsizeunitybaselineskip\fi % 48 lines/page
+ \ifx\CLASSOPTIONpt\@IEEEptsizetwelve\textheight=46\@IEEEnormalsizeunitybaselineskip\fi % 46 lines/page
+ \textwidth 6.5in
+ % the default side margins are equal
+ \if@IEEEusingAfourpaper
+ \oddsidemargin 22.45mm
+ \evensidemargin 22.45mm
+ \else
+ \oddsidemargin 1in
+ \evensidemargin 1in
+ \fi
+ % compensate for LaTeX's 1in offset
+ \addtolength{\oddsidemargin}{-1in}
+ \addtolength{\evensidemargin}{-1in}
+\fi\fi
+
+
+
+% draft mode settings override those of all other modes
+% provides a nice 1in margin all around the paper and extra
+% space between the lines for editor's comments
+\ifCLASSOPTIONdraftcls
+ % want 1in from top of paper to text
+ \setlength{\topmargin}{-\headsep}%
+ \addtolength{\topmargin}{-\headheight}%
+ % we want 1in side margins regardless of paper type
+ \oddsidemargin 0in
+ \evensidemargin 0in
+ % set the text width
+ \setlength{\textwidth}{\paperwidth}%
+ \addtolength{\textwidth}{-2.0in}%
+ \setlength{\textheight}{\paperheight}%
+ \addtolength{\textheight}{-2.0in}%
+ % digitize textheight to be an integer number of lines.
+ % this may cause the bottom margin to be off a tad
+ \addtolength{\textheight}{-1\topskip}%
+ \divide\textheight by \baselineskip%
+ \multiply\textheight by \baselineskip%
+ \addtolength{\textheight}{\topskip}%
+\fi
+
+
+
+% process CLASSINPUT inner/outer margin
+% if inner margin defined, but outer margin not, set outer to inner.
+\ifx\CLASSINPUTinnersidemargin\@IEEEundefined
+\else
+ \ifx\CLASSINPUToutersidemargin\@IEEEundefined
+ \edef\CLASSINPUToutersidemargin{\CLASSINPUTinnersidemargin}
+ \fi
+\fi
+
+\ifx\CLASSINPUToutersidemargin\@IEEEundefined
+\else
+ % if outer margin defined, but inner margin not, set inner to outer.
+ \ifx\CLASSINPUTinnersidemargin\@IEEEundefined + \edef\CLASSINPUTinnersidemargin{\CLASSINPUToutersidemargin} + \fi + \setlength{\oddsidemargin}{\CLASSINPUTinnersidemargin} + \ifCLASSOPTIONtwoside + \setlength{\evensidemargin}{\CLASSINPUToutersidemargin} + \else + \setlength{\evensidemargin}{\CLASSINPUTinnersidemargin} + \fi + \addtolength{\oddsidemargin}{-1in} + \addtolength{\evensidemargin}{-1in} + \setlength{\textwidth}{\paperwidth} + \addtolength{\textwidth}{-\CLASSINPUTinnersidemargin} + \addtolength{\textwidth}{-\CLASSINPUToutersidemargin} + \typeout{** ATTENTION: Overriding inner side margin to \CLASSINPUTinnersidemargin\space and + outer side margin to \CLASSINPUToutersidemargin\space via \string\CLASSINPUT.} +\fi + + + +% process CLASSINPUT top/bottom text margin +% if toptext margin defined, but bottomtext margin not, set bottomtext to toptext margin +\ifx\CLASSINPUTtoptextmargin\@IEEEundefined +\else + \ifx\CLASSINPUTbottomtextmargin\@IEEEundefined + \edef\CLASSINPUTbottomtextmargin{\CLASSINPUTtoptextmargin} + \fi +\fi + +\ifx\CLASSINPUTbottomtextmargin\@IEEEundefined +\else + % if bottomtext margin defined, but toptext margin not, set toptext to bottomtext margin + \ifx\CLASSINPUTtoptextmargin\@IEEEundefined + \edef\CLASSINPUTtoptextmargin{\CLASSINPUTbottomtextmargin} + \fi + \setlength{\topmargin}{\CLASSINPUTtoptextmargin} + \addtolength{\topmargin}{-1in} + \addtolength{\topmargin}{-\headheight} + \addtolength{\topmargin}{-\headsep} + \setlength{\textheight}{\paperheight} + \addtolength{\textheight}{-\CLASSINPUTtoptextmargin} + \addtolength{\textheight}{-\CLASSINPUTbottomtextmargin} + % in the default format we use the normal baselineskip as topskip + % we only need 0.7 of this to clear typical top text and we need + % an extra 0.3 spacing at the bottom for descenders. This will + % correct for both. + \addtolength{\topmargin}{-0.3\@IEEEnormalsizeunitybaselineskip} + \typeout{** ATTENTION: Overriding top text margin to \CLASSINPUTtoptextmargin\space and + bottom text margin to \CLASSINPUTbottomtextmargin\space via \string\CLASSINPUT.} +\fi + + + + + + + +% LIST SPACING CONTROLS + +% Controls the amount of EXTRA spacing +% above and below \trivlist +% Both \list and IED lists override this. +% However, \trivlist will use this as will most +% things built from \trivlist like the \center +% environment. +\topsep 0.5\baselineskip + +% Controls the additional spacing around lists preceded +% or followed by blank lines. IEEE does not increase +% spacing before or after paragraphs so it is set to zero. +% \z@ is the same as zero, but faster. +\partopsep \z@ + +% Controls the spacing between paragraphs in lists. +% IEEE does not increase spacing before or after paragraphs +% so this is also zero. +% With IEEEtran.cls, global changes to +% this value DO affect lists (but not IED lists). +\parsep \z@ + +% Controls the extra spacing between list items. +% IEEE does not put extra spacing between items. +% With IEEEtran.cls, global changes to this value DO affect +% lists (but not IED lists). +\itemsep \z@ + +% \itemindent is the amount to indent the FIRST line of a list +% item. It is auto set to zero within the \list environment. To alter +% it, you have to do so when you call the \list. +% However, IEEE uses this for the theorem environment +% There is an alternative value for this near \leftmargini below +\itemindent -1em + +% \leftmargin, the spacing from the left margin of the main text to +% the left of the main body of a list item is set by \list. 
+% Hence this statement does nothing for lists.
+% But, quote and verse do use it for indention.
+\leftmargin 2em
+
+% we retain this stuff from the older IEEEtran.cls so that \list
+% will work the same way as before. However, itemize, enumerate and
+% description (IED) couldn't care less about what these are as they
+% all are overridden.
+\leftmargini 2em
+%\itemindent 2em % Alternative values: sometimes used.
+%\leftmargini 0em
+\leftmarginii 1em
+\leftmarginiii 1.5em
+\leftmarginiv 1.5em
+\leftmarginv 1.0em
+\leftmarginvi 1.0em
+\labelsep 0.5em
+\labelwidth \z@
+
+
+% The old IEEEtran.cls behavior of \list is retained.
+% However, the new V1.3 IED list environments override all the
+% @list stuff (\@listX is called within \list for the
+% appropriate level just before the user's list_decl is called).
+% \topsep is now 2pt as IEEE puts a little extra space around
+% lists - used by those non-IED macros that depend on \list.
+% Note that \parsep and \itemsep are not redefined as in
+% the sizexx.clo \@listX (which article.cls uses) so global changes
+% of these values DO affect \list
+%
+\def\@listi{\leftmargin\leftmargini \topsep 2pt plus 1pt minus 1pt}
+\let\@listI\@listi
+\def\@listii{\leftmargin\leftmarginii\labelwidth\leftmarginii%
+ \advance\labelwidth-\labelsep \topsep 2pt}
+\def\@listiii{\leftmargin\leftmarginiii\labelwidth\leftmarginiii%
+ \advance\labelwidth-\labelsep \topsep 2pt}
+\def\@listiv{\leftmargin\leftmarginiv\labelwidth\leftmarginiv%
+ \advance\labelwidth-\labelsep \topsep 2pt}
+\def\@listv{\leftmargin\leftmarginv\labelwidth\leftmarginv%
+ \advance\labelwidth-\labelsep \topsep 2pt}
+\def\@listvi{\leftmargin\leftmarginvi\labelwidth\leftmarginvi%
+ \advance\labelwidth-\labelsep \topsep 2pt}
+
+
+% IEEE uses 5) not 5.
+\def\labelenumi{\theenumi)} \def\theenumi{\arabic{enumi}}
+
+% IEEE uses a) not (a)
+\def\labelenumii{\theenumii)} \def\theenumii{\alph{enumii}}
+
+% IEEE uses iii) not iii.
+\def\labelenumiii{\theenumiii)} \def\theenumiii{\roman{enumiii}}
+
+% IEEE uses A) not A.
+\def\labelenumiv{\theenumiv)} \def\theenumiv{\Alph{enumiv}}
+
+% exactly the same as in article.cls
+\def\p@enumii{\theenumi}
+\def\p@enumiii{\theenumi(\theenumii)}
+\def\p@enumiv{\p@enumiii\theenumiii}
+
+% itemized list label styles
+\def\labelitemi{$\scriptstyle\bullet$}
+\def\labelitemii{\textbf{--}}
+\def\labelitemiii{$\ast$}
+\def\labelitemiv{$\cdot$}
+
+
+
+% **** V1.3 ENHANCEMENTS ****
+% Itemize, Enumerate and Description (IED) List Controls
+% ***************************
+%
+%
+% IEEE seems to use at least two different values by
+% which ITEMIZED list labels are indented to the right
+% For The Journal of Lightwave Technology (JLT) and The Journal
+% on Selected Areas in Communications (JSAC), they tend to use
+% an indention equal to \parindent. For Transactions on Communications
+% they tend to indent ITEMIZED lists a little more--- 1.3\parindent.
+% We'll provide both values here for you so that you can choose
+% which one you like in your document using a command such as:
+% \setlength{\IEEEilabelindent}{\IEEEilabelindentB}
+\newdimen\IEEEilabelindentA
+\IEEEilabelindentA \parindent
+
+\newdimen\IEEEilabelindentB
+\IEEEilabelindentB 1.3\parindent
+% However, we'll default to using \parindent
+% which makes more sense to me
+\newdimen\IEEEilabelindent
+\IEEEilabelindent \IEEEilabelindentA
+
+
+% This controls the default amount the enumerated list labels
+% are indented to the right.
+% Normally, this is the same as the paragraph indention
+\newdimen\IEEEelabelindent
+\IEEEelabelindent \parindent
+
+% This controls the default amount the description list labels
+% are indented to the right.
+% Normally, this is the same as the paragraph indention
+\newdimen\IEEEdlabelindent
+\IEEEdlabelindent \parindent
+
+% This is the value actually used within the IED lists.
+% The IED environments automatically set its value to
+% one of the three values above, so global changes do
+% not have any effect
+\newdimen\IEEElabelindent
+\IEEElabelindent \parindent
+
+% The actual amount labels will be indented is
+% \IEEElabelindent multiplied by the factor below
+% corresponding to the level of nesting depth
+% This provides a means by which the user can
+% alter the effective \IEEElabelindent for deeper
+% levels
+% There may not be such a thing as correct "standard IEEE"
+% values. What IEEE actually does may depend on the specific
+% circumstances.
+% The first list level almost always has full indention.
+% The second levels I've seen have only 75% of the normal indentation
+% Three-level or greater nestings are very rare. I am guessing
+% that they don't use any indentation.
+\def\IEEElabelindentfactori{1.0} % almost always one
+\def\IEEElabelindentfactorii{0.75} % 0.0 or 1.0 may be used in some cases
+\def\IEEElabelindentfactoriii{0.0} % 0.75? 0.5? 0.0?
+\def\IEEElabelindentfactoriv{0.0}
+\def\IEEElabelindentfactorv{0.0}
+\def\IEEElabelindentfactorvi{0.0}
+
+% value actually used within IED lists, it is auto
+% set to one of the 6 values above
+% global changes here have no effect
+\def\IEEElabelindentfactor{1.0}
+
+% This controls the default spacing between the end of the IED
+% list labels and the list text, when normal text is used for
+% the labels.
+\newdimen\IEEEiednormlabelsep
+\IEEEiednormlabelsep 0.6em
+
+% This controls the default spacing between the end of the IED
+% list labels and the list text, when math symbols are used for
+% the labels (nomenclature lists). IEEE usually increases the
+% spacing in these cases
+\newdimen\IEEEiedmathlabelsep
+\IEEEiedmathlabelsep 1.2em
+
+% This controls the extra vertical separation put above and
+% below each IED list. IEEE usually puts a little extra spacing
+% around each list. However, this spacing is barely noticeable.
+\newskip\IEEEiedtopsep
+\IEEEiedtopsep 2pt plus 1pt minus 1pt
+
+
+% This command is executed within each IED list environment
+% at the beginning of the list. You can use this to set the
+% parameters for some/all your IED list(s) without disturbing
+% global parameters that affect things other than lists.
+% i.e., \renewcommand{\IEEEiedlistdecl}{\setlength{\labelsep}{5em}}
+% will alter the \labelsep for the next list(s) until
+% \IEEEiedlistdecl is redefined.
+\def\IEEEiedlistdecl{\relax}
+
+% This command provides an easy way to set \leftmargin based
+% on the \labelwidth, \labelsep and the argument \IEEElabelindent
+% Usage: \IEEEcalcleftmargin{width-to-indent-the-label}
+% output is in the \leftmargin variable, i.e., effectively:
+% \leftmargin = argument + \labelwidth + \labelsep
+% Note controlled spacing here, shield end of lines with %
+\def\IEEEcalcleftmargin#1{\setlength{\leftmargin}{#1}%
+\addtolength{\leftmargin}{\labelwidth}%
+\addtolength{\leftmargin}{\labelsep}}
+
+% This command provides an easy way to set \labelwidth to the
+% width of the given text. It is the same as
+% \settowidth{\labelwidth}{label-text}
+% and useful as a shorter alternative.
+% Typically used to set \labelwidth to be the width +% of the longest label in the list +\def\IEEEsetlabelwidth#1{\settowidth{\labelwidth}{#1}} + +% When this command is executed, IED lists will use the +% IEEEiedmathlabelsep label separation rather than the normal +% spacing. To have an effect, this command must be executed via +% the \IEEEiedlistdecl or within the option of the IED list +% environments. +\def\IEEEusemathlabelsep{\setlength{\labelsep}{\IEEEiedmathlabelsep}} + +% A flag which controls whether the IED lists automatically +% calculate \leftmargin from \IEEElabelindent, \labelwidth and \labelsep +% Useful if you want to specify your own \leftmargin +% This flag must be set (\IEEEnocalcleftmargintrue or \IEEEnocalcleftmarginfalse) +% via the \IEEEiedlistdecl or within the option of the IED list +% environments to have an effect. +\newif\ifIEEEnocalcleftmargin +\IEEEnocalcleftmarginfalse + +% A flag which controls whether \IEEElabelindent is multiplied by +% the \IEEElabelindentfactor for each list level. +% This flag must be set via the \IEEEiedlistdecl or within the option +% of the IED list environments to have an effect. +\newif\ifIEEEnolabelindentfactor +\IEEEnolabelindentfactorfalse + + +% internal variable to indicate type of IED label +% justification +% 0 - left; 1 - center; 2 - right +\def\@IEEEiedjustify{0} + + +% commands to allow the user to control IED +% label justifications. Use these commands within +% the IED environment option or in the \IEEEiedlistdecl +% Note that changing the normal list justifications +% is nonstandard and IEEE may not like it if you do so! +% I include these commands as they may be helpful to +% those who are using these enhanced list controls for +% other non-IEEE related LaTeX work. +% itemize and enumerate automatically default to right +% justification, description defaults to left. 
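+% For example (a usage sketch, not an IEEE-endorsed style): to center
+% the labels of one particular list, pass the command in the list's
+% optional argument:
+%   \begin{itemize}[\IEEEiedlabeljustifyc]
+%   \item a centered-label item
+%   \end{itemize}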
+\def\IEEEiedlabeljustifyl{\def\@IEEEiedjustify{0}}%left +\def\IEEEiedlabeljustifyc{\def\@IEEEiedjustify{1}}%center +\def\IEEEiedlabeljustifyr{\def\@IEEEiedjustify{2}}%right + + + + +% commands to save to and restore from the list parameter copies +% this allows us to set all the list parameters within +% the list_decl and prevent \list (and its \@list) +% from overriding any of our parameters +% V1.6 use \edefs instead of dimen's to conserve dimen registers +% Note controlled spacing here, shield end of lines with % +\def\@IEEEsavelistparams{\edef\@IEEEiedtopsep{\the\topsep}% +\edef\@IEEEiedlabelwidth{\the\labelwidth}% +\edef\@IEEEiedlabelsep{\the\labelsep}% +\edef\@IEEEiedleftmargin{\the\leftmargin}% +\edef\@IEEEiedpartopsep{\the\partopsep}% +\edef\@IEEEiedparsep{\the\parsep}% +\edef\@IEEEieditemsep{\the\itemsep}% +\edef\@IEEEiedrightmargin{\the\rightmargin}% +\edef\@IEEEiedlistparindent{\the\listparindent}% +\edef\@IEEEieditemindent{\the\itemindent}} + +% Note controlled spacing here +\def\@IEEErestorelistparams{\topsep\@IEEEiedtopsep\relax% +\labelwidth\@IEEEiedlabelwidth\relax% +\labelsep\@IEEEiedlabelsep\relax% +\leftmargin\@IEEEiedleftmargin\relax% +\partopsep\@IEEEiedpartopsep\relax% +\parsep\@IEEEiedparsep\relax% +\itemsep\@IEEEieditemsep\relax% +\rightmargin\@IEEEiedrightmargin\relax% +\listparindent\@IEEEiedlistparindent\relax% +\itemindent\@IEEEieditemindent\relax} + + +% v1.6b provide original LaTeX IED list environments +% note that latex.ltx defines \itemize and \enumerate, but not \description +% which must be created by the base classes +% save original LaTeX itemize and enumerate +\let\LaTeXitemize\itemize +\let\endLaTeXitemize\enditemize +\let\LaTeXenumerate\enumerate +\let\endLaTeXenumerate\endenumerate + +% provide original LaTeX description environment from article.cls +\newenvironment{LaTeXdescription} + {\list{}{\labelwidth\z@ \itemindent-\leftmargin + \let\makelabel\descriptionlabel}} + {\endlist} +\newcommand*\descriptionlabel[1]{\hspace\labelsep + \normalfont\bfseries #1} + + +% override LaTeX's default IED lists +\def\itemize{\@IEEEitemize} +\def\enditemize{\@endIEEEitemize} +\def\enumerate{\@IEEEenumerate} +\def\endenumerate{\@endIEEEenumerate} +\def\description{\@IEEEdescription} +\def\enddescription{\@endIEEEdescription} + +% provide the user with aliases - may help those using packages that +% override itemize, enumerate, or description +\def\IEEEitemize{\@IEEEitemize} +\def\endIEEEitemize{\@endIEEEitemize} +\def\IEEEenumerate{\@IEEEenumerate} +\def\endIEEEenumerate{\@endIEEEenumerate} +\def\IEEEdescription{\@IEEEdescription} +\def\endIEEEdescription{\@endIEEEdescription} + + +% V1.6 we want to keep the IEEEtran IED list definitions as our own internal +% commands so they are protected against redefinition +\def\@IEEEitemize{\@ifnextchar[{\@@IEEEitemize}{\@@IEEEitemize[\relax]}} +\def\@IEEEenumerate{\@ifnextchar[{\@@IEEEenumerate}{\@@IEEEenumerate[\relax]}} +\def\@IEEEdescription{\@ifnextchar[{\@@IEEEdescription}{\@@IEEEdescription[\relax]}} +\def\@endIEEEitemize{\endlist} +\def\@endIEEEenumerate{\endlist} +\def\@endIEEEdescription{\endlist} + + +% DO NOT ALLOW BLANK LINES TO BE IN THESE IED ENVIRONMENTS +% AS THIS WILL FORCE NEW PARAGRAPHS AFTER THE IED LISTS +% IEEEtran itemized list MDS 1/2001 +% Note controlled spacing here, shield end of lines with % +\def\@@IEEEitemize[#1]{% + \ifnum\@itemdepth>3\relax\@toodeep\else% + \ifnum\@listdepth>5\relax\@toodeep\else% + \advance\@itemdepth\@ne% + \edef\@itemitem{labelitem\romannumeral\the\@itemdepth}% + % get 
the labelindentfactor for this level
+ \advance\@listdepth\@ne% we need to know what the level WILL be
+ \edef\IEEElabelindentfactor{\csname IEEElabelindentfactor\romannumeral\the\@listdepth\endcsname}%
+ \advance\@listdepth-\@ne% undo our increment
+ \def\@IEEEiedjustify{2}% right justified labels are default
+ % set other defaults
+ \IEEEnocalcleftmarginfalse%
+ \IEEEnolabelindentfactorfalse%
+ \topsep\IEEEiedtopsep%
+ \IEEElabelindent\IEEEilabelindent%
+ \labelsep\IEEEiednormlabelsep%
+ \partopsep 0ex%
+ \parsep 0ex%
+ \itemsep 0ex%
+ \rightmargin 0em%
+ \listparindent 0em%
+ \itemindent 0em%
+ % calculate the label width
+ % the user can override this later if
+ % they specified a \labelwidth
+ \settowidth{\labelwidth}{\csname labelitem\romannumeral\the\@itemdepth\endcsname}%
+ \@IEEEsavelistparams% save our list parameters
+ \list{\csname\@itemitem\endcsname}{%
+ \@IEEErestorelistparams% override any list{} changes
+ % to our globals
+ \let\makelabel\@IEEEiedmakelabel% v1.6b setup \makelabel
+ \IEEEiedlistdecl% let user alter parameters
+ #1\relax%
+ % If the user has requested not to use the
+ % labelindent factor, don't revise \IEEElabelindent
+ \ifIEEEnolabelindentfactor\relax%
+ \else\IEEElabelindent=\IEEElabelindentfactor\IEEElabelindent%
+ \fi%
+ % Unless the user has requested otherwise,
+ % calculate our left margin based
+ % on \IEEElabelindent, \labelwidth and
+ % \labelsep
+ \ifIEEEnocalcleftmargin\relax%
+ \else\IEEEcalcleftmargin{\IEEElabelindent}%
+ \fi}\fi\fi}%
+
+
+% DO NOT ALLOW BLANK LINES TO BE IN THESE IED ENVIRONMENTS
+% AS THIS WILL FORCE NEW PARAGRAPHS AFTER THE IED LISTS
+% IEEEtran enumerate list MDS 1/2001
+% Note controlled spacing here, shield end of lines with %
+\def\@@IEEEenumerate[#1]{%
+ \ifnum\@enumdepth>3\relax\@toodeep\else%
+ \ifnum\@listdepth>5\relax\@toodeep\else%
+ \advance\@enumdepth\@ne%
+ \edef\@enumctr{enum\romannumeral\the\@enumdepth}%
+ % get the labelindentfactor for this level
+ \advance\@listdepth\@ne% we need to know what the level WILL be
+ \edef\IEEElabelindentfactor{\csname IEEElabelindentfactor\romannumeral\the\@listdepth\endcsname}%
+ \advance\@listdepth-\@ne% undo our increment
+ \def\@IEEEiedjustify{2}% right justified labels are default
+ % set other defaults
+ \IEEEnocalcleftmarginfalse%
+ \IEEEnolabelindentfactorfalse%
+ \topsep\IEEEiedtopsep%
+ \IEEElabelindent\IEEEelabelindent%
+ \labelsep\IEEEiednormlabelsep%
+ \partopsep 0ex%
+ \parsep 0ex%
+ \itemsep 0ex%
+ \rightmargin 0em%
+ \listparindent 0em%
+ \itemindent 0em%
+ % calculate the label width
+ % We'll set it to the width suitable for all labels using
+ % normalfont 1) to 9)
+ % The user can override this later
+ \settowidth{\labelwidth}{9)}%
+ \@IEEEsavelistparams% save our list parameters
+ \list{\csname label\@enumctr\endcsname}{\usecounter{\@enumctr}%
+ \@IEEErestorelistparams% override any list{} changes
+ % to our globals
+ \let\makelabel\@IEEEiedmakelabel% v1.6b setup \makelabel
+ \IEEEiedlistdecl% let user alter parameters
+ #1\relax%
+ % If the user has requested not to use the
+ % IEEElabelindent factor, don't revise \IEEElabelindent
+ \ifIEEEnolabelindentfactor\relax%
+ \else\IEEElabelindent=\IEEElabelindentfactor\IEEElabelindent%
+ \fi%
+ % Unless the user has requested otherwise,
+ % calculate our left margin based
+ % on \IEEElabelindent, \labelwidth and
+ % \labelsep
+ \ifIEEEnocalcleftmargin\relax%
+ \else\IEEEcalcleftmargin{\IEEElabelindent}%
+ \fi}\fi\fi}%
+
+
+% DO NOT ALLOW BLANK LINES TO BE IN THESE IED ENVIRONMENTS
+% AS THIS WILL FORCE NEW PARAGRAPHS
AFTER THE IED LISTS +% IEEEtran description list MDS 1/2001 +% Note controlled spacing here, shield end of lines with % +\def\@@IEEEdescription[#1]{% + \ifnum\@listdepth>5\relax\@toodeep\else% + % get the labelindentfactor for this level + \advance\@listdepth\@ne% we need to know what the level WILL be + \edef\IEEElabelindentfactor{\csname IEEElabelindentfactor\romannumeral\the\@listdepth\endcsname}% + \advance\@listdepth-\@ne% undo our increment + \def\@IEEEiedjustify{0}% left justified labels are default + % set other defaults + \IEEEnocalcleftmarginfalse% + \IEEEnolabelindentfactorfalse% + \topsep\IEEEiedtopsep% + \IEEElabelindent\IEEEdlabelindent% + % assume normal labelsep + \labelsep\IEEEiednormlabelsep% + \partopsep 0ex% + \parsep 0ex% + \itemsep 0ex% + \rightmargin 0em% + \listparindent 0em% + \itemindent 0em% + % Bogus label width in case the user forgets + % to set it. + % TIP: If you want to see what a variable's width is you + % can use the TeX command \showthe\width-variable to + % display it on the screen during compilation + % (This might be helpful to know when you need to find out + % which label is the widest) + \settowidth{\labelwidth}{Hello}% + \@IEEEsavelistparams% save our list parameters + \list{}{\@IEEErestorelistparams% override any list{} changes + % to our globals + \let\makelabel\@IEEEiedmakelabel% v1.6b setup \makelabel + \IEEEiedlistdecl% let user alter parameters + #1\relax% + % If the user has requested not to use the + % labelindent factor, don't revise \IEEElabelindent + \ifIEEEnolabelindentfactor\relax% + \else\IEEElabelindent=\IEEElabelindentfactor\IEEElabelindent% + \fi% + % Unless the user has requested otherwise, + % calculate our left margin based + % on \IEEElabelindent, \labelwidth and + % \labelsep + \ifIEEEnocalcleftmargin\relax% + \else\IEEEcalcleftmargin{\IEEElabelindent}\relax% + \fi}\fi} + +% v1.6b we use one makelabel that does justification as needed. +\def\@IEEEiedmakelabel#1{\relax\if\@IEEEiedjustify 0\relax +\makebox[\labelwidth][l]{\normalfont #1}\else +\if\@IEEEiedjustify 1\relax +\makebox[\labelwidth][c]{\normalfont #1}\else +\makebox[\labelwidth][r]{\normalfont #1}\fi\fi} + + +% VERSE and QUOTE +% V1.7 define environments with newenvironment +\newenvironment{verse}{\let\\=\@centercr + \list{}{\itemsep\z@ \itemindent -1.5em \listparindent \itemindent + \rightmargin\leftmargin\advance\leftmargin 1.5em}\item\relax} + {\endlist} +\newenvironment{quotation}{\list{}{\listparindent 1.5em \itemindent\listparindent + \rightmargin\leftmargin \parsep 0pt plus 1pt}\item\relax} + {\endlist} +\newenvironment{quote}{\list{}{\rightmargin\leftmargin}\item\relax} + {\endlist} + + +% \titlepage +% provided only for backward compatibility. \maketitle is the correct +% way to create the title page. +\newif\if@restonecol +\def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn + \else \newpage \fi \thispagestyle{empty}\c@page\z@} +\def\endtitlepage{\if@restonecol\twocolumn \else \newpage \fi} + +% standard values from article.cls +\arraycolsep 5pt +\arrayrulewidth .4pt +\doublerulesep 2pt + +\tabcolsep 6pt +\tabbingsep 0.5em + + +%% FOOTNOTES +% +%\skip\footins 10pt plus 4pt minus 2pt +% V1.6 respond to changes in font size +% space added above the footnotes (if present) +\skip\footins 0.9\baselineskip plus 0.4\baselineskip minus 0.2\baselineskip + +% V1.6, we need to make \footnotesep responsive to changes +% in \baselineskip or strange spacings will result when in +% draft mode. 
Here is a little LaTeX secret - \footnotesep
+% determines the height of an invisible strut that is placed
+% *above* the baseline of footnotes after the first. Since
+% LaTeX considers the space for characters to be 0.7\baselineskip
+% above the baseline and 0.3\baselineskip below it, we need to
+% use 0.7\baselineskip as a \footnotesep to maintain equal spacing
+% between all the lines of the footnotes. IEEE often uses a tad
+% more, so use 0.8\baselineskip. This slightly larger value also helps
+% the text to clear the footnote marks. Note that \thanks in IEEEtran
+% uses its own value of \footnotesep which is set in \maketitle.
+{\footnotesize
+\global\footnotesep 0.8\baselineskip}
+
+
+\skip\@mpfootins = \skip\footins
+\fboxsep = 3pt
+\fboxrule = .4pt
+% V1.6 use 1em, then use LaTeX2e's \@makefnmark
+% Note that IEEE normally *left* aligns the footnote marks, so we don't need
+% box resizing tricks here.
+\long\def\@makefntext#1{\parindent 1em\indent\hbox{\@makefnmark}#1}% V1.6 use 1em
+% V1.7 compsoc does not use superscripts for footnote marks
+\ifCLASSOPTIONcompsoc
+\def\@IEEEcompsocmakefnmark{\hbox{\normalfont\@thefnmark.\ }}
+\long\def\@makefntext#1{\parindent 1em\indent\hbox{\@IEEEcompsocmakefnmark}#1}
+\fi
+
+% IEEE does not use footnote rules
+\def\footnoterule{}
+
+% V1.7 for compsoc, IEEE uses a footnote rule only for \thanks. We devise a "one-shot"
+% system to implement this.
+\newif\if@IEEEenableoneshotfootnoterule
+\@IEEEenableoneshotfootnoterulefalse
+\ifCLASSOPTIONcompsoc
+\def\footnoterule{\relax\if@IEEEenableoneshotfootnoterule
+\kern-5pt
+\hbox to \columnwidth{\hfill\vrule width 0.5\columnwidth height 0.4pt\hfill}
+\kern4.6pt
+\global\@IEEEenableoneshotfootnoterulefalse
+\else
+\relax
+\fi}
+\fi
+
+% V1.6 do not allow LaTeX to break a footnote across multiple pages
+\interfootnotelinepenalty=10000
+
+% V1.6 discourage breaks within equations
+% Note that amsmath normally sets this to 10000,
+% but LaTeX2e normally uses 100.
+\interdisplaylinepenalty=2500
+
+% default allows section depth up to /paragraph
+\setcounter{secnumdepth}{4}
+
+% technotes do not allow /paragraph
+\ifCLASSOPTIONtechnote
+ \setcounter{secnumdepth}{3}
+\fi
+% neither do compsoc conferences
+\@IEEEcompsocconfonly{\setcounter{secnumdepth}{3}}
+
+
+\newcounter{section}
+\newcounter{subsection}[section]
+\newcounter{subsubsection}[subsection]
+\newcounter{paragraph}[subsubsection]
+
+% used only by IEEEtran's IEEEeqnarray as other packages may
+% have their own, different, implementations
+\newcounter{IEEEsubequation}[equation]
+
+% as shown when called by user from \ref, \label and in table of contents
+\def\theequation{\arabic{equation}} % 1
+\def\theIEEEsubequation{\theequation\alph{IEEEsubequation}} % 1a (used only by IEEEtran's IEEEeqnarray)
+\ifCLASSOPTIONcompsoc
+% compsoc is all arabic
+\def\thesection{\arabic{section}}
+\def\thesubsection{\thesection.\arabic{subsection}}
+\def\thesubsubsection{\thesubsection.\arabic{subsubsection}}
+\def\theparagraph{\thesubsubsection.\arabic{paragraph}}
+\else
+\def\thesection{\Roman{section}} % I
+% V1.7, \mbox prevents breaks around -
+\def\thesubsection{\mbox{\thesection-\Alph{subsection}}} % I-A
+% V1.7 use I-A1 format used by IEEE rather than I-A.1
+\def\thesubsubsection{\thesubsection\arabic{subsubsection}} % I-A1
+\def\theparagraph{\thesubsubsection\alph{paragraph}} % I-A1a
+\fi
+
+% From Heiko Oberdiek. Because of the \mbox in \thesubsection, we need to
+% tell hyperref to disable the \mbox command when making PDF bookmarks.
+% This is done already with hyperref.sty version 6.74o and later, but
+% it will not hurt to do it here again for users of older versions.
+\@ifundefined{pdfstringdefPreHook}{\let\pdfstringdefPreHook\@empty}{}%
+\g@addto@macro\pdfstringdefPreHook{\let\mbox\relax}
+
+
+% Main text forms (how shown in main text headings)
+% V1.6, using \thesection in \thesectiondis allows changes
+% in the former to automatically appear in the latter
+\ifCLASSOPTIONcompsoc
+ \ifCLASSOPTIONconference% compsoc conference
+ \def\thesectiondis{\thesection.}
+ \def\thesubsectiondis{\thesectiondis\arabic{subsection}.}
+ \def\thesubsubsectiondis{\thesubsectiondis\arabic{subsubsection}.}
+ \def\theparagraphdis{\thesubsubsectiondis\arabic{paragraph}.}
+ \else% compsoc not conference
+ \def\thesectiondis{\thesection}
+ \def\thesubsectiondis{\thesectiondis.\arabic{subsection}}
+ \def\thesubsubsectiondis{\thesubsectiondis.\arabic{subsubsection}}
+ \def\theparagraphdis{\thesubsubsectiondis.\arabic{paragraph}}
+ \fi
+\else% not compsoc
+ \def\thesectiondis{\thesection.} % I.
+ \def\thesubsectiondis{\Alph{subsection}.} % B.
+ \def\thesubsubsectiondis{\arabic{subsubsection})} % 3)
+ \def\theparagraphdis{\alph{paragraph})} % d)
+\fi
+
+% just like LaTeX2e's \@eqnnum
+\def\theequationdis{{\normalfont \normalcolor (\theequation)}}% (1)
+% IEEEsubequation used only by IEEEtran's IEEEeqnarray
+\def\theIEEEsubequationdis{{\normalfont \normalcolor (\theIEEEsubequation)}}% (1a)
+% redirect LaTeX2e's equation number display and all that depend on
+% it, through IEEEtran's \theequationdis
+\def\@eqnnum{\theequationdis}
+
+
+
+% V1.7 provide string macros as article.cls does
+\def\contentsname{Contents}
+\def\listfigurename{List of Figures}
+\def\listtablename{List of Tables}
+\def\refname{References}
+\def\indexname{Index}
+\def\figurename{Fig.}
+\def\tablename{TABLE}
+\@IEEEcompsocconfonly{\def\figurename{Figure}\def\tablename{Table}}
+\def\partname{Part}
+\def\appendixname{Appendix}
+\def\abstractname{Abstract}
+% IEEE specific names
+\def\IEEEkeywordsname{Index Terms}
+\def\IEEEproofname{Proof}
+
+
+% LIST OF FIGURES AND TABLES AND TABLE OF CONTENTS
+%
+\def\@pnumwidth{1.55em}
+\def\@tocrmarg{2.55em}
+\def\@dotsep{4.5}
+\setcounter{tocdepth}{3}
+
+% adjusted some spacings here so that section numbers will not easily
+% collide with the section titles.
+% VIII; VIII-A; and VIII-A.1 are usually the worst offenders.
+% MDS 1/2001
+\def\tableofcontents{\section*{\contentsname}\@starttoc{toc}}
+\def\l@section#1#2{\addpenalty{\@secpenalty}\addvspace{1.0em plus 1pt}%
+ \@tempdima 2.75em \begingroup \parindent \z@ \rightskip \@pnumwidth%
+ \parfillskip-\@pnumwidth {\bfseries\leavevmode #1}\hfil\hbox to\@pnumwidth{\hss #2}\par%
+ \endgroup}
+% argument format: #1:level, #2:indent, #3:numwidth
+\def\l@subsection{\@dottedtocline{2}{2.75em}{3.75em}}
+\def\l@subsubsection{\@dottedtocline{3}{6.5em}{4.5em}}
+% must provide \l@ defs for ALL sublevels EVEN if tocdepth
+% is such that they will not appear in the table of contents
+% these defs are how TOC knows what level these things are!
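+% (For reference, \@dottedtocline's arguments are {level}{indent}{numwidth};
+% e.g., \l@subsection above indents its entries 2.75em from the margin and
+% reserves 3.75em for a "VIII-A" style number. The deeper levels below
+% follow the same pattern.)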
+\def\l@paragraph{\@dottedtocline{4}{6.5em}{5.5em}}
+\def\l@subparagraph{\@dottedtocline{5}{6.5em}{6.5em}}
+\def\listoffigures{\section*{\listfigurename}\@starttoc{lof}}
+\def\l@figure{\@dottedtocline{1}{0em}{2.75em}}
+\def\listoftables{\section*{\listtablename}\@starttoc{lot}}
+\let\l@table\l@figure
+
+
+%% Definitions for floats
+%%
+%% Normal Floats
+\floatsep 1\baselineskip plus 0.2\baselineskip minus 0.2\baselineskip
+\textfloatsep 1.7\baselineskip plus 0.2\baselineskip minus 0.4\baselineskip
+\@fptop 0pt plus 1fil
+\@fpsep 0.75\baselineskip plus 2fil
+\@fpbot 0pt plus 1fil
+\def\topfraction{0.9}
+\def\bottomfraction{0.4}
+\def\floatpagefraction{0.8}
+% V1.7, let top floats approach 90% of page
+\def\textfraction{0.1}
+
+%% Double Column Floats
+\dblfloatsep 1\baselineskip plus 0.2\baselineskip minus 0.2\baselineskip
+
+\dbltextfloatsep 1.7\baselineskip plus 0.2\baselineskip minus 0.4\baselineskip
+% Note that it would be nice if the rubber here actually worked in LaTeX2e.
+% There is a long-standing limitation in LaTeX, first discovered (to the best
+% of my knowledge) by Alan Jeffrey in 1992. LaTeX ignores the stretchable
+% portion of \dbltextfloatsep, and as a result, double column figures can and
+% do result in a non-integer number of lines in the main text columns with
+% underfull vbox errors as a consequence. A post to comp.text.tex
+% by Donald Arseneau confirms that this had not yet been fixed in 1998.
+% IEEEtran V1.6 will fix this problem for you in the titles, but it doesn't
+% protect you from other double floats. Happy vspace'ing.
+
+\@dblfptop 0pt plus 1fil
+\@dblfpsep 0.75\baselineskip plus 2fil
+\@dblfpbot 0pt plus 1fil
+\def\dbltopfraction{0.8}
+\def\dblfloatpagefraction{0.8}
+\setcounter{dbltopnumber}{4}
+
+\intextsep 1\baselineskip plus 0.2\baselineskip minus 0.2\baselineskip
+\setcounter{topnumber}{2}
+\setcounter{bottomnumber}{2}
+\setcounter{totalnumber}{4}
+
+
+
+% article class provides these, we should too.
+\newlength\abovecaptionskip
+\newlength\belowcaptionskip
+% but only \abovecaptionskip is used above figure captions and *below* table
+% captions
+\setlength\abovecaptionskip{0.5\baselineskip}
+\setlength\belowcaptionskip{0pt}
+% V1.6 create hooks in case the caption spacing ever needs to be
+% overridden by a user
+\def\@IEEEfigurecaptionsepspace{\vskip\abovecaptionskip\relax}%
+\def\@IEEEtablecaptionsepspace{\vskip\abovecaptionskip\relax}%
+
+
+% 1.6b revise caption system so that \@makecaption uses two arguments
+% as with LaTeX2e. Otherwise, there will be problems when using hyperref.
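+% (For context: LaTeX's \caption machinery calls \@makecaption with the
+% formatted float number and the caption text, effectively e.g.
+% \@makecaption{Fig.~1}{An example caption.} - the two-argument form
+% assumed by the definitions below.)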
+\def\@IEEEtablestring{table}
+
+\ifCLASSOPTIONcompsoc
+% V1.7 compsoc \@makecaption
+\ifCLASSOPTIONconference% compsoc conference
+\long\def\@makecaption#1#2{%
+% test if this is for a figure or a table
+\ifx\@captype\@IEEEtablestring%
+% if a table, do table caption
+\normalsize\begin{center}{\normalfont\sffamily\normalsize {#1.}~ #2}\end{center}%
+\@IEEEtablecaptionsepspace
+% if not a table, format it as a figure
+\else
+\@IEEEfigurecaptionsepspace
+\setbox\@tempboxa\hbox{\normalfont\sffamily\normalsize {#1.}~ #2}%
+\ifdim \wd\@tempboxa >\hsize%
+% if caption is longer than a line, let it wrap around
+\setbox\@tempboxa\hbox{\normalfont\sffamily\normalsize {#1.}~ }%
+\parbox[t]{\hsize}{\normalfont\sffamily\normalsize \noindent\unhbox\@tempboxa#2}%
+% if caption is shorter than a line, center
+\else%
+\hbox to\hsize{\normalfont\sffamily\normalsize\hfil\box\@tempboxa\hfil}%
+\fi\fi}
+\else% nonconference compsoc
+\long\def\@makecaption#1#2{%
+% test if this is for a figure or a table
+\ifx\@captype\@IEEEtablestring%
+% if a table, do table caption
+\normalsize\begin{center}{\normalfont\sffamily\normalsize #1}\\{\normalfont\sffamily\normalsize #2}\end{center}%
+\@IEEEtablecaptionsepspace
+% if not a table, format it as a figure
+\else
+\@IEEEfigurecaptionsepspace
+\setbox\@tempboxa\hbox{\normalfont\sffamily\normalsize {#1.}~ #2}%
+\ifdim \wd\@tempboxa >\hsize%
+% if caption is longer than a line, let it wrap around
+\setbox\@tempboxa\hbox{\normalfont\sffamily\normalsize {#1.}~ }%
+\parbox[t]{\hsize}{\normalfont\sffamily\normalsize \noindent\unhbox\@tempboxa#2}%
+% if caption is shorter than a line, left justify
+\else%
+\hbox to\hsize{\normalfont\sffamily\normalsize\box\@tempboxa\hfil}%
+\fi\fi}
+\fi
+
+\else% traditional noncompsoc \@makecaption
+\long\def\@makecaption#1#2{%
+% test if this is for a figure or a table
+\ifx\@captype\@IEEEtablestring%
+% if a table, do table caption
+\footnotesize\begin{center}{\normalfont\footnotesize #1}\\{\normalfont\footnotesize\scshape #2}\end{center}%
+\@IEEEtablecaptionsepspace
+% if not a table, format it as a figure
+\else
+\@IEEEfigurecaptionsepspace
+% 3/2001 use footnotesize, not small; use two nonbreaking spaces, not one
+\setbox\@tempboxa\hbox{\normalfont\footnotesize {#1.}~~ #2}%
+\ifdim \wd\@tempboxa >\hsize%
+% if caption is longer than a line, let it wrap around
+\setbox\@tempboxa\hbox{\normalfont\footnotesize {#1.}~~ }%
+\parbox[t]{\hsize}{\normalfont\footnotesize\noindent\unhbox\@tempboxa#2}%
+% if caption is shorter than a line, center if conference, left justify otherwise
+\else%
+\ifCLASSOPTIONconference \hbox to\hsize{\normalfont\footnotesize\hfil\box\@tempboxa\hfil}%
+\else \hbox to\hsize{\normalfont\footnotesize\box\@tempboxa\hfil}%
+\fi\fi\fi}
+\fi
+
+
+
+% V1.7 disable captions class option, do so in a way that retains operation of \label
+% within \caption
+\ifCLASSOPTIONcaptionsoff
+\long\def\@makecaption#1#2{\vspace*{2em}\footnotesize\begin{center}{\footnotesize #1}\end{center}%
+\let\@IEEEtemporiglabeldefsave\label
+\let\@IEEEtemplabelargsave\relax
+\def\label##1{\gdef\@IEEEtemplabelargsave{##1}}%
+\setbox\@tempboxa\hbox{#2}%
+\let\label\@IEEEtemporiglabeldefsave
+\ifx\@IEEEtemplabelargsave\relax\else\label{\@IEEEtemplabelargsave}\fi}
+\fi
+
+
+% V1.7 define end environments with \def not \let so as to work OK with
+% preview-latex
+\newcounter{figure}
+\def\thefigure{\@arabic\c@figure}
+\def\fps@figure{tbp}
+\def\ftype@figure{1}
+\def\ext@figure{lof}
+\def\fnum@figure{\figurename~\thefigure}
+\def\figure{\@float{figure}}
+\def\endfigure{\end@float}
+\@namedef{figure*}{\@dblfloat{figure}}
+\@namedef{endfigure*}{\end@dblfloat}
+\newcounter{table}
+\ifCLASSOPTIONcompsoc
+\def\thetable{\arabic{table}}
+\else
+\def\thetable{\@Roman\c@table}
+\fi
+\def\fps@table{tbp}
+\def\ftype@table{2}
+\def\ext@table{lot}
+\def\fnum@table{\tablename~\thetable}
+% V1.6 IEEE uses 8pt text for tables
+% to default to footnotesize, we hack into LaTeX2e's \@floatboxreset and pray
+\def\table{\def\@floatboxreset{\reset@font\footnotesize\@setminipage}\@float{table}}
+\def\endtable{\end@float}
+% v1.6b double column tables need to default to footnotesize as well.
+\@namedef{table*}{\def\@floatboxreset{\reset@font\footnotesize\@setminipage}\@dblfloat{table}}
+\@namedef{endtable*}{\end@dblfloat}
+
+
+
+
+%%
+%% START OF IEEEeqnarray DEFINITIONS
+%%
+%% Inspired by the concepts, examples, and previous works of LaTeX
+%% coders and developers such as Donald Arseneau, Fred Bartlett,
+%% David Carlisle, Tony Liu, Frank Mittelbach, Piet van Oostrum,
+%% Roland Winkler and Mark Wooding.
+%% I don't make the claim that my work here is even near their calibre. ;)
+
+
+% hook to allow easy changeover to IEEEtran.cls/tools.sty error reporting
+\def\@IEEEclspkgerror{\ClassError{IEEEtran}}
+
+\newif\if@IEEEeqnarraystarform% flag to indicate if the environment was called as the star form
+\@IEEEeqnarraystarformfalse
+
+\newif\if@advanceIEEEeqncolcnt% tracks if the environment should advance the col counter
+% allows a way to make an \IEEEeqnarraybox that can be used within an \IEEEeqnarray
+% used by IEEEeqnarraymulticol so that it can work properly in both
+\@advanceIEEEeqncolcnttrue
+
+\newcount\@IEEEeqnnumcols % tracks how many IEEEeqnarray cols are defined
+\newcount\@IEEEeqncolcnt % tracks how many IEEEeqnarray cols the user actually used
+
+
+% The default math style used by the columns
+\def\IEEEeqnarraymathstyle{\displaystyle}
+% The default text style used by the columns
+% default to using the current font
+\def\IEEEeqnarraytextstyle{\relax}
+
+% like the iedlistdecl but for \IEEEeqnarray
+\def\IEEEeqnarraydecl{\relax}
+\def\IEEEeqnarrayboxdecl{\relax}
+
+% \yesnumber is the opposite of \nonumber
+% a novel concept with the same def as the equationarray package
+% However, we give IEEE versions too since some LaTeX packages such as
+% the MDWtools mathenv.sty redefine \nonumber to something else.
+\providecommand{\yesnumber}{\global\@eqnswtrue}
+\def\IEEEyesnumber{\global\@eqnswtrue}
+\def\IEEEnonumber{\global\@eqnswfalse}
+
+
+\def\IEEEyessubnumber{\global\@IEEEissubequationtrue\global\@eqnswtrue%
+\if@IEEEeqnarrayISinner% only do something inside an IEEEeqnarray
+\if@IEEElastlinewassubequation\addtocounter{equation}{-1}\else\setcounter{IEEEsubequation}{1}\fi%
+\def\@currentlabel{\p@IEEEsubequation\theIEEEsubequation}\fi}
+
+% flag to indicate that an equation is a sub equation
+\newif\if@IEEEissubequation%
+\@IEEEissubequationfalse
+
+% allows users to "push away" equations that get too close to the equation numbers
+\def\IEEEeqnarraynumspace{\hphantom{\if@IEEEissubequation\theIEEEsubequationdis\else\theequationdis\fi}}
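+
+% For example (a usage sketch): within an IEEEeqnarray, ending a row with
+% \IEEEyessubnumber numbers it in (1a), (1b), ... style, and \IEEEnonumber
+% suppresses that row's number; the star form of the environment
+% suppresses numbering by default.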
+
+% provides a way to span multiple columns within IEEEeqnarray environments
+% will consider \if@advanceIEEEeqncolcnt before globally advancing the
+% column counter - so as to work within \IEEEeqnarraybox
+% usage: \IEEEeqnarraymulticol{number cols. to span}{col type}{cell text}
+\long\def\IEEEeqnarraymulticol#1#2#3{\multispan{#1}%
+% check if column is defined
+\relax\expandafter\ifx\csname @IEEEeqnarraycolDEF#2\endcsname\@IEEEeqnarraycolisdefined%
+\csname @IEEEeqnarraycolPRE#2\endcsname#3\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\csname @IEEEeqnarraycolPOST#2\endcsname%
+\else% if not, error and use default type
+\@IEEEclspkgerror{Invalid column type "#2" in \string\IEEEeqnarraymulticol.\MessageBreak
+Using a default centering column instead}%
+{You must define IEEEeqnarray column types before use.}%
+\csname @IEEEeqnarraycolPRE@IEEEdefault\endcsname#3\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\csname @IEEEeqnarraycolPOST@IEEEdefault\endcsname%
+\fi%
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by #1\relax\fi}
+
+% like \omit, but maintains track of the column counter for \IEEEeqnarray
+\def\IEEEeqnarrayomit{\omit\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by 1\relax\fi}
+
+
+% provides a way to define a letter referenced column type
+% usage: \IEEEeqnarraydefcol{col. type letter/name}{pre insertion text}{post insertion text}
+\def\IEEEeqnarraydefcol#1#2#3{\expandafter\def\csname @IEEEeqnarraycolPRE#1\endcsname{#2}%
+\expandafter\def\csname @IEEEeqnarraycolPOST#1\endcsname{#3}%
+\expandafter\def\csname @IEEEeqnarraycolDEF#1\endcsname{1}}
+
+
+% provides a way to define a numerically referenced inter-column glue type
+% usage: \IEEEeqnarraydefcolsep{col. glue number}{glue definition}
+\def\IEEEeqnarraydefcolsep#1#2{\expandafter\def\csname @IEEEeqnarraycolSEP\romannumeral #1\endcsname{#2}%
+\expandafter\def\csname @IEEEeqnarraycolSEPDEF\romannumeral #1\endcsname{1}}
+
+
+\def\@IEEEeqnarraycolisdefined{1}% just a macro for 1, used for checking undefined column types
+
+
+% expands and appends the given argument to the \@IEEEtrantmptoksA token list
+% used to build up the \halign preamble
+\def\@IEEEappendtoksA#1{\edef\@@IEEEappendtoksA{\@IEEEtrantmptoksA={\the\@IEEEtrantmptoksA #1}}%
+\@@IEEEappendtoksA}
+
+% also appends to \@IEEEtrantmptoksA, but does not expand the argument
+% uses \toks8 as a scratchpad register
+\def\@IEEEappendNOEXPANDtoksA#1{\toks8={#1}%
+\edef\@@IEEEappendNOEXPANDtoksA{\@IEEEtrantmptoksA={\the\@IEEEtrantmptoksA\the\toks8}}%
+\@@IEEEappendNOEXPANDtoksA}
+
+% define some common column types for the user
+% math
+\IEEEeqnarraydefcol{l}{$\IEEEeqnarraymathstyle}{$\hfil}
+\IEEEeqnarraydefcol{c}{\hfil$\IEEEeqnarraymathstyle}{$\hfil}
+\IEEEeqnarraydefcol{r}{\hfil$\IEEEeqnarraymathstyle}{$}
+\IEEEeqnarraydefcol{L}{$\IEEEeqnarraymathstyle{}}{{}$\hfil}
+\IEEEeqnarraydefcol{C}{\hfil$\IEEEeqnarraymathstyle{}}{{}$\hfil}
+\IEEEeqnarraydefcol{R}{\hfil$\IEEEeqnarraymathstyle{}}{{}$}
+% text
+\IEEEeqnarraydefcol{s}{\IEEEeqnarraytextstyle}{\hfil}
+\IEEEeqnarraydefcol{t}{\hfil\IEEEeqnarraytextstyle}{\hfil}
+\IEEEeqnarraydefcol{u}{\hfil\IEEEeqnarraytextstyle}{}
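+
+% For example (a hypothetical extra column type following the same
+% pattern as the built-ins above): a centered italic text column
+% could be defined as
+%   \IEEEeqnarraydefcol{T}{\hfil\itshape}{\hfil}
+% and then used like any other column letter in the cols argument.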
+
+% vertical rules
+\IEEEeqnarraydefcol{v}{}{\vrule width\arrayrulewidth}
+\IEEEeqnarraydefcol{vv}{\vrule width\arrayrulewidth\hfil}{\hfil\vrule width\arrayrulewidth}
+\IEEEeqnarraydefcol{V}{}{\vrule width\arrayrulewidth\hskip\doublerulesep\vrule width\arrayrulewidth}
+\IEEEeqnarraydefcol{VV}{\vrule width\arrayrulewidth\hskip\doublerulesep\vrule width\arrayrulewidth\hfil}%
+{\hfil\vrule width\arrayrulewidth\hskip\doublerulesep\vrule width\arrayrulewidth}
+
+% horizontal rules
+\IEEEeqnarraydefcol{h}{}{\leaders\hrule height\arrayrulewidth\hfil}
+\IEEEeqnarraydefcol{H}{}{\leaders\vbox{\hrule width\arrayrulewidth\vskip\doublerulesep\hrule width\arrayrulewidth}\hfil}
+
+% plain
+\IEEEeqnarraydefcol{x}{}{}
+\IEEEeqnarraydefcol{X}{$}{$}
+
+% the default column type to use in the event a column type is not defined
+\IEEEeqnarraydefcol{@IEEEdefault}{\hfil$\IEEEeqnarraymathstyle}{$\hfil}
+
+
+% a zero tabskip (used for "-" col types)
+\def\@IEEEeqnarraycolSEPzero{0pt plus 0pt minus 0pt}
+% a centering tabskip (used for "+" col types)
+\def\@IEEEeqnarraycolSEPcenter{1000pt plus 0pt minus 1000pt}
+
+% top level default tabskip glues for the start, end, and inter-column
+% may be reset within environments not always at the top level, e.g., \IEEEeqnarraybox
+\edef\@IEEEeqnarraycolSEPdefaultstart{\@IEEEeqnarraycolSEPcenter}% default start glue
+\edef\@IEEEeqnarraycolSEPdefaultend{\@IEEEeqnarraycolSEPcenter}% default end glue
+\edef\@IEEEeqnarraycolSEPdefaultmid{\@IEEEeqnarraycolSEPzero}% default inter-column glue
+
+
+
+% creates a vertical rule that extends from the bottom to the top of a cell
+% Provided in case other packages redefine \vline some other way.
+% usage: \IEEEeqnarrayvrule[rule thickness]
+% If no argument is provided, \arrayrulewidth will be used for the rule thickness.
+\newcommand\IEEEeqnarrayvrule[1][\arrayrulewidth]{\vrule\@width#1\relax}
+
+% creates a blank separator row
+% usage: \IEEEeqnarrayseprow[separation length][font size commands]
+% default is \IEEEeqnarrayseprow[0.25\normalbaselineskip][\relax]
+% blank arguments inherit the default values
+% uses \skip5 as a scratch register - calls \@IEEEeqnarraystrutsize which uses more scratch registers
+\def\IEEEeqnarrayseprow{\relax\@ifnextchar[{\@IEEEeqnarrayseprow}{\@IEEEeqnarrayseprow[0.25\normalbaselineskip]}}
+\def\@IEEEeqnarrayseprow[#1]{\relax\@ifnextchar[{\@@IEEEeqnarrayseprow[#1]}{\@@IEEEeqnarrayseprow[#1][\relax]}}
+\def\@@IEEEeqnarrayseprow[#1][#2]{\def\@IEEEeqnarrayseprowARGONE{#1}%
+\ifx\@IEEEeqnarrayseprowARGONE\@empty%
+% get the skip value, based on the font commands
+% use skip5 because \IEEEeqnarraystrutsize uses \skip0, \skip2, \skip3
+% assign within a bogus box to confine the font changes
+{\setbox0=\hbox{#2\relax\global\skip5=0.25\normalbaselineskip}}%
+\else%
+{\setbox0=\hbox{#2\relax\global\skip5=#1}}%
+\fi%
+\@IEEEeqnarrayhoptolastcolumn\IEEEeqnarraystrutsize{\skip5}{0pt}[\relax]\relax}
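+
+% For example (a usage sketch): between two rows of an IEEEeqnarray,
+%   \\ \IEEEeqnarrayseprow[0.5\normalbaselineskip] \\
+% inserts half a normal line of extra vertical separation.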
+
+% creates a blank separator row, but omits all the column templates
+% usage: \IEEEeqnarrayseprowcut[separation length][font size commands]
+% default is \IEEEeqnarrayseprowcut[0.25\normalbaselineskip][\relax]
+% blank arguments inherit the default values
+% uses \skip5 as a scratch register - calls \@IEEEeqnarraystrutsize which uses more scratch registers
+\def\IEEEeqnarrayseprowcut{\multispan{\@IEEEeqnnumcols}\relax% span all the cols
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi%
+\@ifnextchar[{\@IEEEeqnarrayseprowcut}{\@IEEEeqnarrayseprowcut[0.25\normalbaselineskip]}}
+\def\@IEEEeqnarrayseprowcut[#1]{\relax\@ifnextchar[{\@@IEEEeqnarrayseprowcut[#1]}{\@@IEEEeqnarrayseprowcut[#1][\relax]}}
+\def\@@IEEEeqnarrayseprowcut[#1][#2]{\def\@IEEEeqnarrayseprowARGONE{#1}%
+\ifx\@IEEEeqnarrayseprowARGONE\@empty%
+% get the skip value, based on the font commands
+% use skip5 because \IEEEeqnarraystrutsize uses \skip0, \skip2, \skip3
+% assign within a bogus box to confine the font changes
+{\setbox0=\hbox{#2\relax\global\skip5=0.25\normalbaselineskip}}%
+\else%
+{\setbox0=\hbox{#2\relax\global\skip5=#1}}%
+\fi%
+\IEEEeqnarraystrutsize{\skip5}{0pt}[\relax]\relax}
+
+
+
+% draws a single rule across all the columns. The optional
+% argument determines the rule width; \arrayrulewidth is the default.
+% updates column counter as needed and turns off struts
+% usage: \IEEEeqnarrayrulerow[rule line thickness]
+\def\IEEEeqnarrayrulerow{\multispan{\@IEEEeqnnumcols}\relax% span all the cols
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi%
+\@ifnextchar[{\@IEEEeqnarrayrulerow}{\@IEEEeqnarrayrulerow[\arrayrulewidth]}}
+\def\@IEEEeqnarrayrulerow[#1]{\leaders\hrule height#1\hfil\relax% put in our rule
+% turn off any struts
+\IEEEeqnarraystrutsize{0pt}{0pt}[\relax]\relax}
+
+
+% draws a double rule by using a single rule row, a separator row, and then
+% another single rule row
+% first optional argument determines the rule thicknesses, \arrayrulewidth is the default
+% second optional argument determines the rule spacing, \doublerulesep is the default
+% usage: \IEEEeqnarraydblrulerow[rule line thickness][rule spacing]
+\def\IEEEeqnarraydblrulerow{\multispan{\@IEEEeqnnumcols}\relax% span all the cols
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi%
+\@ifnextchar[{\@IEEEeqnarraydblrulerow}{\@IEEEeqnarraydblrulerow[\arrayrulewidth]}}
+\def\@IEEEeqnarraydblrulerow[#1]{\relax\@ifnextchar[{\@@IEEEeqnarraydblrulerow[#1]}%
+{\@@IEEEeqnarraydblrulerow[#1][\doublerulesep]}}
+\def\@@IEEEeqnarraydblrulerow[#1][#2]{\def\@IEEEeqnarraydblrulerowARG{#1}%
+% we allow the user to say \IEEEeqnarraydblrulerow[][]
+\ifx\@IEEEeqnarraydblrulerowARG\@empty%
+\@IEEEeqnarrayrulerow[\arrayrulewidth]%
+\else%
+\@IEEEeqnarrayrulerow[#1]\relax%
+\fi%
+\def\@IEEEeqnarraydblrulerowARG{#2}%
+\ifx\@IEEEeqnarraydblrulerowARG\@empty%
+\\\IEEEeqnarrayseprow[\doublerulesep][\relax]%
+\else%
+\\\IEEEeqnarrayseprow[#2][\relax]%
+\fi%
+\\\multispan{\@IEEEeqnnumcols}%
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi%
+\def\@IEEEeqnarraydblrulerowARG{#1}%
+\ifx\@IEEEeqnarraydblrulerowARG\@empty%
+\@IEEEeqnarrayrulerow[\arrayrulewidth]%
+\else%
+\@IEEEeqnarrayrulerow[#1]%
+\fi%
+}
+
+% draws a double rule by using a single rule row, a separator (cutting) row, and then
+% another single rule row
+% first optional argument determines the rule thicknesses, \arrayrulewidth is the default
+% second optional argument determines the rule spacing, \doublerulesep is the default
+% usage: \IEEEeqnarraydblrulerowcut[rule line thickness][rule spacing]
+\def\IEEEeqnarraydblrulerowcut{\multispan{\@IEEEeqnnumcols}\relax% span all the cols
+% advance column counter only if the IEEEeqnarray environment wants it
+\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi%
+\@ifnextchar[{\@IEEEeqnarraydblrulerowcut}{\@IEEEeqnarraydblrulerowcut[\arrayrulewidth]}}
+\def\@IEEEeqnarraydblrulerowcut[#1]{\relax\@ifnextchar[{\@@IEEEeqnarraydblrulerowcut[#1]}%
+{\@@IEEEeqnarraydblrulerowcut[#1][\doublerulesep]}}
+\def\@@IEEEeqnarraydblrulerowcut[#1][#2]{\def\@IEEEeqnarraydblrulerowARG{#1}%
+% we allow the user to say \IEEEeqnarraydblrulerowcut[][]
+\ifx\@IEEEeqnarraydblrulerowARG\@empty%
+\@IEEEeqnarrayrulerow[\arrayrulewidth]% +\else% +\@IEEEeqnarrayrulerow[#1]% +\fi% +\def\@IEEEeqnarraydblrulerowARG{#2}% +\ifx\@IEEEeqnarraydblrulerowARG\@empty% +\\\IEEEeqnarrayseprowcut[\doublerulesep][\relax]% +\else% +\\\IEEEeqnarrayseprowcut[#2][\relax]% +\fi% +\\\multispan{\@IEEEeqnnumcols}% +% advance column counter only if the IEEEeqnarray environment wants it +\if@advanceIEEEeqncolcnt\global\advance\@IEEEeqncolcnt by \@IEEEeqnnumcols\relax\fi% +\def\@IEEEeqnarraydblrulerowARG{#1}% +\ifx\@IEEEeqnarraydblrulerowARG\@empty% +\@IEEEeqnarrayrulerow[\arrayrulewidth]% +\else% +\@IEEEeqnarrayrulerow[#1]% +\fi% +} + + + +% inserts a full row's worth of &'s +% relies on \@IEEEeqnnumcols to provide the correct number of columns +% uses \@IEEEtrantmptoksA, \count0 as scratch registers +\def\@IEEEeqnarrayhoptolastcolumn{\@IEEEtrantmptoksA={}\count0=1\relax% +\loop% add cols if the user did not use them all +\ifnum\count0<\@IEEEeqnnumcols\relax% +\@IEEEappendtoksA{&}% +\advance\count0 by 1\relax% update the col count +\repeat% +\the\@IEEEtrantmptoksA%execute the &'s +} + + + +\newif\if@IEEEeqnarrayISinner % flag to indicate if we are within the lines +\@IEEEeqnarrayISinnerfalse % of an IEEEeqnarray - after the IEEEeqnarraydecl + +\edef\@IEEEeqnarrayTHEstrutheight{0pt} % height and depth of IEEEeqnarray struts +\edef\@IEEEeqnarrayTHEstrutdepth{0pt} + +\edef\@IEEEeqnarrayTHEmasterstrutheight{0pt} % default height and depth of +\edef\@IEEEeqnarrayTHEmasterstrutdepth{0pt} % struts within an IEEEeqnarray + +\edef\@IEEEeqnarrayTHEmasterstrutHSAVE{0pt} % saved master strut height +\edef\@IEEEeqnarrayTHEmasterstrutDSAVE{0pt} % and depth + +\newif\if@IEEEeqnarrayusemasterstrut % flag to indicate that the master strut value +\@IEEEeqnarrayusemasterstruttrue % is to be used + + + +% saves the strut height and depth of the master strut +\def\@IEEEeqnarraymasterstrutsave{\relax% +\expandafter\skip0=\@IEEEeqnarrayTHEmasterstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEmasterstrutdepth\relax% +% remove stretchability +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% save values +\edef\@IEEEeqnarrayTHEmasterstrutHSAVE{\the\dimen0}% +\edef\@IEEEeqnarrayTHEmasterstrutDSAVE{\the\dimen2}} + +% restores the strut height and depth of the master strut +\def\@IEEEeqnarraymasterstrutrestore{\relax% +\expandafter\skip0=\@IEEEeqnarrayTHEmasterstrutHSAVE\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEmasterstrutDSAVE\relax% +% remove stretchability +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% restore values +\edef\@IEEEeqnarrayTHEmasterstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEmasterstrutdepth{\the\dimen2}} + + +% globally restores the strut height and depth to the +% master values and sets the master strut flag to true +\def\@IEEEeqnarraystrutreset{\relax% +\expandafter\skip0=\@IEEEeqnarrayTHEmasterstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEmasterstrutdepth\relax% +% remove stretchability +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% restore values +\xdef\@IEEEeqnarrayTHEstrutheight{\the\dimen0}% +\xdef\@IEEEeqnarrayTHEstrutdepth{\the\dimen2}% +\global\@IEEEeqnarrayusemasterstruttrue} + + +% if the master strut is not to be used, make the current +% values of \@IEEEeqnarrayTHEstrutheight, \@IEEEeqnarrayTHEstrutdepth +% and the use master strut flag, global +% this allows user strut commands issued in the last column to be carried +% into the isolation/strut column +\def\@IEEEeqnarrayglobalizestrutstatus{\relax% +\if@IEEEeqnarrayusemasterstrut\else% 
+\xdef\@IEEEeqnarrayTHEstrutheight{\@IEEEeqnarrayTHEstrutheight}% +\xdef\@IEEEeqnarrayTHEstrutdepth{\@IEEEeqnarrayTHEstrutdepth}% +\global\@IEEEeqnarrayusemasterstrutfalse% +\fi} + + + +% usage: \IEEEeqnarraystrutsize{height}{depth}[font size commands] +% If called outside the lines of an IEEEeqnarray, sets the height +% and depth of both the master and local struts. If called inside +% an IEEEeqnarray line, sets the height and depth of the local strut +% only and sets the flag to indicate the use of the local strut +% values. If the height or depth is left blank, 0.7\normalbaselineskip +% and 0.3\normalbaselineskip will be used, respectively. +% The optional argument can be used to evaluate the lengths under +% a different font size and styles. If none is specified, the current +% font is used. +% uses scratch registers \skip0, \skip2, \skip3, \dimen0, \dimen2 +\def\IEEEeqnarraystrutsize#1#2{\relax\@ifnextchar[{\@IEEEeqnarraystrutsize{#1}{#2}}{\@IEEEeqnarraystrutsize{#1}{#2}[\relax]}} +\def\@IEEEeqnarraystrutsize#1#2[#3]{\def\@IEEEeqnarraystrutsizeARG{#1}% +\ifx\@IEEEeqnarraystrutsizeARG\@empty% +{\setbox0=\hbox{#3\relax\global\skip3=0.7\normalbaselineskip}}% +\skip0=\skip3\relax% +\else% arg one present +{\setbox0=\hbox{#3\relax\global\skip3=#1\relax}}% +\skip0=\skip3\relax% +\fi% if null arg +\def\@IEEEeqnarraystrutsizeARG{#2}% +\ifx\@IEEEeqnarraystrutsizeARG\@empty% +{\setbox0=\hbox{#3\relax\global\skip3=0.3\normalbaselineskip}}% +\skip2=\skip3\relax% +\else% arg two present +{\setbox0=\hbox{#3\relax\global\skip3=#2\relax}}% +\skip2=\skip3\relax% +\fi% if null arg +% remove stretchability, just to be safe +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% dimen0 = height, dimen2 = depth +\if@IEEEeqnarrayISinner% inner does not touch master strut size +\edef\@IEEEeqnarrayTHEstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEstrutdepth{\the\dimen2}% +\@IEEEeqnarrayusemasterstrutfalse% do not use master +\else% outer, have to set master strut too +\edef\@IEEEeqnarrayTHEmasterstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEmasterstrutdepth{\the\dimen2}% +\edef\@IEEEeqnarrayTHEstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEstrutdepth{\the\dimen2}% +\@IEEEeqnarrayusemasterstruttrue% use master strut +\fi} + + +% usage: \IEEEeqnarraystrutsizeadd{added height}{added depth}[font size commands] +% If called outside the lines of an IEEEeqnarray, adds the given height +% and depth to both the master and local struts. +% If called inside an IEEEeqnarray line, adds the given height and depth +% to the local strut only and sets the flag to indicate the use +% of the local strut values. +% In both cases, if a height or depth is left blank, 0pt is used instead. +% The optional argument can be used to evaluate the lengths under +% a different font size and styles. If none is specified, the current +% font is used. 
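+% For example (a usage sketch): \IEEEeqnarraystrutsizeadd{2pt}{}[\relax]
+% grows the strut height by 2pt and, via the blank second argument,
+% leaves the depth unchanged.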
+% uses scratch registers \skip0, \skip2, \skip3, \dimen0, \dimen2 +\def\IEEEeqnarraystrutsizeadd#1#2{\relax\@ifnextchar[{\@IEEEeqnarraystrutsizeadd{#1}{#2}}{\@IEEEeqnarraystrutsizeadd{#1}{#2}[\relax]}} +\def\@IEEEeqnarraystrutsizeadd#1#2[#3]{\def\@IEEEeqnarraystrutsizearg{#1}% +\ifx\@IEEEeqnarraystrutsizearg\@empty% +\skip0=0pt\relax% +\else% arg one present +{\setbox0=\hbox{#3\relax\global\skip3=#1}}% +\skip0=\skip3\relax% +\fi% if null arg +\def\@IEEEeqnarraystrutsizearg{#2}% +\ifx\@IEEEeqnarraystrutsizearg\@empty% +\skip2=0pt\relax% +\else% arg two present +{\setbox0=\hbox{#3\relax\global\skip3=#2}}% +\skip2=\skip3\relax% +\fi% if null arg +% remove stretchability, just to be safe +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% dimen0 = height, dimen2 = depth +\if@IEEEeqnarrayISinner% inner does not touch master strut size +% get local strut size +\expandafter\skip0=\@IEEEeqnarrayTHEstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEstrutdepth\relax% +% add it to the user supplied values +\advance\dimen0 by \skip0\relax% +\advance\dimen2 by \skip2\relax% +% update the local strut size +\edef\@IEEEeqnarrayTHEstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEstrutdepth{\the\dimen2}% +\@IEEEeqnarrayusemasterstrutfalse% do not use master +\else% outer, have to set master strut too +% get master strut size +\expandafter\skip0=\@IEEEeqnarrayTHEmasterstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEmasterstrutdepth\relax% +% add it to the user supplied values +\advance\dimen0 by \skip0\relax% +\advance\dimen2 by \skip2\relax% +% update the local and master strut sizes +\edef\@IEEEeqnarrayTHEmasterstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEmasterstrutdepth{\the\dimen2}% +\edef\@IEEEeqnarrayTHEstrutheight{\the\dimen0}% +\edef\@IEEEeqnarrayTHEstrutdepth{\the\dimen2}% +\@IEEEeqnarrayusemasterstruttrue% use master strut +\fi} + + +% allow user a way to see the struts +\newif\ifIEEEvisiblestruts +\IEEEvisiblestrutsfalse + +% inserts an invisible strut using the master or local strut values +% uses scratch registers \skip0, \skip2, \dimen0, \dimen2 +\def\@IEEEeqnarrayinsertstrut{\relax% +\if@IEEEeqnarrayusemasterstrut +% get master strut size +\expandafter\skip0=\@IEEEeqnarrayTHEmasterstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEmasterstrutdepth\relax% +\else% +% get local strut size +\expandafter\skip0=\@IEEEeqnarrayTHEstrutheight\relax% +\expandafter\skip2=\@IEEEeqnarrayTHEstrutdepth\relax% +\fi% +% remove stretchability, probably not needed +\dimen0\skip0\relax% +\dimen2\skip2\relax% +% dimen0 = height, dimen2 = depth +% allow user to see struts if desired +\ifIEEEvisiblestruts% +\vrule width0.2pt height\dimen0 depth\dimen2\relax% +\else% +\vrule width0pt height\dimen0 depth\dimen2\relax\fi} + + +% creates an invisible strut, useable even outside \IEEEeqnarray +% if \IEEEvisiblestrutstrue, the strut will be visible and 0.2pt wide. 
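+% (for diagnosing line spacing, issuing \IEEEvisiblestrutstrue renders the
+% struts as thin rules; \IEEEvisiblestrutsfalse restores invisible struts)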
+% usage: \IEEEstrut[height][depth][font size commands]
+% default is \IEEEstrut[0.7\normalbaselineskip][0.3\normalbaselineskip][\relax]
+% blank arguments inherit the default values
+% uses \dimen0, \dimen2, \skip0, \skip2
+\def\IEEEstrut{\relax\@ifnextchar[{\@IEEEstrut}{\@IEEEstrut[0.7\normalbaselineskip]}}
+\def\@IEEEstrut[#1]{\relax\@ifnextchar[{\@@IEEEstrut[#1]}{\@@IEEEstrut[#1][0.3\normalbaselineskip]}}
+\def\@@IEEEstrut[#1][#2]{\relax\@ifnextchar[{\@@@IEEEstrut[#1][#2]}{\@@@IEEEstrut[#1][#2][\relax]}}
+\def\@@@IEEEstrut[#1][#2][#3]{\mbox{#3\relax%
+\def\@IEEEstrutARG{#1}%
+\ifx\@IEEEstrutARG\@empty%
+\skip0=0.7\normalbaselineskip\relax%
+\else%
+\skip0=#1\relax%
+\fi%
+\def\@IEEEstrutARG{#2}%
+\ifx\@IEEEstrutARG\@empty%
+\skip2=0.3\normalbaselineskip\relax%
+\else%
+\skip2=#2\relax%
+\fi%
+% remove stretchability, probably not needed
+\dimen0\skip0\relax%
+\dimen2\skip2\relax%
+\ifIEEEvisiblestruts%
+\vrule width0.2pt height\dimen0 depth\dimen2\relax%
+\else%
+\vrule width0.0pt height\dimen0 depth\dimen2\relax\fi}}
+
+
+% enables strut mode by setting a default strut size and then zeroing the
+% \baselineskip, \lineskip, \lineskiplimit and \jot
+\def\IEEEeqnarraystrutmode{\IEEEeqnarraystrutsize{0.7\normalbaselineskip}{0.3\normalbaselineskip}[\relax]%
+\baselineskip=0pt\lineskip=0pt\lineskiplimit=0pt\jot=0pt}
+
+
+
+\def\IEEEeqnarray{\@IEEEeqnarraystarformfalse\@IEEEeqnarray}
+\def\endIEEEeqnarray{\end@IEEEeqnarray}
+
+\@namedef{IEEEeqnarray*}{\@IEEEeqnarraystarformtrue\@IEEEeqnarray}
+\@namedef{endIEEEeqnarray*}{\end@IEEEeqnarray}
+
+
+% \IEEEeqnarray is an enhanced \eqnarray.
+% The star form defaults to not putting equation numbers at the end of each row.
+% usage: \IEEEeqnarray[decl]{cols}
+\def\@IEEEeqnarray{\relax\@ifnextchar[{\@@IEEEeqnarray}{\@@IEEEeqnarray[\relax]}}
+\def\@@IEEEeqnarray[#1]#2{%
+  % default to showing the equation number or not based on whether or not
+  % the star form was invoked
+  \if@IEEEeqnarraystarform\global\@eqnswfalse
+  \else% not the star form
+  \global\@eqnswtrue
+  \fi% if star form
+  \@IEEEissubequationfalse% default to no subequations
+  \@IEEElastlinewassubequationfalse% assume last line is not a sub equation
+  \@IEEEeqnarrayISinnerfalse% not yet within the lines of the halign
+  \@IEEEeqnarraystrutsize{0pt}{0pt}[\relax]% turn off struts by default
+  \@IEEEeqnarrayusemasterstruttrue% use master strut till user asks otherwise
+  \IEEEvisiblestrutsfalse% diagnostic mode defaults to off
+  % no extra space unless the user specifically requests it
+  \lineskip=0pt\relax
+  \lineskiplimit=0pt\relax
+  \baselineskip=\normalbaselineskip\relax%
+  \jot=\IEEEnormaljot\relax%
+  \mathsurround\z@\relax% no extra spacing around math
+  \@advanceIEEEeqncolcnttrue% advance the col counter for each col the user uses,
+  % used in \IEEEeqnarraymulticol and in the preamble build
+  \stepcounter{equation}% advance equation counter before first line
+  \setcounter{IEEEsubequation}{0}% no subequation yet
+  \def\@currentlabel{\p@equation\theequation}% redefine the ref label
+  \IEEEeqnarraydecl\relax% allow a way for the user to make global overrides
+  #1\relax% allow user to override defaults
+  \let\\\@IEEEeqnarraycr% replace newline with one that can put in eqn. numbers
+  \global\@IEEEeqncolcnt\z@% col. count = 0 for first line
+  \@IEEEbuildpreamble #2\end\relax% build the preamble and put it into \@IEEEtrantmptoksA
+  % put in the column for the equation number
+  \ifnum\@IEEEeqnnumcols>0\relax\@IEEEappendtoksA{&}\fi% col separator for those after the first
+  \toks0={##}%
+  % advance the \@IEEEeqncolcnt for the isolation col, this helps with error checking
+  \@IEEEappendtoksA{\global\advance\@IEEEeqncolcnt by 1\relax}%
+  % add the isolation column
+  \@IEEEappendtoksA{\tabskip\z@skip\bgroup\the\toks0\egroup}%
+  % advance the \@IEEEeqncolcnt for the equation number col, this helps with error checking
+  \@IEEEappendtoksA{&\global\advance\@IEEEeqncolcnt by 1\relax}%
+  % add the equation number col to the preamble
+  \@IEEEappendtoksA{\tabskip\z@skip\hb@xt@\z@\bgroup\hss\the\toks0\egroup}%
+  % note \@IEEEeqnnumcols does not count the equation col or isolation col
+  % set the starting tabskip glue as determined by the preamble build
+  \tabskip=\@IEEEBPstartglue\relax
+  % begin the display alignment
+  \@IEEEeqnarrayISinnertrue% commands are now within the lines
+  $$\everycr{}\halign to\displaywidth\bgroup
+  % "expand" the preamble
+  \span\the\@IEEEtrantmptoksA\cr}
+
+% enter isolation/strut column (or the next column if the user did not use
+% every column), record the strut status, complete the columns, do the strut if needed,
+% restore counters to correct values and exit
+\def\end@IEEEeqnarray{\@IEEEeqnarrayglobalizestrutstatus&\@@IEEEeqnarraycr\egroup%
+\if@IEEElastlinewassubequation\global\advance\c@IEEEsubequation\m@ne\fi%
+\global\advance\c@equation\m@ne%
+$$\@ignoretrue}
+
+% need a way to remember if last line is a subequation
+\newif\if@IEEElastlinewassubequation%
+\@IEEElastlinewassubequationfalse
+
+% IEEEeqnarray uses a modified \\ instead of the plain \cr to
+% end rows. This allows for things like \\*[vskip amount]
+% These "cr" macros are modified versions of those for LaTeX2e's eqnarray
+% the {\ifnum0=`} braces must be kept away from the last column to avoid
+% altering spacing of its math, so we use & to advance to the next column
+% as there is an isolation/strut column after the user's columns
+\def\@IEEEeqnarraycr{\@IEEEeqnarrayglobalizestrutstatus&% save strut status and advance to next column
+  {\ifnum0=`}\fi
+  \@ifstar{%
+    \global\@eqpen\@M\@IEEEeqnarrayYCR
+  }{%
+    \global\@eqpen\interdisplaylinepenalty \@IEEEeqnarrayYCR
+  }%
+}
+
+\def\@IEEEeqnarrayYCR{\@testopt\@IEEEeqnarrayXCR\z@skip}
+
+\def\@IEEEeqnarrayXCR[#1]{%
+  \ifnum0=`{\fi}%
+  \@@IEEEeqnarraycr
+  \noalign{\penalty\@eqpen\vskip\jot\vskip #1\relax}}%
+
+\def\@@IEEEeqnarraycr{\@IEEEtrantmptoksA={}% clear token register
+  \advance\@IEEEeqncolcnt by -1\relax% adjust col count because of the isolation column
+  \ifnum\@IEEEeqncolcnt>\@IEEEeqnnumcols\relax
+  \@IEEEclspkgerror{Too many columns within the IEEEeqnarray\MessageBreak
+  environment}%
+  {Use fewer \string &'s or put more columns in the IEEEeqnarray column\MessageBreak
+  specifications.}\relax%
+  \else
+  \loop% add cols if the user did not use them all
+  \ifnum\@IEEEeqncolcnt<\@IEEEeqnnumcols\relax
+  \@IEEEappendtoksA{&}%
+  \advance\@IEEEeqncolcnt by 1\relax% update the col count
+  \repeat
+  % this number of &'s will take us to the isolation column
+  \fi
+  % execute the &'s
+  \the\@IEEEtrantmptoksA%
+  % handle the strut/isolation column
+  \@IEEEeqnarrayinsertstrut% do the strut if needed
+  \@IEEEeqnarraystrutreset% reset the strut system for next line or IEEEeqnarray
+  &% and enter the equation number column
+  % if this line needs an equation number, display it and advance the
+  % (sub)equation counters, record what type this line was
+  \if@eqnsw%
+  \if@IEEEissubequation\theIEEEsubequationdis\addtocounter{equation}{1}\stepcounter{IEEEsubequation}%
+  \global\@IEEElastlinewassubequationtrue%
+  \else% display a standard equation number, initialize the IEEEsubequation counter
+  \theequationdis\stepcounter{equation}\setcounter{IEEEsubequation}{0}%
+  \global\@IEEElastlinewassubequationfalse\fi%
+  \fi%
+  % reset the eqnsw flag to indicate default preference of the display of equation numbers
+  \if@IEEEeqnarraystarform\global\@eqnswfalse\else\global\@eqnswtrue\fi
+  \global\@IEEEissubequationfalse% reset the subequation flag
+  % reset the number of columns the user actually used
+  \global\@IEEEeqncolcnt\z@\relax
+  % the real end of the line
+  \cr}
+
+
+
+
+
+% \IEEEeqnarraybox is like \IEEEeqnarray except the box form puts everything
+% inside a vtop, vbox, or vcenter box depending on the letter in the second
+% optional argument (t,b,c). Vbox is the default. Unlike \IEEEeqnarray,
+% equation numbers are not displayed and \IEEEeqnarraybox can be nested.
+% \IEEEeqnarrayboxm is for math mode (like \array) and does not put the vbox
+% within an hbox.
+% \IEEEeqnarrayboxt is for text mode (like \tabular) and puts the vbox within
+% a \hbox{$ $} construct.
+% \IEEEeqnarraybox will auto detect whether to use \IEEEeqnarrayboxm or
+% \IEEEeqnarrayboxt depending on the math mode.
+% The third optional argument specifies the width this box is to be set to -
+% natural width is the default.
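+% e.g. (illustrative): \begin{IEEEeqnarraybox}[\relax][c][1.5in]{rCl}
+% a &=& b \\ &=& c \end{IEEEeqnarraybox} sets a vcentered box 1.5in wide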
+% The * forms do not add \jot line spacing +% usage: \IEEEeqnarraybox[decl][pos][width]{cols} +\def\IEEEeqnarrayboxm{\@IEEEeqnarraystarformfalse\@IEEEeqnarrayboxHBOXSWfalse\@IEEEeqnarraybox} +\def\endIEEEeqnarrayboxm{\end@IEEEeqnarraybox} +\@namedef{IEEEeqnarrayboxm*}{\@IEEEeqnarraystarformtrue\@IEEEeqnarrayboxHBOXSWfalse\@IEEEeqnarraybox} +\@namedef{endIEEEeqnarrayboxm*}{\end@IEEEeqnarraybox} + +\def\IEEEeqnarrayboxt{\@IEEEeqnarraystarformfalse\@IEEEeqnarrayboxHBOXSWtrue\@IEEEeqnarraybox} +\def\endIEEEeqnarrayboxt{\end@IEEEeqnarraybox} +\@namedef{IEEEeqnarrayboxt*}{\@IEEEeqnarraystarformtrue\@IEEEeqnarrayboxHBOXSWtrue\@IEEEeqnarraybox} +\@namedef{endIEEEeqnarrayboxt*}{\end@IEEEeqnarraybox} + +\def\IEEEeqnarraybox{\@IEEEeqnarraystarformfalse\ifmmode\@IEEEeqnarrayboxHBOXSWfalse\else\@IEEEeqnarrayboxHBOXSWtrue\fi% +\@IEEEeqnarraybox} +\def\endIEEEeqnarraybox{\end@IEEEeqnarraybox} + +\@namedef{IEEEeqnarraybox*}{\@IEEEeqnarraystarformtrue\ifmmode\@IEEEeqnarrayboxHBOXSWfalse\else\@IEEEeqnarrayboxHBOXSWtrue\fi% +\@IEEEeqnarraybox} +\@namedef{endIEEEeqnarraybox*}{\end@IEEEeqnarraybox} + +% flag to indicate if the \IEEEeqnarraybox needs to put things into an hbox{$ $} +% for \vcenter in non-math mode +\newif\if@IEEEeqnarrayboxHBOXSW% +\@IEEEeqnarrayboxHBOXSWfalse + +\def\@IEEEeqnarraybox{\relax\@ifnextchar[{\@@IEEEeqnarraybox}{\@@IEEEeqnarraybox[\relax]}} +\def\@@IEEEeqnarraybox[#1]{\relax\@ifnextchar[{\@@@IEEEeqnarraybox[#1]}{\@@@IEEEeqnarraybox[#1][b]}} +\def\@@@IEEEeqnarraybox[#1][#2]{\relax\@ifnextchar[{\@@@@IEEEeqnarraybox[#1][#2]}{\@@@@IEEEeqnarraybox[#1][#2][\relax]}} + +% #1 = decl; #2 = t,b,c; #3 = width, #4 = col specs +\def\@@@@IEEEeqnarraybox[#1][#2][#3]#4{\@IEEEeqnarrayISinnerfalse % not yet within the lines of the halign + \@IEEEeqnarraymasterstrutsave% save current master strut values + \@IEEEeqnarraystrutsize{0pt}{0pt}[\relax]% turn off struts by default + \@IEEEeqnarrayusemasterstruttrue% use master strut till user asks otherwise + \IEEEvisiblestrutsfalse% diagnostic mode defaults to off + % no extra space unless the user specifically requests it + \lineskip=0pt\relax% + \lineskiplimit=0pt\relax% + \baselineskip=\normalbaselineskip\relax% + \jot=\IEEEnormaljot\relax% + \mathsurround\z@\relax% no extra spacing around math + % the default end glues are zero for an \IEEEeqnarraybox + \edef\@IEEEeqnarraycolSEPdefaultstart{\@IEEEeqnarraycolSEPzero}% default start glue + \edef\@IEEEeqnarraycolSEPdefaultend{\@IEEEeqnarraycolSEPzero}% default end glue + \edef\@IEEEeqnarraycolSEPdefaultmid{\@IEEEeqnarraycolSEPzero}% default inter-column glue + \@advanceIEEEeqncolcntfalse% do not advance the col counter for each col the user uses, + % used in \IEEEeqnarraymulticol and in the preamble build + \IEEEeqnarrayboxdecl\relax% allow a way for the user to make global overrides + #1\relax% allow user to override defaults + \let\\\@IEEEeqnarrayboxcr% replace newline with one that allows optional spacing + \@IEEEbuildpreamble #4\end\relax% build the preamble and put it into \@IEEEtrantmptoksA + % add an isolation column to the preamble to stop \\'s {} from getting into the last col + \ifnum\@IEEEeqnnumcols>0\relax\@IEEEappendtoksA{&}\fi% col separator for those after the first + \toks0={##}% + % add the isolation column to the preamble + \@IEEEappendtoksA{\tabskip\z@skip\bgroup\the\toks0\egroup}% + % set the starting tabskip glue as determined by the preamble build + \tabskip=\@IEEEBPstartglue\relax + % begin the alignment + \everycr{}% + % use only the very first token to determine the positioning + 
% this stops some problems when the user uses more than one letter,
+  % but is probably not worth the effort
+  % \noindent is used as a delimiter
+  \def\@IEEEgrabfirstoken##1##2\noindent{\let\@IEEEgrabbedfirstoken=##1}%
+  \@IEEEgrabfirstoken#2\relax\relax\noindent
+  % \@IEEEgrabbedfirstoken has the first token, the rest are discarded
+  % if we need to put things into an hbox and go into math mode, do so now
+  \if@IEEEeqnarrayboxHBOXSW \leavevmode \hbox \bgroup $\fi%
+  % use the appropriate vbox type
+  \if\@IEEEgrabbedfirstoken t\relax\vtop\else\if\@IEEEgrabbedfirstoken c\relax%
+  \vcenter\else\vbox\fi\fi\bgroup%
+  \@IEEEeqnarrayISinnertrue% commands are now within the lines
+  \ifx#3\relax\halign\else\halign to #3\relax\fi%
+  \bgroup
+  % "expand" the preamble
+  \span\the\@IEEEtrantmptoksA\cr}
+
+% carry strut status and enter the isolation/strut column,
+% exit from math mode if needed, and exit
+\def\end@IEEEeqnarraybox{\@IEEEeqnarrayglobalizestrutstatus% carry strut status
+&% enter isolation/strut column
+\@IEEEeqnarrayinsertstrut% do strut if needed
+\@IEEEeqnarraymasterstrutrestore% restore the previous master strut values
+% reset the strut system for next IEEEeqnarray
+% (sets local strut values back to previous master strut values)
+\@IEEEeqnarraystrutreset%
+% ensure last line, exit from halign, close vbox
+\crcr\egroup\egroup%
+% exit from math mode and close hbox if needed
+\if@IEEEeqnarrayboxHBOXSW $\egroup\fi}
+
+
+
+% IEEEeqnarraybox uses a modified \\ instead of the plain \cr to
+% end rows. This allows for things like \\[vskip amount]
+% These "cr" macros are modified versions of those for LaTeX2e's eqnarray
+% For IEEEeqnarraybox, \\* is the same as \\
+% the {\ifnum0=`} braces must be kept away from the last column to avoid
+% altering spacing of its math, so we use & to advance to the isolation/strut column
+% carry strut status into isolation/strut column
+\def\@IEEEeqnarrayboxcr{\@IEEEeqnarrayglobalizestrutstatus% carry strut status
+&% enter isolation/strut column
+\@IEEEeqnarrayinsertstrut% do strut if needed
+% reset the strut system for next line or IEEEeqnarray
+\@IEEEeqnarraystrutreset%
+{\ifnum0=`}\fi%
+\@ifstar{\@IEEEeqnarrayboxYCR}{\@IEEEeqnarrayboxYCR}}
+
+% test and setup the optional argument to \\[]
+\def\@IEEEeqnarrayboxYCR{\@testopt\@IEEEeqnarrayboxXCR\z@skip}
+
+% IEEEeqnarraybox does not automatically increase line spacing by \jot
+\def\@IEEEeqnarrayboxXCR[#1]{\ifnum0=`{\fi}%
+\cr\noalign{\if@IEEEeqnarraystarform\else\vskip\jot\fi\vskip#1\relax}}
+
+
+
+% starts the halign preamble build
+\def\@IEEEbuildpreamble{\@IEEEtrantmptoksA={}% clear token register
+\let\@IEEEBPcurtype=u%current column type is not yet known
+\let\@IEEEBPprevtype=s%the previous column type was the start
+\let\@IEEEBPnexttype=u%next column type is not yet known
+% ensure these are valid
+\def\@IEEEBPcurglue={0pt plus 0pt minus 0pt}%
+\def\@IEEEBPcurcolname{@IEEEdefault}% name of current column definition
+% currently acquired numerically referenced glue
+% use a name that is easier to remember
+\let\@IEEEBPcurnum=\@IEEEtrantmpcountA%
+\@IEEEBPcurnum=0%
+% tracks number of columns in the preamble
+\@IEEEeqnnumcols=0%
+% record the default end glues
+\edef\@IEEEBPstartglue{\@IEEEeqnarraycolSEPdefaultstart}%
+\edef\@IEEEBPendglue{\@IEEEeqnarraycolSEPdefaultend}%
+% now parse the user's column specifications
+\@@IEEEbuildpreamble}
+
+
+% parses and builds the halign preamble
+\def\@@IEEEbuildpreamble#1#2{\let\@@nextIEEEbuildpreamble=\@@IEEEbuildpreamble%
+% use only the very first token to check the end
+% \noindent is used as a delimiter as \end can be present here
+\def\@IEEEgrabfirstoken##1##2\noindent{\let\@IEEEgrabbedfirstoken=##1}%
+\@IEEEgrabfirstoken#1\relax\relax\noindent
+\ifx\@IEEEgrabbedfirstoken\end\let\@@nextIEEEbuildpreamble=\@@IEEEfinishpreamble\else%
+% identify current and next token type
+\@IEEEgetcoltype{#1}{\@IEEEBPcurtype}{1}% current, error on invalid
+\@IEEEgetcoltype{#2}{\@IEEEBPnexttype}{0}% next, no error on invalid next
+% if curtype is a glue, get the glue def
+\if\@IEEEBPcurtype g\@IEEEgetcurglue{#1}{\@IEEEBPcurglue}\fi%
+% if curtype is a column, get the column def and set the current column name
+\if\@IEEEBPcurtype c\@IEEEgetcurcol{#1}\fi%
+% if curtype is a numeral, acquire the user defined glue
+\if\@IEEEBPcurtype n\@IEEEprocessNcol{#1}\fi%
+% process the acquired glue
+\if\@IEEEBPcurtype g\@IEEEprocessGcol\fi%
+% process the acquired col
+\if\@IEEEBPcurtype c\@IEEEprocessCcol\fi%
+% ready prevtype for next col spec.
+\let\@IEEEBPprevtype=\@IEEEBPcurtype%
+% be sure and put back the future token(s) as a group
+\fi\@@nextIEEEbuildpreamble{#2}}
+
+
+% executed just after preamble build is completed
+% warn about zero cols, and if prev type = u, put in end tabskip glue
+\def\@@IEEEfinishpreamble#1{\ifnum\@IEEEeqnnumcols<1\relax
+\@IEEEclspkgerror{No column specifiers declared for IEEEeqnarray}%
+{At least one column type must be declared for each IEEEeqnarray.}%
+\fi%num cols less than 1
+%if last type undefined, set default end tabskip glue
+\if\@IEEEBPprevtype u\@IEEEappendtoksA{\tabskip=\@IEEEBPendglue}\fi}
+
+
+% Identify and return the column specifier's type code
+\def\@IEEEgetcoltype#1#2#3{%
+% use only the very first token to determine the type
+% \noindent is used as a delimiter as \end can be present here
+\def\@IEEEgrabfirstoken##1##2\noindent{\let\@IEEEgrabbedfirstoken=##1}%
+\@IEEEgrabfirstoken#1\relax\relax\noindent
+% \@IEEEgrabbedfirstoken has the first token, the rest are discarded
+% n = number
+% g = glue (any other char in category 12)
+% c = letter
+% e = \end
+% u = undefined
+% third argument: 0 = no error message, 1 = error on invalid char
+\let#2=u\relax% assume invalid until know otherwise
+\ifx\@IEEEgrabbedfirstoken\end\let#2=e\else
+\ifcat\@IEEEgrabbedfirstoken\relax\else% screen out control sequences
+\if0\@IEEEgrabbedfirstoken\let#2=n\else
+\if1\@IEEEgrabbedfirstoken\let#2=n\else
+\if2\@IEEEgrabbedfirstoken\let#2=n\else
+\if3\@IEEEgrabbedfirstoken\let#2=n\else
+\if4\@IEEEgrabbedfirstoken\let#2=n\else
+\if5\@IEEEgrabbedfirstoken\let#2=n\else
+\if6\@IEEEgrabbedfirstoken\let#2=n\else
+\if7\@IEEEgrabbedfirstoken\let#2=n\else
+\if8\@IEEEgrabbedfirstoken\let#2=n\else
+\if9\@IEEEgrabbedfirstoken\let#2=n\else
+\ifcat,\@IEEEgrabbedfirstoken\let#2=g\relax
+\else\ifcat a\@IEEEgrabbedfirstoken\let#2=c\relax\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
+\if#2u\relax
+\if0\noexpand#3\relax\else\@IEEEclspkgerror{Invalid character in column specifications}%
+{Only letters, numerals and certain other symbols are allowed \MessageBreak
+as IEEEeqnarray column specifiers.}\fi\fi}
+
+
+% identify the current letter referenced column
+% if invalid, use a default column
+\def\@IEEEgetcurcol#1{\expandafter\ifx\csname @IEEEeqnarraycolDEF#1\endcsname\@IEEEeqnarraycolisdefined%
+\def\@IEEEBPcurcolname{#1}\else% invalid column name
+\@IEEEclspkgerror{Invalid column type "#1" in column specifications.\MessageBreak
+Using a default centering column instead}%
+{You must define IEEEeqnarray column types before use.}%
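+% recover by falling back to the predefined @IEEEdefault (centering) column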
+\def\@IEEEBPcurcolname{@IEEEdefault}\fi} + + +% identify and return the predefined (punctuation) glue value +\def\@IEEEgetcurglue#1#2{% +% ! = \! (neg small) -0.16667em (-3/18 em) +% , = \, (small) 0.16667em ( 3/18 em) +% : = \: (med) 0.22222em ( 4/18 em) +% ; = \; (large) 0.27778em ( 5/18 em) +% ' = \quad 1em +% " = \qquad 2em +% . = 0.5\arraycolsep +% / = \arraycolsep +% ? = 2\arraycolsep +% * = 1fil +% + = \@IEEEeqnarraycolSEPcenter +% - = \@IEEEeqnarraycolSEPzero +% Note that all em values are referenced to the math font (textfont2) fontdimen6 +% value for 1em. +% +% use only the very first token to determine the type +% this prevents errant tokens from getting in the main text +% \noindent is used as a delimiter here +\def\@IEEEgrabfirstoken##1##2\noindent{\let\@IEEEgrabbedfirstoken=##1}% +\@IEEEgrabfirstoken#1\relax\relax\noindent +% get the math font 1em value +% LaTeX2e's NFSS2 does not preload the fonts, but \IEEEeqnarray needs +% to gain access to the math (\textfont2) font's spacing parameters. +% So we create a bogus box here that uses the math font to ensure +% that \textfont2 is loaded and ready. If this is not done, +% the \textfont2 stuff here may not work. +% Thanks to Bernd Raichle for his 1997 post on this topic. +{\setbox0=\hbox{$\displaystyle\relax$}}% +% fontdimen6 has the width of 1em (a quad). +\@IEEEtrantmpdimenA=\fontdimen6\textfont2\relax% +% identify the glue value based on the first token +% we discard anything after the first +\if!\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=-0.16667\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if,\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=0.16667\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if:\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=0.22222\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if;\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=0.27778\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if'\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=1\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if"\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=2\@IEEEtrantmpdimenA\edef#2{\the\@IEEEtrantmpdimenA}\else +\if.\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=0.5\arraycolsep\edef#2{\the\@IEEEtrantmpdimenA}\else +\if/\@IEEEgrabbedfirstoken\edef#2{\the\arraycolsep}\else +\if?\@IEEEgrabbedfirstoken\@IEEEtrantmpdimenA=2\arraycolsep\edef#2{\the\@IEEEtrantmpdimenA}\else +\if *\@IEEEgrabbedfirstoken\edef#2{0pt plus 1fil minus 0pt}\else +\if+\@IEEEgrabbedfirstoken\edef#2{\@IEEEeqnarraycolSEPcenter}\else +\if-\@IEEEgrabbedfirstoken\edef#2{\@IEEEeqnarraycolSEPzero}\else +\edef#2{\@IEEEeqnarraycolSEPzero}% +\@IEEEclspkgerror{Invalid predefined inter-column glue type "#1" in\MessageBreak +column specifications. Using a default value of\MessageBreak +0pt instead}% +{Only !,:;'"./?*+ and - are valid predefined glue types in the\MessageBreak +IEEEeqnarray column specifications.}\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi} + + + +% process a numerical digit from the column specification +% and look up the corresponding user defined glue value +% can transform current type from n to g or a as the user defined glue is acquired +\def\@IEEEprocessNcol#1{\if\@IEEEBPprevtype g% +\@IEEEclspkgerror{Back-to-back inter-column glue specifiers in column\MessageBreak +specifications. 
Ignoring consecutive glue specifiers\MessageBreak +after the first}% +{You cannot have two or more glue types next to each other\MessageBreak +in the IEEEeqnarray column specifications.}% +\let\@IEEEBPcurtype=a% abort this glue, future digits will be discarded +\@IEEEBPcurnum=0\relax% +\else% if we previously aborted a glue +\if\@IEEEBPprevtype a\@IEEEBPcurnum=0\let\@IEEEBPcurtype=a%maintain digit abortion +\else%acquire this number +% save the previous type before the numerical digits started +\if\@IEEEBPprevtype n\else\let\@IEEEBPprevsavedtype=\@IEEEBPprevtype\fi% +\multiply\@IEEEBPcurnum by 10\relax% +\advance\@IEEEBPcurnum by #1\relax% add in number, \relax is needed to stop TeX's number scan +\if\@IEEEBPnexttype n\else%close acquisition +\expandafter\ifx\csname @IEEEeqnarraycolSEPDEF\expandafter\romannumeral\number\@IEEEBPcurnum\endcsname\@IEEEeqnarraycolisdefined% +\edef\@IEEEBPcurglue{\csname @IEEEeqnarraycolSEP\expandafter\romannumeral\number\@IEEEBPcurnum\endcsname}% +\else%user glue not defined +\@IEEEclspkgerror{Invalid user defined inter-column glue type "\number\@IEEEBPcurnum" in\MessageBreak +column specifications. Using a default value of\MessageBreak +0pt instead}% +{You must define all IEEEeqnarray numerical inter-column glue types via\MessageBreak +\string\IEEEeqnarraydefcolsep \space before they are used in column specifications.}% +\edef\@IEEEBPcurglue{\@IEEEeqnarraycolSEPzero}% +\fi% glue defined or not +\let\@IEEEBPcurtype=g% change the type to reflect the acquired glue +\let\@IEEEBPprevtype=\@IEEEBPprevsavedtype% restore the prev type before this number glue +\@IEEEBPcurnum=0\relax%ready for next acquisition +\fi%close acquisition, get glue +\fi%discard or acquire number +\fi%prevtype glue or not +} + + +% process an acquired glue +% add any acquired column/glue pair to the preamble +\def\@IEEEprocessGcol{\if\@IEEEBPprevtype a\let\@IEEEBPcurtype=a%maintain previous glue abortions +\else +% if this is the start glue, save it, but do nothing else +% as this is not used in the preamble, but before +\if\@IEEEBPprevtype s\edef\@IEEEBPstartglue{\@IEEEBPcurglue}% +\else%not the start glue +\if\@IEEEBPprevtype g%ignore if back to back glues +\@IEEEclspkgerror{Back-to-back inter-column glue specifiers in column\MessageBreak +specifications. 
Ignoring consecutive glue specifiers\MessageBreak
+after the first}%
+{You cannot have two or more glue types next to each other\MessageBreak
+in the IEEEeqnarray column specifications.}%
+\let\@IEEEBPcurtype=a% abort this glue
+\else% not a back to back glue
+\if\@IEEEBPprevtype c\relax% if the previous type was a col, add column/glue pair to preamble
+\ifnum\@IEEEeqnnumcols>0\relax\@IEEEappendtoksA{&}\fi
+\toks0={##}%
+% make preamble advance col counter if this environment needs this
+\if@advanceIEEEeqncolcnt\@IEEEappendtoksA{\global\advance\@IEEEeqncolcnt by 1\relax}\fi
+% insert the column definition into the preamble, being careful not to expand
+% the column definition
+\@IEEEappendtoksA{\tabskip=\@IEEEBPcurglue}%
+\@IEEEappendNOEXPANDtoksA{\begingroup\csname @IEEEeqnarraycolPRE}%
+\@IEEEappendtoksA{\@IEEEBPcurcolname}%
+\@IEEEappendNOEXPANDtoksA{\endcsname}%
+\@IEEEappendtoksA{\the\toks0}%
+\@IEEEappendNOEXPANDtoksA{\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\csname @IEEEeqnarraycolPOST}%
+\@IEEEappendtoksA{\@IEEEBPcurcolname}%
+\@IEEEappendNOEXPANDtoksA{\endcsname\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\endgroup}%
+\advance\@IEEEeqnnumcols by 1\relax%one more column in the preamble
+\else% error: non-start glue with no pending column
+\@IEEEclspkgerror{Inter-column glue specifier without a prior column\MessageBreak
+type in the column specifications. Ignoring this glue\MessageBreak
+specifier}%
+{Except for the first and last positions, glue can be placed only\MessageBreak
+between column types.}%
+\let\@IEEEBPcurtype=a% abort this glue
+\fi% previous was a column
+\fi% back-to-back glues
+\fi% is start column glue
+\fi% prev type not a
+}
+
+
+% process an acquired letter referenced column and, if necessary, add it to the preamble
+\def\@IEEEprocessCcol{\if\@IEEEBPnexttype g\else
+\if\@IEEEBPnexttype n\else
+% we have a column followed by something other than a glue (or numeral glue)
+% so we must add this column to the preamble now
+\ifnum\@IEEEeqnnumcols>0\relax\@IEEEappendtoksA{&}\fi%col separator for those after the first
+\if\@IEEEBPnexttype e\@IEEEappendtoksA{\tabskip=\@IEEEBPendglue\relax}\else%put in end glue
+\@IEEEappendtoksA{\tabskip=\@IEEEeqnarraycolSEPdefaultmid\relax}\fi% or default mid glue
+\toks0={##}%
+% make preamble advance col counter if this environment needs this
+\if@advanceIEEEeqncolcnt\@IEEEappendtoksA{\global\advance\@IEEEeqncolcnt by 1\relax}\fi
+% insert the column definition into the preamble, being careful not to expand
+% the column definition
+\@IEEEappendNOEXPANDtoksA{\begingroup\csname @IEEEeqnarraycolPRE}%
+\@IEEEappendtoksA{\@IEEEBPcurcolname}%
+\@IEEEappendNOEXPANDtoksA{\endcsname}%
+\@IEEEappendtoksA{\the\toks0}%
+\@IEEEappendNOEXPANDtoksA{\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\csname @IEEEeqnarraycolPOST}%
+\@IEEEappendtoksA{\@IEEEBPcurcolname}%
+\@IEEEappendNOEXPANDtoksA{\endcsname\relax\relax\relax\relax\relax%
+\relax\relax\relax\relax\relax\endgroup}%
+\advance\@IEEEeqnnumcols by 1\relax%one more column in the preamble
+\fi%next type not numeral
+\fi%next type not glue
+}
+
+
+%%
+%% END OF IEEEeqnarray DEFINITIONS
+%%
+
+
+
+
+% set up the running headings; this is complex because of all the different
+% modes IEEEtran supports
+\if@twoside
+  \ifCLASSOPTIONtechnote
+    \def\ps@headings{%
+      \def\@oddhead{\hbox{}\scriptsize\leftmark \hfil \thepage}
+      \def\@evenhead{\scriptsize\thepage \hfil \leftmark\hbox{}}
+      \ifCLASSOPTIONdraftcls
+        \ifCLASSOPTIONdraftclsnofoot
+        
\def\@oddfoot{}\def\@evenfoot{}% + \else + \def\@oddfoot{\scriptsize\@date\hfil DRAFT} + \def\@evenfoot{\scriptsize DRAFT\hfil\@date} + \fi + \else + \def\@oddfoot{}\def\@evenfoot{} + \fi} + \else % not a technote + \def\ps@headings{% + \ifCLASSOPTIONconference + \def\@oddhead{} + \def\@evenhead{} + \else + \def\@oddhead{\hbox{}\scriptsize\rightmark \hfil \thepage} + \def\@evenhead{\scriptsize\thepage \hfil \leftmark\hbox{}} + \fi + \ifCLASSOPTIONdraftcls + \def\@oddhead{\hbox{}\scriptsize\rightmark \hfil \thepage} + \def\@evenhead{\scriptsize\thepage \hfil \leftmark\hbox{}} + \ifCLASSOPTIONdraftclsnofoot + \def\@oddfoot{}\def\@evenfoot{}% + \else + \def\@oddfoot{\scriptsize\@date\hfil DRAFT} + \def\@evenfoot{\scriptsize DRAFT\hfil\@date} + \fi + \else + \def\@oddfoot{}\def\@evenfoot{}% + \fi} + \fi +\else % single side +\def\ps@headings{% + \ifCLASSOPTIONconference + \def\@oddhead{} + \def\@evenhead{} + \else + \def\@oddhead{\hbox{}\scriptsize\leftmark \hfil \thepage} + \def\@evenhead{} + \fi + \ifCLASSOPTIONdraftcls + \def\@oddhead{\hbox{}\scriptsize\leftmark \hfil \thepage} + \def\@evenhead{} + \ifCLASSOPTIONdraftclsnofoot + \def\@oddfoot{} + \else + \def\@oddfoot{\scriptsize \@date \hfil DRAFT} + \fi + \else + \def\@oddfoot{} + \fi + \def\@evenfoot{}} +\fi + + +% title page style +\def\ps@IEEEtitlepagestyle{\def\@oddfoot{}\def\@evenfoot{}% +\ifCLASSOPTIONconference + \def\@oddhead{}% + \def\@evenhead{}% +\else + \def\@oddhead{\hbox{}\scriptsize\leftmark \hfil \thepage}% + \def\@evenhead{\scriptsize\thepage \hfil \leftmark\hbox{}}% +\fi +\ifCLASSOPTIONdraftcls + \def\@oddhead{\hbox{}\scriptsize\leftmark \hfil \thepage}% + \def\@evenhead{\scriptsize\thepage \hfil \leftmark\hbox{}}% + \ifCLASSOPTIONdraftclsnofoot\else + \def\@oddfoot{\scriptsize \@date\hfil DRAFT}% + \def\@evenfoot{\scriptsize DRAFT\hfil \@date}% + \fi +\else + % all non-draft mode footers + \if@IEEEusingpubid + % for title pages that are using a pubid + % do not repeat pubid if using peer review option + \ifCLASSOPTIONpeerreview + \else + \footskip 0pt% + \ifCLASSOPTIONcompsoc + \def\@oddfoot{\hss\normalfont\scriptsize\raisebox{-1.5\@IEEEnormalsizeunitybaselineskip}[0ex][0ex]{\@IEEEpubid}\hss}% + \def\@evenfoot{\hss\normalfont\scriptsize\raisebox{-1.5\@IEEEnormalsizeunitybaselineskip}[0ex][0ex]{\@IEEEpubid}\hss}% + \else + \def\@oddfoot{\hss\normalfont\footnotesize\raisebox{1.5ex}[1.5ex]{\@IEEEpubid}\hss}% + \def\@evenfoot{\hss\normalfont\footnotesize\raisebox{1.5ex}[1.5ex]{\@IEEEpubid}\hss}% + \fi + \fi + \fi +\fi} + + +% peer review cover page style +\def\ps@IEEEpeerreviewcoverpagestyle{% +\def\@oddhead{}\def\@evenhead{}% +\def\@oddfoot{}\def\@evenfoot{}% +\ifCLASSOPTIONdraftcls + \ifCLASSOPTIONdraftclsnofoot\else + \def\@oddfoot{\scriptsize \@date\hfil DRAFT}% + \def\@evenfoot{\scriptsize DRAFT\hfil \@date}% + \fi +\else + % non-draft mode footers + \if@IEEEusingpubid + \footskip 0pt% + \ifCLASSOPTIONcompsoc + \def\@oddfoot{\hss\normalfont\scriptsize\raisebox{-1.5\@IEEEnormalsizeunitybaselineskip}[0ex][0ex]{\@IEEEpubid}\hss}% + \def\@evenfoot{\hss\normalfont\scriptsize\raisebox{-1.5\@IEEEnormalsizeunitybaselineskip}[0ex][0ex]{\@IEEEpubid}\hss}% + \else + \def\@oddfoot{\hss\normalfont\footnotesize\raisebox{1.5ex}[1.5ex]{\@IEEEpubid}\hss}% + \def\@evenfoot{\hss\normalfont\footnotesize\raisebox{1.5ex}[1.5ex]{\@IEEEpubid}\hss}% + \fi + \fi +\fi} + + +% start with empty headings +\def\rightmark{}\def\leftmark{} + + +%% Defines the command for putting the header. \footernote{TEXT} is the same +%% as \markboth{TEXT}{TEXT}. 
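+%% e.g. (illustrative): \markboth{Journal Name}{Doe \MakeLowercase{\textit{et al.}}: Paper Title}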
+%% Note that all the text is forced into uppercase; if you have some text
+%% that needs to be in lower case, for instance et al., then either manually
+%% set \leftmark and \rightmark or use \MakeLowercase{et al.} within the
+%% arguments to \markboth.
+\def\markboth#1#2{\def\leftmark{\@IEEEcompsoconly{\sffamily}\MakeUppercase{#1}}%
+\def\rightmark{\@IEEEcompsoconly{\sffamily}\MakeUppercase{#2}}}
+\def\footernote#1{\markboth{#1}{#1}}
+
+\def\today{\ifcase\month\or
+    January\or February\or March\or April\or May\or June\or
+    July\or August\or September\or October\or November\or December\fi
+    \space\number\day, \number\year}
+
+
+
+
+%% CITATION AND BIBLIOGRAPHY COMMANDS
+%%
+%% V1.6 no longer supports the older, nonstandard \shortcite and \citename setup stuff
+%
+%
+% Modify LaTeX2e \@citex to separate citations with "], ["
+\def\@citex[#1]#2{%
+  \let\@citea\@empty
+  \@cite{\@for\@citeb:=#2\do
+    {\@citea\def\@citea{], [}%
+     \edef\@citeb{\expandafter\@firstofone\@citeb\@empty}%
+     \if@filesw\immediate\write\@auxout{\string\citation{\@citeb}}\fi
+     \@ifundefined{b@\@citeb}{\mbox{\reset@font\bfseries ?}%
+       \G@refundefinedtrue
+       \@latex@warning
+         {Citation `\@citeb' on page \thepage \space undefined}}%
+       {\hbox{\csname b@\@citeb\endcsname}}}}{#1}}
+
+% V1.6 we create hooks for the optional use of Donald Arseneau's
+% cite.sty package. cite.sty is "smart" and will notice that the
+% following format controls are already defined and will not
+% redefine them. The result will be the proper sorting of the
+% citation numbers and auto detection of 3 or more entry "ranges" -
+% all in IEEE style: [1], [2], [5]--[7], [12]
+% This also allows for an optional note, i.e., \cite[mynote]{..}.
+% If the \cite with note has more than one reference, the note will
+% be applied to the last of the listed references. It is generally
+% desired that if a note is given, only one reference is listed in
+% that \cite.
+% Thanks to Mr. Arseneau for providing the required format arguments
+% to produce the IEEE style.
+\def\citepunct{], [}
+\def\citedash{]--[}
+
+% V1.7 default to using same font for urls made by url.sty
+\AtBeginDocument{\csname url@samestyle\endcsname}
+
+% V1.6 class files should always provide these
+\def\newblock{\hskip .11em\@plus.33em\@minus.07em}
+\let\@openbib@code\@empty
+
+
+% Provide support for the control entries of IEEEtran.bst V1.00 and later.
+% V1.7 optional argument allows for a different aux file to be specified in
+% order to handle multiple bibliographies.
For example, with multibib.sty: +% \newcites{sec}{Secondary Literature} +% \bstctlcite[@auxoutsec]{BSTcontrolhak} +\def\bstctlcite{\@ifnextchar[{\@bstctlcite}{\@bstctlcite[@auxout]}} +\def\@bstctlcite[#1]#2{\@bsphack + \@for\@citeb:=#2\do{% + \edef\@citeb{\expandafter\@firstofone\@citeb}% + \if@filesw\immediate\write\csname #1\endcsname{\string\citation{\@citeb}}\fi}% + \@esphack} + +% V1.6 provide a way for a user to execute a command just before +% a given reference number - used to insert a \newpage to balance +% the columns on the last page +\edef\@IEEEtriggerrefnum{0} % the default of zero means that + % the command is not executed +\def\@IEEEtriggercmd{\newpage} + +% allow the user to alter the triggered command +\long\def\IEEEtriggercmd#1{\long\def\@IEEEtriggercmd{#1}} + +% allow user a way to specify the reference number just before the +% command is executed +\def\IEEEtriggeratref#1{\@IEEEtrantmpcountA=#1% +\edef\@IEEEtriggerrefnum{\the\@IEEEtrantmpcountA}}% + +% trigger command at the given reference +\def\@IEEEbibitemprefix{\@IEEEtrantmpcountA=\@IEEEtriggerrefnum\relax% +\advance\@IEEEtrantmpcountA by -1\relax% +\ifnum\c@enumiv=\@IEEEtrantmpcountA\relax\@IEEEtriggercmd\relax\fi} + + +\def\@biblabel#1{[#1]} + +% compsoc journals left align the reference numbers +\@IEEEcompsocnotconfonly{\def\@biblabel#1{[#1]\hfill}} + +% controls bib item spacing +\def\IEEEbibitemsep{0pt plus .5pt} + +\@IEEEcompsocconfonly{\def\IEEEbibitemsep{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}} + + +\def\thebibliography#1{\section*{\refname}% + \addcontentsline{toc}{section}{\refname}% + % V1.6 add some rubber space here and provide a command trigger + \footnotesize\@IEEEcompsocconfonly{\small}\vskip 0.3\baselineskip plus 0.1\baselineskip minus 0.1\baselineskip% + \list{\@biblabel{\@arabic\c@enumiv}}% + {\settowidth\labelwidth{\@biblabel{#1}}% + \leftmargin\labelwidth + \advance\leftmargin\labelsep\relax + \itemsep \IEEEbibitemsep\relax + \usecounter{enumiv}% + \let\p@enumiv\@empty + \renewcommand\theenumiv{\@arabic\c@enumiv}}% + \let\@IEEElatexbibitem\bibitem% + \def\bibitem{\@IEEEbibitemprefix\@IEEElatexbibitem}% +\def\newblock{\hskip .11em plus .33em minus .07em}% +% originally: +% \sloppy\clubpenalty4000\widowpenalty4000% +% by adding the \interlinepenalty here, we make it more +% difficult, but not impossible, for LaTeX to break within a reference. +% IEEE almost never breaks a reference (but they do it more often with +% technotes). You may get an underfull vbox warning around the bibliography, +% but the final result will be much more like what IEEE will publish. +% MDS 11/2000 +\ifCLASSOPTIONtechnote\sloppy\clubpenalty4000\widowpenalty4000\interlinepenalty100% +\else\sloppy\clubpenalty4000\widowpenalty4000\interlinepenalty500\fi% + \sfcode`\.=1000\relax} +\let\endthebibliography=\endlist + + + + +% TITLE PAGE COMMANDS +% +% +% \IEEEmembership is used to produce the sublargesize italic font used to indicate author +% IEEE membership. compsoc uses a large size sans slant font +\def\IEEEmembership#1{{\@IEEEnotcompsoconly{\sublargesize}\normalfont\@IEEEcompsoconly{\sffamily}\textit{#1}}} + + +% \IEEEauthorrefmark{} produces a footnote type symbol to indicate author affiliation. +% When given an argument of 1 to 9, \IEEEauthorrefmark{} follows the standard LaTeX footnote +% symbol sequence convention. However, for arguments 10 and above, \IEEEauthorrefmark{} +% reverts to using lower case roman numerals, so it cannot overflow. 
Do note that you
+% cannot use \footnotemark[] in place of \IEEEauthorrefmark{} within \author as the footnote
+% symbols will have been turned off to prevent \thanks from creating footnote marks.
+% \IEEEauthorrefmark{} produces a symbol that appears to LaTeX as having zero vertical
+% height - this allows for a more compact line packing, but the user must ensure that
+% the interline spacing is large enough to prevent \IEEEauthorrefmark{} from colliding
+% with the text above.
+% V1.7 make this a robust command
+\DeclareRobustCommand*{\IEEEauthorrefmark}[1]{\raisebox{0pt}[0pt][0pt]{\textsuperscript{\footnotesize\ensuremath{\ifcase#1\or *\or \dagger\or \ddagger\or%
+    \mathsection\or \mathparagraph\or \|\or **\or \dagger\dagger%
+    \or \ddagger\ddagger \else\textsuperscript{\expandafter\romannumeral#1}\fi}}}}
+
+
+% FONT CONTROLS AND SPACINGS FOR CONFERENCE MODE AUTHOR NAME AND AFFILIATION BLOCKS
+%
+% The default font styles for the author name and affiliation blocks (confmode)
+\def\@IEEEauthorblockNstyle{\normalfont\@IEEEcompsocnotconfonly{\sffamily}\sublargesize\@IEEEcompsocconfonly{\large}}
+\def\@IEEEauthorblockAstyle{\normalfont\@IEEEcompsocnotconfonly{\sffamily}\@IEEEcompsocconfonly{\itshape}\normalsize\@IEEEcompsocconfonly{\large}}
+% The default if the user does not use an author block
+\def\@IEEEauthordefaulttextstyle{\normalfont\@IEEEcompsocnotconfonly{\sffamily}\sublargesize}
+
+% spacing from title (or special paper notice) to author name blocks (confmode)
+% can be negative
+\def\@IEEEauthorblockconfadjspace{-0.25em}
+% compsoc conferences need more space here
+\@IEEEcompsocconfonly{\def\@IEEEauthorblockconfadjspace{0.75\@IEEEnormalsizeunitybaselineskip}}
+
+% spacing between name and affiliation blocks (confmode)
+% This can be negative.
+% IEEE doesn't want any added spacing here, but I will leave these
+% controls in place in case they ever change their mind.
+% Personally, I like 0.75ex.
+%\def\@IEEEauthorblockNtopspace{0.75ex}
+%\def\@IEEEauthorblockAtopspace{0.75ex}
+\def\@IEEEauthorblockNtopspace{0.0ex}
+\def\@IEEEauthorblockAtopspace{0.0ex}
+% baseline spacing within name and affiliation blocks (confmode)
+% must be positive, spacings below certain values will make
+% the position of a line of text sensitive to the contents of the
+% line above it, i.e., whether or not the prior line has descenders,
+% subscripts, etc. For this reason it is a good idea to keep
+% these above 2.6ex
+\def\@IEEEauthorblockNinterlinespace{2.6ex}
+\def\@IEEEauthorblockAinterlinespace{2.75ex}
+
+% This tracks the required strut size.
+% See the \@IEEEauthorhalign command for the actual default value used.
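+% (the 2.7ex assigned just below is only an initial value; \@IEEEauthorhalign
+% and the author block commands redefine it for each block)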
+\def\@IEEEauthorblockXinterlinespace{2.7ex}
+
+% variables to retain font size and style across groups
+% values given here have no effect as they will be overwritten later
+\gdef\@IEEESAVESTATEfontsize{10}
+\gdef\@IEEESAVESTATEfontbaselineskip{12}
+\gdef\@IEEESAVESTATEfontencoding{OT1}
+\gdef\@IEEESAVESTATEfontfamily{ptm}
+\gdef\@IEEESAVESTATEfontseries{m}
+\gdef\@IEEESAVESTATEfontshape{n}
+
+% saves the current font attributes
+\def\@IEEEcurfontSAVE{\global\let\@IEEESAVESTATEfontsize\f@size%
+\global\let\@IEEESAVESTATEfontbaselineskip\f@baselineskip%
+\global\let\@IEEESAVESTATEfontencoding\f@encoding%
+\global\let\@IEEESAVESTATEfontfamily\f@family%
+\global\let\@IEEESAVESTATEfontseries\f@series%
+\global\let\@IEEESAVESTATEfontshape\f@shape}
+
+% restores the saved font attributes
+\def\@IEEEcurfontRESTORE{\fontsize{\@IEEESAVESTATEfontsize}{\@IEEESAVESTATEfontbaselineskip}%
+\fontencoding{\@IEEESAVESTATEfontencoding}%
+\fontfamily{\@IEEESAVESTATEfontfamily}%
+\fontseries{\@IEEESAVESTATEfontseries}%
+\fontshape{\@IEEESAVESTATEfontshape}%
+\selectfont}
+
+
+% variable to indicate if the current block is the first block in the column
+\newif\if@IEEEprevauthorblockincol \@IEEEprevauthorblockincolfalse
+
+
+% the command places a strut with height and depth = \@IEEEauthorblockXinterlinespace
+% we use this technique to have complete manual control over the spacing of the lines
+% within the halign environment.
+% We set the below baseline portion at 30%, the above
+% baseline portion at 70% of the total length.
+% Responds to changes in the document's \baselinestretch
+\def\@IEEEauthorstrutrule{\@IEEEtrantmpdimenA\@IEEEauthorblockXinterlinespace%
+\@IEEEtrantmpdimenA=\baselinestretch\@IEEEtrantmpdimenA%
+\rule[-0.3\@IEEEtrantmpdimenA]{0pt}{\@IEEEtrantmpdimenA}}
+
+
+% blocks to hold the authors' names and affiliations.
+% Makes formatting easy for conferences +% +% use real definitions in conference mode +% name block +\def\IEEEauthorblockN#1{\relax\@IEEEauthorblockNstyle% set the default text style +\gdef\@IEEEauthorblockXinterlinespace{0pt}% disable strut for spacer row +% the \expandafter hides the \cr in conditional tex, see the array.sty docs +% for details, probably not needed here as the \cr is in a macro +% do a spacer row if needed +\if@IEEEprevauthorblockincol\expandafter\@IEEEauthorblockNtopspaceline\fi +\global\@IEEEprevauthorblockincoltrue% we now have a block in this column +%restore the correct strut value +\gdef\@IEEEauthorblockXinterlinespace{\@IEEEauthorblockNinterlinespace}% +% input the author names +#1% +% end the row if the user did not already +\crcr} +% spacer row for names +\def\@IEEEauthorblockNtopspaceline{\cr\noalign{\vskip\@IEEEauthorblockNtopspace}} +% +% affiliation block +\def\IEEEauthorblockA#1{\relax\@IEEEauthorblockAstyle% set the default text style +\gdef\@IEEEauthorblockXinterlinespace{0pt}%disable strut for spacer row +% the \expandafter hides the \cr in conditional tex, see the array.sty docs +% for details, probably not needed here as the \cr is in a macro +% do a spacer row if needed +\if@IEEEprevauthorblockincol\expandafter\@IEEEauthorblockAtopspaceline\fi +\global\@IEEEprevauthorblockincoltrue% we now have a block in this column +%restore the correct strut value +\gdef\@IEEEauthorblockXinterlinespace{\@IEEEauthorblockAinterlinespace}% +% input the author affiliations +#1% +% end the row if the user did not already +\crcr} +% spacer row for affiliations +\def\@IEEEauthorblockAtopspaceline{\cr\noalign{\vskip\@IEEEauthorblockAtopspace}} + + +% allow papers to compile even if author blocks are used in modes other +% than conference or peerreviewca. For such cases, we provide dummy blocks. +\ifCLASSOPTIONconference +\else + \ifCLASSOPTIONpeerreviewca\else + % not conference or peerreviewca mode + \def\IEEEauthorblockN#1{#1}% + \def\IEEEauthorblockA#1{#1}% + \fi +\fi + + + +% we provide our own halign so as not to have to depend on tabular +\def\@IEEEauthorhalign{\@IEEEauthordefaulttextstyle% default text style + \lineskip=0pt\relax% disable line spacing + \lineskiplimit=0pt\relax% + \baselineskip=0pt\relax% + \@IEEEcurfontSAVE% save the current font + \mathsurround\z@\relax% no extra spacing around math + \let\\\@IEEEauthorhaligncr% replace newline with halign friendly one + \tabskip=0pt\relax% no column spacing + \everycr{}% ensure no problems here + \@IEEEprevauthorblockincolfalse% no author blocks yet + \def\@IEEEauthorblockXinterlinespace{2.7ex}% default interline space + \vtop\bgroup%vtop box + \halign\bgroup&\relax\hfil\@IEEEcurfontRESTORE\relax ##\relax + \hfil\@IEEEcurfontSAVE\@IEEEauthorstrutrule\cr} + +% ensure last line, exit from halign, close vbox +\def\end@IEEEauthorhalign{\crcr\egroup\egroup} + +% handle bogus star form +\def\@IEEEauthorhaligncr{{\ifnum0=`}\fi\@ifstar{\@@IEEEauthorhaligncr}{\@@IEEEauthorhaligncr}} + +% test and setup the optional argument to \\[] +\def\@@IEEEauthorhaligncr{\@testopt\@@@IEEEauthorhaligncr\z@skip} + +% end the line and do the optional spacer +\def\@@@IEEEauthorhaligncr[#1]{\ifnum0=`{\fi}\cr\noalign{\vskip#1\relax}} + + + +% flag to prevent multiple \and warning messages +\newif\if@IEEEWARNand +\@IEEEWARNandtrue + +% if in conference or peerreviewca modes, we support the use of \and as \author is a +% tabular environment, otherwise we warn the user that \and is invalid +% outside of conference or peerreviewca modes. 
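+% e.g. (illustrative, conference mode):
+%   \author{\IEEEauthorblockN{A. Author}\IEEEauthorblockA{University A}
+%   \and
+%   \IEEEauthorblockN{B. Author}\IEEEauthorblockA{University B}}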
+\def\and{\relax} % provide a bogus \and that we will then override
+
+\renewcommand{\and}[1][\relax]{\if@IEEEWARNand\typeout{** WARNING: \noexpand\and is valid only
+  when in conference or peerreviewca}\typeout{modes (line \the\inputlineno).}\fi\global\@IEEEWARNandfalse}
+
+\ifCLASSOPTIONconference%
+\renewcommand{\and}[1][\hfill]{\end{@IEEEauthorhalign}#1\begin{@IEEEauthorhalign}}%
+\fi
+\ifCLASSOPTIONpeerreviewca
+\renewcommand{\and}[1][\hfill]{\end{@IEEEauthorhalign}#1\begin{@IEEEauthorhalign}}%
+\fi
+
+
+% page clearing command
+% based on LaTeX2e's \cleardoublepage, but allows different page styles
+% for the inserted blank pages
+\def\@IEEEcleardoublepage#1{\clearpage\if@twoside\ifodd\c@page\else
+\hbox{}\thispagestyle{#1}\newpage\if@twocolumn\hbox{}\thispagestyle{#1}\newpage\fi\fi\fi}
+
+
+% user command to invoke the title page
+\def\maketitle{\par%
+  \begingroup%
+  \normalfont%
+  \def\thefootnote{}% the \thanks{} mark type is empty
+  \def\footnotemark{}% and kill space from \thanks within author
+  \let\@makefnmark\relax% V1.7, must *really* kill footnotemark to remove all \textsuperscript spacing as well.
+  \footnotesize% equal spacing between thanks lines
+  \footnotesep 0.7\baselineskip%see global setting of \footnotesep for more info
+  % V1.7 disable \thanks note indention for compsoc
+  \@IEEEcompsoconly{\long\def\@makefntext##1{\parindent 1em\noindent\hbox{\@makefnmark}##1}}%
+  \normalsize%
+  \ifCLASSOPTIONpeerreview
+    \newpage\global\@topnum\z@ \@maketitle\@IEEEstatictitlevskip\@IEEEaftertitletext%
+    \thispagestyle{IEEEpeerreviewcoverpagestyle}\@thanks%
+  \else
+    \if@twocolumn%
+      \ifCLASSOPTIONtechnote%
+        \newpage\global\@topnum\z@ \@maketitle\@IEEEstatictitlevskip\@IEEEaftertitletext%
+      \else
+        \twocolumn[\@maketitle\@IEEEdynamictitlevspace\@IEEEaftertitletext]%
+      \fi
+    \else
+      \newpage\global\@topnum\z@ \@maketitle\@IEEEstatictitlevskip\@IEEEaftertitletext%
+    \fi
+    \thispagestyle{IEEEtitlepagestyle}\@thanks%
+  \fi
+  % pullup page for pubid if used.
+  \if@IEEEusingpubid
+    \enlargethispage{-\@IEEEpubidpullup}%
+  \fi
+  \endgroup
+  \setcounter{footnote}{0}\let\maketitle\relax\let\@maketitle\relax
+  \gdef\@thanks{}%
+  % v1.6b do not clear these as we will need the title again for peer review papers
+  % \gdef\@author{}\gdef\@title{}%
+  \let\thanks\relax}
+
+
+
+% V1.7 parbox to format \@IEEEcompsoctitleabstractindextext
+\long\def\@IEEEcompsoctitleabstractindextextbox#1{\parbox{0.915\textwidth}{#1}}
+
+% formats the Title, authors' names, affiliations and special paper notice
+% THIS IS A CONTROLLED SPACING COMMAND!
Do not allow blank lines or unintentional +% spaces to enter the definition - use % at the end of each line +\def\@maketitle{\newpage +\begin{center}% +\ifCLASSOPTIONtechnote% technotes + {\bfseries\large\@IEEEcompsoconly{\sffamily}\@title\par}\vskip 1.3em{\lineskip .5em\@IEEEcompsoconly{\sffamily}\@author + \@IEEEspecialpapernotice\par{\@IEEEcompsoconly{\vskip 1.5em\relax + \@IEEEcompsoctitleabstractindextextbox{\@IEEEcompsoctitleabstractindextext}\par + \hfill\@IEEEcompsocdiamondline\hfill\hbox{}\par}}}\relax +\else% not a technote + \vskip0.2em{\Huge\@IEEEcompsoconly{\sffamily}\@IEEEcompsocconfonly{\normalfont\normalsize\vskip 2\@IEEEnormalsizeunitybaselineskip + \bfseries\Large}\@title\par}\vskip1.0em\par% + % V1.6 handle \author differently if in conference mode + \ifCLASSOPTIONconference% + {\@IEEEspecialpapernotice\mbox{}\vskip\@IEEEauthorblockconfadjspace% + \mbox{}\hfill\begin{@IEEEauthorhalign}\@author\end{@IEEEauthorhalign}\hfill\mbox{}\par}\relax + \else% peerreviewca, peerreview or journal + \ifCLASSOPTIONpeerreviewca + % peerreviewca handles author names just like conference mode + {\@IEEEcompsoconly{\sffamily}\@IEEEspecialpapernotice\mbox{}\vskip\@IEEEauthorblockconfadjspace% + \mbox{}\hfill\begin{@IEEEauthorhalign}\@author\end{@IEEEauthorhalign}\hfill\mbox{}\par + {\@IEEEcompsoconly{\vskip 1.5em\relax + \@IEEEcompsoctitleabstractindextextbox{\@IEEEcompsoctitleabstractindextext}\par\hfill + \@IEEEcompsocdiamondline\hfill\hbox{}\par}}}\relax + \else% journal or peerreview + {\lineskip.5em\@IEEEcompsoconly{\sffamily}\sublargesize\@author\@IEEEspecialpapernotice\par + {\@IEEEcompsoconly{\vskip 1.5em\relax + \@IEEEcompsoctitleabstractindextextbox{\@IEEEcompsoctitleabstractindextext}\par\hfill + \@IEEEcompsocdiamondline\hfill\hbox{}\par}}}\relax + \fi + \fi +\fi\end{center}} + + + +% V1.7 Computer Society "diamond line" which follows index terms for nonconference papers +\def\@IEEEcompsocdiamondline{\vrule depth 0pt height 0.5pt width 4cm\hspace{7.5pt}% +\raisebox{-3.5pt}{\fontfamily{pzd}\fontencoding{U}\fontseries{m}\fontshape{n}\fontsize{11}{12}\selectfont\char70}% +\hspace{7.5pt}\vrule depth 0pt height 0.5pt width 4cm\relax} + +% V1.7 standard LateX2e \thanks, but with \itshape under compsoc. Also make it a \long\def +% We also need to trigger the one-shot footnote rule +\def\@IEEEtriggeroneshotfootnoterule{\global\@IEEEenableoneshotfootnoteruletrue} + + +\long\def\thanks#1{\footnotemark + \protected@xdef\@thanks{\@thanks + \protect\footnotetext[\the\c@footnote]{\@IEEEcompsoconly{\itshape + \protect\@IEEEtriggeroneshotfootnoterule\relax}\ignorespaces#1}}} +\let\@thanks\@empty + +% V1.7 allow \author to contain \par's. This is needed to allow \thanks to contain \par. +\long\def\author#1{\gdef\@author{#1}} + + +% in addition to setting up IEEEitemize, we need to remove a baselineskip space above and +% below it because \list's \pars introduce blank lines because of the footnote struts. 
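+% e.g. (illustrative, compsoc mode, using the commands defined below):
+%   \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem A. Author is with X.
+%   \IEEEcompsocthanksitem B. Author is with Y.}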
+\def\@IEEEsetupcompsocitemizelist{\def\labelitemi{$\bullet$}% +\setlength{\IEEElabelindent}{0pt}\setlength{\parskip}{0pt}% +\setlength{\partopsep}{0pt}\setlength{\topsep}{0.5\baselineskip}\vspace{-1\baselineskip}\relax} + + +% flag for fake non-compsoc \IEEEcompsocthanksitem - prevents line break on very first item +\newif\if@IEEEbreakcompsocthanksitem \@IEEEbreakcompsocthanksitemfalse + +\ifCLASSOPTIONcompsoc +% V1.7 compsoc bullet item \thanks +% also, we need to redefine this to destroy the argument in \@IEEEdynamictitlevspace +\long\def\IEEEcompsocitemizethanks#1{\relax\@IEEEbreakcompsocthanksitemfalse\footnotemark + \protected@xdef\@thanks{\@thanks + \protect\footnotetext[\the\c@footnote]{\itshape\protect\@IEEEtriggeroneshotfootnoterule + {\let\IEEEiedlistdecl\relax\protect\begin{IEEEitemize}[\protect\@IEEEsetupcompsocitemizelist]\ignorespaces#1\relax + \protect\end{IEEEitemize}}\protect\vspace{-1\baselineskip}}}} +\DeclareRobustCommand*{\IEEEcompsocthanksitem}{\item} +\else +% non-compsoc, allow for dual compilation via rerouting to normal \thanks +\long\def\IEEEcompsocitemizethanks#1{\thanks{#1}} +% redirect to "pseudo-par" \hfil\break\indent after swallowing [] from \IEEEcompsocthanksitem[] +\DeclareRobustCommand{\IEEEcompsocthanksitem}{\@ifnextchar [{\@IEEEthanksswallowoptionalarg}% +{\@IEEEthanksswallowoptionalarg[\relax]}} +% be sure and break only after first item, be sure and ignore spaces after optional argument +\def\@IEEEthanksswallowoptionalarg[#1]{\relax\if@IEEEbreakcompsocthanksitem\hfil\break +\indent\fi\@IEEEbreakcompsocthanksitemtrue\ignorespaces} +\fi + + +% V1.6b define the \IEEEpeerreviewmaketitle as needed +\ifCLASSOPTIONpeerreview +\def\IEEEpeerreviewmaketitle{\@IEEEcleardoublepage{empty}% +\ifCLASSOPTIONtwocolumn +\twocolumn[\@IEEEpeerreviewmaketitle\@IEEEdynamictitlevspace] +\else +\newpage\@IEEEpeerreviewmaketitle\@IEEEstatictitlevskip +\fi +\thispagestyle{IEEEtitlepagestyle}} +\else +% \IEEEpeerreviewmaketitle does nothing if peer review option has not been selected +\def\IEEEpeerreviewmaketitle{\relax} +\fi + +% peerreview formats the repeated title like the title in journal papers. +\def\@IEEEpeerreviewmaketitle{\begin{center}\@IEEEcompsoconly{\sffamily}% +\normalfont\normalsize\vskip0.2em{\Huge\@title\par}\vskip1.0em\par +\end{center}} + + + +% V1.6 +% this is a static rubber spacer between the title/authors and the main text +% used for single column text, or when the title appears in the first column +% of two column text (technotes). +\def\@IEEEstatictitlevskip{{\normalfont\normalsize +% adjust spacing to next text +% v1.6b handle peer review papers +\ifCLASSOPTIONpeerreview +% for peer review papers, the same value is used for both title pages +% regardless of the other paper modes + \vskip 1\baselineskip plus 0.375\baselineskip minus 0.1875\baselineskip +\else + \ifCLASSOPTIONconference% conference + \vskip 1\baselineskip plus 0.375\baselineskip minus 0.1875\baselineskip% + \else% + \ifCLASSOPTIONtechnote% technote + \vskip 1\baselineskip plus 0.375\baselineskip minus 0.1875\baselineskip% + \else% journal uses more space + \vskip 2.5\baselineskip plus 0.75\baselineskip minus 0.375\baselineskip% + \fi + \fi +\fi}} + + +% V1.6 +% This is a dynamically determined rigid spacer between the title/authors +% and the main text. 
This is used only for single column titles over two
+% column text (most common)
+% This is a bit tricky because we have to ensure that the textheight of the
+% main text is an integer multiple of \baselineskip
+% otherwise underfull vbox problems may develop in the second column of the
+% text on the titlepage
+% The possible use of \IEEEpubid must also be taken into account.
+\def\@IEEEdynamictitlevspace{{%
+ % we run within a group so that all the macros can be forgotten when we are done
+ \long\def\thanks##1{\relax}%don't allow \thanks to run when we evaluate the vbox height
+ \long\def\IEEEcompsocitemizethanks##1{\relax}%don't allow \IEEEcompsocitemizethanks to run when we evaluate the vbox height
+ \normalfont\normalsize% we declare more descriptive variable names
+ \let\@IEEEmaintextheight=\@IEEEtrantmpdimenA%height of the main text columns
+ \let\@IEEEINTmaintextheight=\@IEEEtrantmpdimenB%height of the main text columns with integer # lines
+ % set the nominal and minimum values for the title spacer
+ % the dynamic algorithm will not allow the spacer size to
+ % become less than \@IEEEMINtitlevspace - instead it will be
+ % lengthened
+ % default to journal values
+ \def\@IEEENORMtitlevspace{2.5\baselineskip}%
+ \def\@IEEEMINtitlevspace{2\baselineskip}%
+ % conferences and technotes need tighter spacing
+ \ifCLASSOPTIONconference%conference
+ \def\@IEEENORMtitlevspace{1\baselineskip}%
+ \def\@IEEEMINtitlevspace{0.75\baselineskip}%
+ \fi
+ \ifCLASSOPTIONtechnote%technote
+ \def\@IEEENORMtitlevspace{1\baselineskip}%
+ \def\@IEEEMINtitlevspace{0.75\baselineskip}%
+ \fi%
+ % get the height that the title will take up
+ \ifCLASSOPTIONpeerreview
+ \settoheight{\@IEEEmaintextheight}{\vbox{\hsize\textwidth \@IEEEpeerreviewmaketitle}}%
+ \else
+ \settoheight{\@IEEEmaintextheight}{\vbox{\hsize\textwidth \@maketitle}}%
+ \fi
+ \@IEEEmaintextheight=-\@IEEEmaintextheight% title takes away from maintext, so reverse sign
+ % add the height of the page textheight
+ \advance\@IEEEmaintextheight by \textheight%
+ % correct for title pages using pubid
+ \ifCLASSOPTIONpeerreview\else
+ % peerreview papers use the pubid on the cover page only.
+ % And the cover page uses a static spacer.
+ \if@IEEEusingpubid\advance\@IEEEmaintextheight by -\@IEEEpubidpullup\fi
+ \fi%
+ % subtract off the nominal value of the title bottom spacer
+ \advance\@IEEEmaintextheight by -\@IEEENORMtitlevspace%
+ % \topskip takes away some too
+ \advance\@IEEEmaintextheight by -\topskip%
+ % calculate the column height of the main text in lines
+ % now we calculate the main text height as if holding
+ % an integer number of \normalsize lines after the first
+ % and discard any excess fractional remainder
+ % we subtracted the first line, because the first line
+ % is placed \topskip into the maintext, not \baselineskip like the
+ % rest of the lines.
+ \@IEEEINTmaintextheight=\@IEEEmaintextheight%
+ \divide\@IEEEINTmaintextheight by \baselineskip%
+ \multiply\@IEEEINTmaintextheight by \baselineskip%
+ % now we calculate how much the title spacer height will
+ % have to be reduced from nominal (\@IEEEREDUCEmaintextheight is always
+ % a positive value) so that the maintext area will contain an integer
+ % number of normal size lines
+ % we change variable names here (to avoid confusion) as we no longer
+ % need \@IEEEINTmaintextheight and can reuse its dimen register
+ \let\@IEEEREDUCEmaintextheight=\@IEEEINTmaintextheight%
+ \advance\@IEEEREDUCEmaintextheight by -\@IEEEmaintextheight%
+ \advance\@IEEEREDUCEmaintextheight by \baselineskip%
+ % this is the calculated height of the spacer
+ % we change variable names here (to avoid confusion) as we no longer
+ % need \@IEEEmaintextheight and can reuse its dimen register
+ \let\@IEEECOMPENSATElen=\@IEEEmaintextheight%
+ \@IEEECOMPENSATElen=\@IEEENORMtitlevspace% set the nominal value
+ % we go with the reduced length if it is smaller than an increase
+ \ifdim\@IEEEREDUCEmaintextheight < 0.5\baselineskip\relax%
+ \advance\@IEEECOMPENSATElen by -\@IEEEREDUCEmaintextheight%
+ % if the resulting spacer is too small, back out and go with an increase instead
+ \ifdim\@IEEECOMPENSATElen<\@IEEEMINtitlevspace\relax%
+ \advance\@IEEECOMPENSATElen by \baselineskip%
+ \fi%
+ \else%
+ % go with an increase because it is closer to the nominal than a decrease
+ \advance\@IEEECOMPENSATElen by -\@IEEEREDUCEmaintextheight%
+ \advance\@IEEECOMPENSATElen by \baselineskip%
+ \fi%
+ % set the calculated rigid spacer
+ \vspace{\@IEEECOMPENSATElen}}}
+
+
+
+% V1.6
+% we allow the user access to the last part of the title area
+% useful in emergencies such as when a different spacing is needed
+% This text is NOT compensated for in the dynamic sizer.
+\let\@IEEEaftertitletext=\relax
+\long\def\IEEEaftertitletext#1{\def\@IEEEaftertitletext{#1}}
+
+% V1.7 provide a way for users to enter abstract and keywords
+% into the onecolumn title area. This text is compensated for
+% in the dynamic sizer.
+\let\@IEEEcompsoctitleabstractindextext=\relax
+\long\def\IEEEcompsoctitleabstractindextext#1{\def\@IEEEcompsoctitleabstractindextext{#1}}
+% V1.7 provide a way for users to get the \@IEEEcompsoctitleabstractindextext if
+% not in compsoc journal mode - this way abstract and keywords can be placed
+% in their conventional position if not in compsoc mode.
+\def\IEEEdisplaynotcompsoctitleabstractindextext{%
+\ifCLASSOPTIONcompsoc% display if compsoc conf
+\ifCLASSOPTIONconference\@IEEEcompsoctitleabstractindextext\fi
+\else% or if not compsoc
+\@IEEEcompsoctitleabstractindextext\fi}
+
+
+% command to allow alteration of baselinestretch, but only if the current
+% baselineskip is unity. Used to tweak the compsoc abstract and keywords line spacing.
+\def\@IEEEtweakunitybaselinestretch#1{{\def\baselinestretch{1}\selectfont +\global\@tempskipa\baselineskip}\ifnum\@tempskipa=\baselineskip% +\def\baselinestretch{#1}\selectfont\fi\relax} + + +% abstract and keywords are in \small, except +% for 9pt docs in which they are in \footnotesize +% Because 9pt docs use an 8pt footnotesize, \small +% becomes a rather awkward 8.5pt +\def\@IEEEabskeysecsize{\small} +\ifx\CLASSOPTIONpt\@IEEEptsizenine + \def\@IEEEabskeysecsize{\footnotesize} +\fi + +% compsoc journals use \footnotesize, compsoc conferences use normalsize +\@IEEEcompsoconly{\def\@IEEEabskeysecsize{\footnotesize}} +\@IEEEcompsocconfonly{\def\@IEEEabskeysecsize{\normalsize}} + + + + +% V1.6 have abstract and keywords strip leading spaces, pars and newlines +% so that spacing is more tightly controlled. +\def\abstract{\normalfont + \if@twocolumn + \@IEEEabskeysecsize\bfseries\textit{\abstractname}---\relax + \else + \begin{center}\vspace{-1.78ex}\@IEEEabskeysecsize\textbf{\abstractname}\end{center}\quotation\@IEEEabskeysecsize + \fi\@IEEEgobbleleadPARNLSP} +% V1.6 IEEE wants only 1 pica from end of abstract to introduction heading when in +% conference mode (the heading already has this much above it) +\def\endabstract{\relax\ifCLASSOPTIONconference\vspace{0ex}\else\vspace{1.34ex}\fi\par\if@twocolumn\else\endquotation\fi + \normalfont\normalsize} + +\def\IEEEkeywords{\normalfont + \if@twocolumn + \@IEEEabskeysecsize\bfseries\textit{\IEEEkeywordsname}---\relax + \else + \begin{center}\@IEEEabskeysecsize\textbf{\IEEEkeywordsname}\end{center}\quotation\@IEEEabskeysecsize + \fi\@IEEEgobbleleadPARNLSP} +\def\endIEEEkeywords{\relax\ifCLASSOPTIONtechnote\vspace{1.34ex}\else\vspace{0.67ex}\fi + \par\if@twocolumn\else\endquotation\fi% + \normalfont\normalsize} + +% V1.7 compsoc keywords index terms +\ifCLASSOPTIONcompsoc + \ifCLASSOPTIONconference% compsoc conference +\def\abstract{\normalfont + \begin{center}\@IEEEabskeysecsize\textbf{\large\abstractname}\end{center}\vskip 0.5\baselineskip plus 0.1\baselineskip minus 0.1\baselineskip + \if@twocolumn\else\quotation\fi\itshape\@IEEEabskeysecsize% + \par\@IEEEgobbleleadPARNLSP} +\def\IEEEkeywords{\normalfont\vskip 1.5\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip + \begin{center}\@IEEEabskeysecsize\textbf{\large\IEEEkeywordsname}\end{center}\vskip 0.5\baselineskip plus 0.1\baselineskip minus 0.1\baselineskip + \if@twocolumn\else\quotation\fi\itshape\@IEEEabskeysecsize% + \par\@IEEEgobbleleadPARNLSP} + \else% compsoc not conference +\def\abstract{\normalfont\@IEEEtweakunitybaselinestretch{1.15}\sffamily + \if@twocolumn + \@IEEEabskeysecsize\noindent\textbf{\abstractname}---\relax + \else + \begin{center}\vspace{-1.78ex}\@IEEEabskeysecsize\textbf{\abstractname}\end{center}\quotation\@IEEEabskeysecsize% + \fi\@IEEEgobbleleadPARNLSP} +\def\IEEEkeywords{\normalfont\@IEEEtweakunitybaselinestretch{1.15}\sffamily + \if@twocolumn + \@IEEEabskeysecsize\vskip 0.5\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip\noindent + \textbf{\IEEEkeywordsname}---\relax + \else + \begin{center}\@IEEEabskeysecsize\textbf{\IEEEkeywordsname}\end{center}\quotation\@IEEEabskeysecsize% + \fi\@IEEEgobbleleadPARNLSP} + \fi +\fi + + + +% gobbles all leading \, \\ and \par, upon finding first token that +% is not a \ , \\ or a \par, it ceases and returns that token +% +% used to strip leading \, \\ and \par from the input +% so that such things in the beginning of an environment will not +% affect the formatting of the text 
+\long\def\@IEEEgobbleleadPARNLSP#1{\let\@IEEEswallowthistoken=0%
+\let\@IEEEgobbleleadPARNLSPtoken#1%
+\let\@IEEEgobbleleadPARtoken=\par%
+\let\@IEEEgobbleleadNLtoken=\\%
+\let\@IEEEgobbleleadSPtoken=\ %
+\def\@IEEEgobbleleadSPMACRO{\ }%
+\ifx\@IEEEgobbleleadPARNLSPtoken\@IEEEgobbleleadPARtoken%
+\let\@IEEEswallowthistoken=1%
+\fi%
+\ifx\@IEEEgobbleleadPARNLSPtoken\@IEEEgobbleleadNLtoken%
+\let\@IEEEswallowthistoken=1%
+\fi%
+\ifx\@IEEEgobbleleadPARNLSPtoken\@IEEEgobbleleadSPtoken%
+\let\@IEEEswallowthistoken=1%
+\fi%
+% a control space will come in as a macro
+% when it is the last one on a line
+\ifx\@IEEEgobbleleadPARNLSPtoken\@IEEEgobbleleadSPMACRO%
+\let\@IEEEswallowthistoken=1%
+\fi%
+% if we have to swallow this token, do so and taste the next one
+% else spit it out and stop gobbling
+\ifx\@IEEEswallowthistoken 1\let\@IEEEnextgobbleleadPARNLSP=\@IEEEgobbleleadPARNLSP\else%
+\let\@IEEEnextgobbleleadPARNLSP=#1\fi%
+\@IEEEnextgobbleleadPARNLSP}%
+
+
+
+
+% TITLING OF SECTIONS
+\def\@IEEEsectpunct{:\ \,} % Punctuation after run-in section heading (headings which are
+ % part of the paragraphs), need a little bit more than a single space
+ % spacing from section number to title
+% compsoc conferences use regular period/space punctuation
+\ifCLASSOPTIONcompsoc
+\ifCLASSOPTIONconference
+\def\@IEEEsectpunct{.\ }
+\fi\fi
+
+
+\def\@seccntformat#1{\csname the#1dis\endcsname\hskip 0.5em\relax}
+
+\ifCLASSOPTIONcompsoc
+% compsoc journals need extra spacing
+\ifCLASSOPTIONconference\else
+\def\@seccntformat#1{\csname the#1dis\endcsname\hskip 1em\relax}
+\fi\fi
+
+%v1.7 put {} after #6 to allow for some types of user font control
+%and use \@@par rather than \par
+\def\@sect#1#2#3#4#5#6[#7]#8{%
+ \ifnum #2>\c@secnumdepth
+ \let\@svsec\@empty
+ \else
+ \refstepcounter{#1}%
+ % load section label and spacer into \@svsec
+ \protected@edef\@svsec{\@seccntformat{#1}\relax}%
+ \fi%
+ \@tempskipa #5\relax
+ \ifdim \@tempskipa>\z@% tempskipa determines whether it is treated as a high
+ \begingroup #6{\relax% or low level heading
+ \noindent % subsections are NOT indented
+ % print top level headings.
\@svsec is label, #8 is heading title + % IEEE does not block indent the section title text, it flows like normal + {\hskip #3\relax\@svsec}{\interlinepenalty \@M #8\@@par}}% + \endgroup + \addcontentsline{toc}{#1}{\ifnum #2>\c@secnumdepth\relax\else + \protect\numberline{\csname the#1\endcsname}\fi#7}% + \else % printout low level headings + % svsechd seems to swallow the trailing space, protect it with \mbox{} + % got rid of sectionmark stuff + \def\@svsechd{#6{\hskip #3\relax\@svsec #8\@IEEEsectpunct\mbox{}}% + \addcontentsline{toc}{#1}{\ifnum #2>\c@secnumdepth\relax\else + \protect\numberline{\csname the#1\endcsname}\fi#7}}% + \fi%skip down + \@xsect{#5}} + + +% section* handler +%v1.7 put {} after #4 to allow for some types of user font control +%and use \@@par rather than \par +\def\@ssect#1#2#3#4#5{\@tempskipa #3\relax + \ifdim \@tempskipa>\z@ + %\begingroup #4\@hangfrom{\hskip #1}{\interlinepenalty \@M #5\par}\endgroup + % IEEE does not block indent the section title text, it flows like normal + \begingroup \noindent #4{\relax{\hskip #1}{\interlinepenalty \@M #5\@@par}}\endgroup + % svsechd swallows the trailing space, protect it with \mbox{} + \else \def\@svsechd{#4{\hskip #1\relax #5\@IEEEsectpunct\mbox{}}}\fi + \@xsect{#3}} + + +%% SECTION heading spacing and font +%% +% arguments are: #1 - sectiontype name +% (for \@sect) #2 - section level +% #3 - section heading indent +% #4 - top separation (absolute value used, neg indicates not to indent main text) +% If negative, make stretch parts negative too! +% #5 - (absolute value used) positive: bottom separation after heading, +% negative: amount to indent main text after heading +% Both #4 and #5 negative means to indent main text and use negative top separation +% #6 - font control +% You've got to have \normalfont\normalsize in the font specs below to prevent +% trouble when you do something like: +% \section{Note}{\ttfamily TT-TEXT} is known to ... +% IEEE sometimes REALLY stretches the area before a section +% heading by up to about 0.5in. However, it may not be a good +% idea to let LaTeX have quite this much rubber. 
+\ifCLASSOPTIONconference% +% IEEE wants section heading spacing to decrease for conference mode +\def\section{\@startsection{section}{1}{\z@}{1.5ex plus 1.5ex minus 0.5ex}% +{0.7ex plus 1ex minus 0ex}{\normalfont\normalsize\centering\scshape}}% +\def\subsection{\@startsection{subsection}{2}{\z@}{1.5ex plus 1.5ex minus 0.5ex}% +{0.7ex plus .5ex minus 0ex}{\normalfont\normalsize\itshape}}% +\else % for journals +\def\section{\@startsection{section}{1}{\z@}{3.0ex plus 1.5ex minus 1.5ex}% V1.6 3.0ex from 3.5ex +{0.7ex plus 1ex minus 0ex}{\normalfont\normalsize\centering\scshape}}% +\def\subsection{\@startsection{subsection}{2}{\z@}{3.5ex plus 1.5ex minus 1.5ex}% +{0.7ex plus .5ex minus 0ex}{\normalfont\normalsize\itshape}}% +\fi + +% for both journals and conferences +% decided to put in a little rubber above the section, might help somebody +\def\subsubsection{\@startsection{subsubsection}{3}{\parindent}{0ex plus 0.1ex minus 0.1ex}% +{0ex}{\normalfont\normalsize\itshape}}% +\def\paragraph{\@startsection{paragraph}{4}{2\parindent}{0ex plus 0.1ex minus 0.1ex}% +{0ex}{\normalfont\normalsize\itshape}}% + + +% compsoc +\ifCLASSOPTIONcompsoc +\ifCLASSOPTIONconference +% compsoc conference +\def\section{\@startsection{section}{1}{\z@}{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}% +{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}{\normalfont\large\bfseries}}% +\def\subsection{\@startsection{subsection}{2}{\z@}{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}% +{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}{\normalfont\sublargesize\bfseries}}% +\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{1\baselineskip plus 0.25\baselineskip minus 0.25\baselineskip}% +{0ex}{\normalfont\normalsize\bfseries}}% +\def\paragraph{\@startsection{paragraph}{4}{2\parindent}{0ex plus 0.1ex minus 0.1ex}% +{0ex}{\normalfont\normalsize}}% +\else% compsoc journals +% use negative top separation as compsoc journals do not indent paragraphs after section titles +\def\section{\@startsection{section}{1}{\z@}{-3ex plus -2ex minus -1.5ex}% +{0.7ex plus 1ex minus 0ex}{\normalfont\large\sffamily\bfseries\scshape}}% +% Note that subsection and smaller may not be correct for the Computer Society, +% I have to look up an example. +\def\subsection{\@startsection{subsection}{2}{\z@}{-3.5ex plus -1.5ex minus -1.5ex}% +{0.7ex plus .5ex minus 0ex}{\normalfont\normalsize\sffamily\bfseries}}% +\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-2.5ex plus -1ex minus -1ex}% +{0.5ex plus 0.5ex minus 0ex}{\normalfont\normalsize\sffamily\itshape}}% +\def\paragraph{\@startsection{paragraph}{4}{2\parindent}{-0ex plus -0.1ex minus -0.1ex}% +{0ex}{\normalfont\normalsize}}% +\fi\fi + + + + +%% ENVIRONMENTS +% "box" symbols at end of proofs +\def\IEEEQEDclosed{\mbox{\rule[0pt]{1.3ex}{1.3ex}}} % for a filled box +% V1.6 some journals use an open box instead that will just fit around a closed one +\def\IEEEQEDopen{{\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.2pt}\fbox{\rule[0pt]{0pt}{1.3ex}\rule[0pt]{1.3ex}{0pt}}}} +\ifCLASSOPTIONcompsoc +\def\IEEEQED{\IEEEQEDopen} % default to open for compsoc +\else +\def\IEEEQED{\IEEEQEDclosed} % otherwise default to closed +\fi + +% v1.7 name change to avoid namespace collision with amsthm. Also add support +% for an optional argument. 
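+% A hedged usage sketch (hypothetical body text, not part of the class): the
+% optional argument replaces the default italic run-in label, e.g.
+%   \begin{IEEEproof}[Proof of the Claim] ... \end{IEEEproof}
+% while a plain \begin{IEEEproof} falls back to \IEEEproofname.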
+\def\IEEEproof{\@ifnextchar[{\@IEEEproof}{\@IEEEproof[\IEEEproofname]}} +\def\@IEEEproof[#1]{\par\noindent\hspace{2em}{\itshape #1: }} +\def\endIEEEproof{\hspace*{\fill}~\IEEEQED\par} + + +%\itemindent is set to \z@ by list, so define new temporary variable +\newdimen\@IEEEtmpitemindent +\def\@begintheorem#1#2{\@IEEEtmpitemindent\itemindent\topsep 0pt\rmfamily\trivlist% + \item[\hskip \labelsep{\indent\itshape #1\ #2:}]\itemindent\@IEEEtmpitemindent} +\def\@opargbegintheorem#1#2#3{\@IEEEtmpitemindent\itemindent\topsep 0pt\rmfamily \trivlist% +% V1.6 IEEE is back to using () around theorem names which are also in italics +% Thanks to Christian Peel for reporting this. + \item[\hskip\labelsep{\indent\itshape #1\ #2\ (#3):}]\itemindent\@IEEEtmpitemindent} +% V1.7 remove bogus \unskip that caused equations in theorems to collide with +% lines below. +\def\@endtheorem{\endtrivlist} + +% V1.6 +% display command for the section the theorem is in - so that \thesection +% is not used as this will be in Roman numerals when we want arabic. +% LaTeX2e uses \def\@thmcounter#1{\noexpand\arabic{#1}} for the theorem number +% (second part) display and \def\@thmcountersep{.} as a separator. +% V1.7 intercept calls to the section counter and reroute to \@IEEEthmcounterinsection +% to allow \appendix(ices} to override as needed. +% +% special handler for sections, allows appendix(ices) to override +\gdef\@IEEEthmcounterinsection#1{\arabic{#1}} +% string macro +\edef\@IEEEstringsection{section} + +% redefine the #1#2[#3] form of newtheorem to use a hook to \@IEEEthmcounterinsection +% if section in_counter is used +\def\@xnthm#1#2[#3]{% + \expandafter\@ifdefinable\csname #1\endcsname + {\@definecounter{#1}\@newctr{#1}[#3]% + \edef\@IEEEstringtmp{#3} + \ifx\@IEEEstringtmp\@IEEEstringsection + \expandafter\xdef\csname the#1\endcsname{% + \noexpand\@IEEEthmcounterinsection{#3}\@thmcountersep + \@thmcounter{#1}}% + \else + \expandafter\xdef\csname the#1\endcsname{% + \expandafter\noexpand\csname the#3\endcsname \@thmcountersep + \@thmcounter{#1}}% + \fi + \global\@namedef{#1}{\@thm{#1}{#2}}% + \global\@namedef{end#1}{\@endtheorem}}} + + + +%% SET UP THE DEFAULT PAGESTYLE +\ps@headings +\pagenumbering{arabic} + +% normally the page counter starts at 1 +\setcounter{page}{1} +% however, for peerreview the cover sheet is page 0 or page -1 +% (for duplex printing) +\ifCLASSOPTIONpeerreview + \if@twoside + \setcounter{page}{-1} + \else + \setcounter{page}{0} + \fi +\fi + +% standard book class behavior - let bottom line float up and down as +% needed when single sided +\ifCLASSOPTIONtwoside\else\raggedbottom\fi +% if two column - turn on twocolumn, allow word spacings to stretch more and +% enforce a rigid position for the last lines +\ifCLASSOPTIONtwocolumn +% the peer review option delays invoking twocolumn + \ifCLASSOPTIONpeerreview\else + \twocolumn + \fi +\sloppy +\flushbottom +\fi + + + + +% \APPENDIX and \APPENDICES definitions + +% This is the \@ifmtarg command from the LaTeX ifmtarg package +% by Peter Wilson (CUA) and Donald Arseneau +% \@ifmtarg is used to determine if an argument to a command +% is present or not. +% For instance: +% \@ifmtarg{#1}{\typeout{empty}}{\typeout{has something}} +% \@ifmtarg is used with our redefined \section command if +% \appendices is invoked. 
+% The command \section will behave slightly differently depending +% on whether the user specifies a title: +% \section{My appendix title} +% or not: +% \section{} +% This way, we can eliminate the blank lines where the title +% would be, and the unneeded : after Appendix in the table of +% contents +\begingroup +\catcode`\Q=3 +\long\gdef\@ifmtarg#1{\@xifmtarg#1QQ\@secondoftwo\@firstoftwo\@nil} +\long\gdef\@xifmtarg#1#2Q#3#4#5\@nil{#4} +\endgroup +% end of \@ifmtarg defs + + +% V1.7 +% command that allows the one time saving of the original definition +% of section to \@IEEEappendixsavesection for \appendix or \appendices +% we don't save \section here as it may be redefined later by other +% packages (hyperref.sty, etc.) +\def\@IEEEsaveoriginalsectiononce{\let\@IEEEappendixsavesection\section +\let\@IEEEsaveoriginalsectiononce\relax} + +% neat trick to grab and process the argument from \section{argument} +% we process differently if the user invoked \section{} with no +% argument (title) +% note we reroute the call to the old \section* +\def\@IEEEprocessthesectionargument#1{% +\@ifmtarg{#1}{% +\@IEEEappendixsavesection*{\appendixname~\thesectiondis}% +\addcontentsline{toc}{section}{\appendixname~\thesection}}{% +\@IEEEappendixsavesection*{\appendixname~\thesectiondis \\* #1}% +\addcontentsline{toc}{section}{\appendixname~\thesection: #1}}} + +% we use this if the user calls \section{} after +% \appendix-- which has no meaning. So, we ignore the +% command and its argument. Then, warn the user. +\def\@IEEEdestroythesectionargument#1{\typeout{** WARNING: Ignoring useless +\protect\section\space in Appendix (line \the\inputlineno).}} + + +% remember \thesection forms will be displayed in \ref calls +% and in the Table of Contents. +% The \sectiondis form is used in the actual heading itself + +% appendix command for one single appendix +% normally has no heading. 
However, if you want a +% heading, you can do so via the optional argument: +% \appendix[Optional Heading] +\def\appendix{\relax} +\renewcommand{\appendix}[1][]{\@IEEEsaveoriginalsectiononce\par + % v1.6 keep hyperref's identifiers unique + \gdef\theHsection{Appendix.A}% + % v1.6 adjust hyperref's string name for the section + \xdef\Hy@chapapp{appendix}% + \setcounter{section}{0}% + \setcounter{subsection}{0}% + \setcounter{subsubsection}{0}% + \setcounter{paragraph}{0}% + \gdef\thesection{A}% + \gdef\thesectiondis{}% + \gdef\thesubsection{\Alph{subsection}}% + \gdef\@IEEEthmcounterinsection##1{A} + \refstepcounter{section}% update the \ref counter + \@ifmtarg{#1}{\@IEEEappendixsavesection*{\appendixname}% + \addcontentsline{toc}{section}{\appendixname}}{% + \@IEEEappendixsavesection*{\appendixname~\\* #1}% + \addcontentsline{toc}{section}{\appendixname: #1}}% + % redefine \section command for appendix + % leave \section* as is + \def\section{\@ifstar{\@IEEEappendixsavesection*}{% + \@IEEEdestroythesectionargument}}% throw out the argument + % of the normal form +} + + + +% appendices command for multiple appendices +% user then calls \section with an argument (possibly empty) to +% declare the individual appendices +\def\appendices{\@IEEEsaveoriginalsectiononce\par + % v1.6 keep hyperref's identifiers unique + \gdef\theHsection{Appendix.\Alph{section}}% + % v1.6 adjust hyperref's string name for the section + \xdef\Hy@chapapp{appendix}% + \setcounter{section}{-1}% we want \refstepcounter to use section 0 + \setcounter{subsection}{0}% + \setcounter{subsubsection}{0}% + \setcounter{paragraph}{0}% + \ifCLASSOPTIONromanappendices% + \gdef\thesection{\Roman{section}}% + \gdef\thesectiondis{\Roman{section}}% + \@IEEEcompsocconfonly{\gdef\thesectiondis{\Roman{section}.}}% + \gdef\@IEEEthmcounterinsection##1{A\arabic{##1}} + \else% + \gdef\thesection{\Alph{section}}% + \gdef\thesectiondis{\Alph{section}}% + \@IEEEcompsocconfonly{\gdef\thesectiondis{\Alph{section}.}}% + \gdef\@IEEEthmcounterinsection##1{\Alph{##1}} + \fi% + \refstepcounter{section}% update the \ref counter + \setcounter{section}{0}% NEXT \section will be the FIRST appendix + % redefine \section command for appendices + % leave \section* as is + \def\section{\@ifstar{\@IEEEappendixsavesection*}{% process the *-form + \refstepcounter{section}% or is a new section so, + \@IEEEprocessthesectionargument}}% process the argument + % of the normal form +} + + + +% \IEEEPARstart +% Definition for the big two line drop cap letter at the beginning of the +% first paragraph of journal papers. The first argument is the first letter +% of the first word, the second argument is the remaining letters of the +% first word which will be rendered in upper case. +% In V1.6 this has been completely rewritten to: +% +% 1. no longer have problems when the user begins an environment +% within the paragraph that uses \IEEEPARstart. +% 2. auto-detect and use the current font family +% 3. revise handling of the space at the end of the first word so that +% interword glue will now work as normal. +% 4. produce correctly aligned edges for the (two) indented lines. +% +% We generalize things via control macros - playing with these is fun too. +% +% V1.7 added more control macros to make it easy for IEEEtrantools.sty users +% to change the font style. 
+
+% the number of lines that are indented to clear it
+% may need to increase if using descenders
+\def\@IEEEPARstartDROPLINES{2}
+% minimum number of lines left on a page to allow a \@IEEEPARstart
+% Does not take into consideration rubber shrink, so it tends to
+% be overly cautious
+\def\@IEEEPARstartMINPAGELINES{2}
+% V1.7 the height of the drop cap is adjusted to match the height of this text
+% in the current font (when \IEEEPARstart is called).
+\def\@IEEEPARstartHEIGHTTEXT{T}
+% the depth the letter is lowered below the baseline
+% the height (and size) of the letter is determined by the sum
+% of this value and the height of the \@IEEEPARstartHEIGHTTEXT in the current
+% font. It is a good idea to set this value in terms of the baselineskip
+% so that it can respond to changes therein.
+\def\@IEEEPARstartDROPDEPTH{1.1\baselineskip}
+% V1.7 the font the drop cap will be rendered in,
+% can take zero or one argument.
+\def\@IEEEPARstartFONTSTYLE{\bfseries}
+% V1.7 any additional, non-font related commands needed to modify
+% the drop cap letter, can take zero or one argument.
+\def\@IEEEPARstartCAPSTYLE{\MakeUppercase}
+% V1.7 the font that will be used to render the rest of the word,
+% can take zero or one argument.
+\def\@IEEEPARstartWORDFONTSTYLE{\relax}
+% V1.7 any additional, non-font related commands needed to modify
+% the rest of the word, can take zero or one argument.
+\def\@IEEEPARstartWORDCAPSTYLE{\MakeUppercase}
+% This is the horizontal separation distance from the drop letter to the main text.
+% Lengths that depend on the font (e.g., ex, em, etc.) will be referenced
+% to the font that is active when \IEEEPARstart is called.
+\def\@IEEEPARstartSEP{0.15em}
+% V1.7 horizontal offset applied to the left of the drop cap.
+\def\@IEEEPARstartHOFFSET{0em}
+% V1.7 Italic correction command applied at the end of the drop cap.
+\def\@IEEEPARstartITLCORRECT{\/}
+
+% V1.7 compsoc uses nonbold drop cap and small caps word style
+\ifCLASSOPTIONcompsoc
+\def\@IEEEPARstartFONTSTYLE{\mdseries}
+\def\@IEEEPARstartWORDFONTSTYLE{\scshape}
+\def\@IEEEPARstartWORDCAPSTYLE{\relax}
+\fi
+
+% definition of \IEEEPARstart
+% THIS IS A CONTROLLED SPACING AREA, DO NOT ALLOW SPACES WITHIN THESE LINES
+%
+% The token \@IEEEPARstartfont will be globally defined after the first use
+% of \IEEEPARstart and will be a font command which creates the big letter
+% The first argument is the first letter of the first word and the second
+% argument is the rest of the first word(s).
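+%
+% A hedged usage sketch (hypothetical opening words, not part of the class):
+%   \IEEEPARstart{T}{his} demonstration paper begins with a drop cap.
+% renders the T as a large two-line drop cap and, with the default
+% \@IEEEPARstartWORDCAPSTYLE, prints the rest of the first word as "HIS".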
+\def\IEEEPARstart#1#2{\par{%
+% if this page does not have enough space, break it and let's start
+% on a new one
+\@IEEEtranneedspace{\@IEEEPARstartMINPAGELINES\baselineskip}{\relax}%
+% V1.7 move this up here in case the user uses \textbf for \@IEEEPARstartFONTSTYLE
+% which uses command \leavevmode which causes an unwanted \indent to be issued
+\noindent
+% calculate the desired height of the big letter
+% it extends from the top of \@IEEEPARstartHEIGHTTEXT in the current font
+% down to \@IEEEPARstartDROPDEPTH below the current baseline
+\settoheight{\@IEEEtrantmpdimenA}{\@IEEEPARstartHEIGHTTEXT}%
+\addtolength{\@IEEEtrantmpdimenA}{\@IEEEPARstartDROPDEPTH}%
+% extract the name of the current font in bold
+% and place it in \@IEEEPARstartFONTNAME
+\def\@IEEEPARstartGETFIRSTWORD##1 ##2\relax{##1}%
+{\@IEEEPARstartFONTSTYLE{\selectfont\edef\@IEEEPARstartFONTNAMESPACE{\fontname\font\space}%
+\xdef\@IEEEPARstartFONTNAME{\expandafter\@IEEEPARstartGETFIRSTWORD\@IEEEPARstartFONTNAMESPACE\relax}}}%
+% define a font based on this name with a point size equal to the desired
+% height of the drop letter
+\font\@IEEEPARstartsubfont\@IEEEPARstartFONTNAME\space at \@IEEEtrantmpdimenA\relax%
+% save this value as a counter (integer) value (sp points)
+\@IEEEtrantmpcountA=\@IEEEtrantmpdimenA%
+% now get the height of the actual letter produced by this font size
+\settoheight{\@IEEEtrantmpdimenB}{\@IEEEPARstartsubfont\@IEEEPARstartCAPSTYLE{#1}}%
+% If something bogus happens like the first argument is empty or the
+% current font is strange, do not allow a zero height.
+\ifdim\@IEEEtrantmpdimenB=0pt\relax%
+\typeout{** WARNING: IEEEPARstart drop letter has zero height! (line \the\inputlineno)}%
+\typeout{ Forcing the drop letter font size to 10pt.}%
+\@IEEEtrantmpdimenB=10pt%
+\fi%
+% and store it as a counter
+\@IEEEtrantmpcountB=\@IEEEtrantmpdimenB%
+% Since a font size doesn't exactly correspond to the height of the capital
+% letters in that font, the actual height of the letter, \@IEEEtrantmpcountB,
+% will be less than that desired, \@IEEEtrantmpcountA
+% we need to raise the font size, \@IEEEtrantmpdimenA
+% by \@IEEEtrantmpcountA / \@IEEEtrantmpcountB
+% But, TeX doesn't have floating point division, so we have to use integer
+% division. Hence the use of the counters.
+% We need to reduce the denominator so that the loss of the remainder will
+% have minimal effect on the accuracy of the result
+\divide\@IEEEtrantmpcountB by 200%
+\divide\@IEEEtrantmpcountA by \@IEEEtrantmpcountB%
+% Then reequalize things when we use TeX's ability to multiply by
+% floating point values
+\@IEEEtrantmpdimenB=0.005\@IEEEtrantmpdimenA%
+\multiply\@IEEEtrantmpdimenB by \@IEEEtrantmpcountA%
+% \@IEEEPARstartfont is globally set to the calculated font of the big letter
+% We need to carry this out of the local calculation area to create the
+% big letter.
+\global\font\@IEEEPARstartfont\@IEEEPARstartFONTNAME\space at \@IEEEtrantmpdimenB% +% Now set \@IEEEtrantmpdimenA to the width of the big letter +% We need to carry this out of the local calculation area to set the +% hanging indent +\settowidth{\global\@IEEEtrantmpdimenA}{\@IEEEPARstartfont +\@IEEEPARstartCAPSTYLE{#1\@IEEEPARstartITLCORRECT}}}% +% end of the isolated calculation environment +% add in the extra clearance we want +\advance\@IEEEtrantmpdimenA by \@IEEEPARstartSEP\relax% +% add in the optional offset +\advance\@IEEEtrantmpdimenA by \@IEEEPARstartHOFFSET\relax% +% V1.7 don't allow negative offsets to produce negative hanging indents +\@IEEEtrantmpdimenB\@IEEEtrantmpdimenA +\ifnum\@IEEEtrantmpdimenB < 0 \@IEEEtrantmpdimenB 0pt\fi +% \@IEEEtrantmpdimenA has the width of the big letter plus the +% separation space and \@IEEEPARstartfont is the font we need to use +% Now, we make the letter and issue the hanging indent command +% The letter is placed in a box of zero width and height so that other +% text won't be displaced by it. +\hangindent\@IEEEtrantmpdimenB\hangafter=-\@IEEEPARstartDROPLINES% +\makebox[0pt][l]{\hspace{-\@IEEEtrantmpdimenA}% +\raisebox{-\@IEEEPARstartDROPDEPTH}[0pt][0pt]{\hspace{\@IEEEPARstartHOFFSET}% +\@IEEEPARstartfont\@IEEEPARstartCAPSTYLE{#1\@IEEEPARstartITLCORRECT}% +\hspace{\@IEEEPARstartSEP}}}% +{\@IEEEPARstartWORDFONTSTYLE{\@IEEEPARstartWORDCAPSTYLE{\selectfont#2}}}} + + + + + + +% determines if the space remaining on a given page is equal to or greater +% than the specified space of argument one +% if not, execute argument two (only if the remaining space is greater than zero) +% and issue a \newpage +% +% example: \@IEEEtranneedspace{2in}{\vfill} +% +% Does not take into consideration rubber shrinkage, so it tends to +% be overly cautious +% Based on an example posted by Donald Arseneau +% Note this macro uses \@IEEEtrantmpdimenB internally for calculations, +% so DO NOT PASS \@IEEEtrantmpdimenB to this routine +% if you need a dimen register, import with \@IEEEtrantmpdimenA instead +\def\@IEEEtranneedspace#1#2{\penalty-100\begingroup%shield temp variable +\@IEEEtrantmpdimenB\pagegoal\advance\@IEEEtrantmpdimenB-\pagetotal% space left +\ifdim #1>\@IEEEtrantmpdimenB\relax% not enough space left +\ifdim\@IEEEtrantmpdimenB>\z@\relax #2\fi% +\newpage% +\fi\endgroup} + + + +% IEEEbiography ENVIRONMENT +% Allows user to enter biography leaving place for picture (adapts to font size) +% As of V1.5, a new optional argument allows you to have a real graphic! +% V1.5 and later also fixes the "colliding biographies" which could happen when a +% biography's text was shorter than the space for the photo. 
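+% A hedged usage sketch (hypothetical name and graphics file; the optional
+% argument fills the 1.0in x 1.25in photo area defined below, and omitting
+% it produces the default PLACE PHOTO HERE box):
+%   \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in]{jdoe}}]{Jane Doe}
+%   Biography text ...
+%   \end{IEEEbiography}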
+% MDS 7/2001
+% V1.6 prevent multiple biographies from making multiple TOC entries
+\newif\if@IEEEbiographyTOCentrynotmade
+\global\@IEEEbiographyTOCentrynotmadetrue
+
+% biography counter so hyperref can jump directly to the biographies
+% and not just the previous section
+\newcounter{IEEEbiography}
+\setcounter{IEEEbiography}{0}
+
+% photo area size
+\def\@IEEEBIOphotowidth{1.0in} % width of the biography photo area
+\def\@IEEEBIOphotodepth{1.25in} % depth (height) of the biography photo area
+% area cleared for photo
+\def\@IEEEBIOhangwidth{1.14in} % width cleared for the biography photo area
+\def\@IEEEBIOhangdepth{1.25in} % depth cleared for the biography photo area
+ % actual depth will be a multiple of
+ % \baselineskip, rounded up
+\def\@IEEEBIOskipN{4\baselineskip}% nominal value of the vskip above the biography
+
+\newenvironment{IEEEbiography}[2][]{\normalfont\@IEEEcompsoconly{\sffamily}\footnotesize%
+\unitlength 1in\parskip=0pt\par\parindent 1em\interlinepenalty500%
+% we need enough space to support the hanging indent
+% the nominal value of the spacer
+% and one extra line for good measure
+\@IEEEtrantmpdimenA=\@IEEEBIOhangdepth%
+\advance\@IEEEtrantmpdimenA by \@IEEEBIOskipN%
+\advance\@IEEEtrantmpdimenA by 1\baselineskip%
+% if this page does not have enough space, break it and let's start
+% with a new one
+\@IEEEtranneedspace{\@IEEEtrantmpdimenA}{\relax}%
+% the nominal spacer can stretch but not shrink; use 1fil so the user can override the stretch with \vfill
+\vskip \@IEEEBIOskipN plus 1fil minus 0\baselineskip%
+% the default box for where the photo goes
+\def\@IEEEtempbiographybox{{\setlength{\fboxsep}{0pt}\framebox{%
+\begin{minipage}[b][\@IEEEBIOphotodepth][c]{\@IEEEBIOphotowidth}\centering PLACE\\ PHOTO\\ HERE \end{minipage}}}}%
+%
+% detect if the optional argument was supplied; this requires the
+% \@ifmtarg command as defined in the appendix section above
+% and if so, override the default box with what they want
+\@ifmtarg{#1}{\relax}{\def\@IEEEtempbiographybox{\mbox{\begin{minipage}[b][\@IEEEBIOphotodepth][c]{\@IEEEBIOphotowidth}%
+\centering%
+#1%
+\end{minipage}}}}% end if optional argument supplied
+% Make an entry into the table of contents only if we have not done so before
+\if@IEEEbiographyTOCentrynotmade%
+% link labels to the biography counter so hyperref will jump
+% to the biography, not the previous section
+\setcounter{IEEEbiography}{-1}%
+\refstepcounter{IEEEbiography}%
+\addcontentsline{toc}{section}{Biographies}%
+\global\@IEEEbiographyTOCentrynotmadefalse%
+\fi%
+% one more biography
+\refstepcounter{IEEEbiography}%
+% Make an entry for this name into the table of contents
+\addcontentsline{toc}{subsection}{#2}%
+% V1.6 properly handle if a new paragraph should occur while the
+% hanging indent is still active. Do this by redefining \par so
+% that it will not start a new paragraph. (But it will appear to the
+% user as if it did.) Also, strip any leading pars, newlines, or spaces.
+\let\@IEEEBIOORGparCMD=\par% save the original \par command +\edef\par{\hfil\break\indent}% the new \par will not be a "real" \par +\settoheight{\@IEEEtrantmpdimenA}{\@IEEEtempbiographybox}% get height of biography box +\@IEEEtrantmpdimenB=\@IEEEBIOhangdepth% +\@IEEEtrantmpcountA=\@IEEEtrantmpdimenB% countA has the hang depth +\divide\@IEEEtrantmpcountA by \baselineskip% calculates lines needed to produce the hang depth +\advance\@IEEEtrantmpcountA by 1% ensure we overestimate +% set the hanging indent +\hangindent\@IEEEBIOhangwidth% +\hangafter-\@IEEEtrantmpcountA% +% reference the top of the photo area to the top of a capital T +\settoheight{\@IEEEtrantmpdimenB}{\mbox{T}}% +% set the photo box, give it zero width and height so as not to disturb anything +\noindent\makebox[0pt][l]{\hspace{-\@IEEEBIOhangwidth}\raisebox{\@IEEEtrantmpdimenB}[0pt][0pt]{% +\raisebox{-\@IEEEBIOphotodepth}[0pt][0pt]{\@IEEEtempbiographybox}}}% +% now place the author name and begin the bio text +\noindent\textbf{#2\ }\@IEEEgobbleleadPARNLSP}{\relax\let\par=\@IEEEBIOORGparCMD\par% +% 7/2001 V1.5 detect when the biography text is shorter than the photo area +% and pad the unused area - preventing a collision from the next biography entry +% MDS +\ifnum \prevgraf <\@IEEEtrantmpcountA\relax% detect when the biography text is shorter than the photo + \advance\@IEEEtrantmpcountA by -\prevgraf% calculate how many lines we need to pad + \advance\@IEEEtrantmpcountA by -1\relax% we compensate for the fact that we indented an extra line + \@IEEEtrantmpdimenA=\baselineskip% calculate the length of the padding + \multiply\@IEEEtrantmpdimenA by \@IEEEtrantmpcountA% + \noindent\rule{0pt}{\@IEEEtrantmpdimenA}% insert an invisible support strut +\fi% +\par\normalfont} + + + +% V1.6 +% added biography without a photo environment +\newenvironment{IEEEbiographynophoto}[1]{% +% Make an entry into the table of contents only if we have not done so before +\if@IEEEbiographyTOCentrynotmade% +% link labels to the biography counter so hyperref will jump +% to the biography, not the previous section +\setcounter{IEEEbiography}{-1}% +\refstepcounter{IEEEbiography}% +\addcontentsline{toc}{section}{Biographies}% +\global\@IEEEbiographyTOCentrynotmadefalse% +\fi% +% one more biography +\refstepcounter{IEEEbiography}% +% Make an entry for this name into the table of contents +\addcontentsline{toc}{subsection}{#1}% +\normalfont\@IEEEcompsoconly{\sffamily}\footnotesize\interlinepenalty500% +\vskip 4\baselineskip plus 1fil minus 0\baselineskip% +\parskip=0pt\par% +\noindent\textbf{#1\ }\@IEEEgobbleleadPARNLSP}{\relax\par\normalfont} + + +% provide the user with some old font commands +% got this from article.cls +\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm} +\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf} +\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt} +\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf} +\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit} +\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl} +\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc} +\DeclareRobustCommand*\cal{\@fontswitch\relax\mathcal} +\DeclareRobustCommand*\mit{\@fontswitch\relax\mathnormal} + + +% SPECIAL PAPER NOTICE COMMANDS +% +% holds the special notice text +\def\@IEEEspecialpapernotice{\relax} + +% for special papers, like invited papers, the user can do: +% \IEEEspecialpapernotice{(Invited Paper)} before \maketitle +\def\IEEEspecialpapernotice#1{\ifCLASSOPTIONconference% 
+\def\@IEEEspecialpapernotice{{\sublargesize\textit{#1}\vspace*{1em}}}%
+\else%
+\def\@IEEEspecialpapernotice{{\\*[1.5ex]\sublargesize\textit{#1}}\vspace*{-2ex}}%
+\fi}
+
+
+
+% PUBLISHER ID COMMANDS
+% to insert a publisher's ID footer
+% V1.6 \IEEEpubid has been changed so that the change in page size and style
+% occurs in \maketitle. \IEEEpubid must now be issued prior to \maketitle
+% use \IEEEpubidadjcol as before - in the second column of the title page
+% These changes allow \maketitle to take the reduced page height into
+% consideration when dynamically setting the space between the author
+% names and the maintext.
+%
+% the amount the main text is pulled up to make room for the
+% publisher's ID footer
+% IEEE uses about 1.3\baselineskip for journals,
+% dynamic title spacing will clean up the fraction
+\def\@IEEEpubidpullup{1.3\baselineskip}
+\ifCLASSOPTIONtechnote
+% for technotes it must be an integer multiple of \baselineskip as there can be no
+% dynamic title spacing for two column mode technotes (the title is in the
+% first column) and we should maintain an integer number of lines in the
+% second column
+% There are some examples (such as older issues of "Transactions on
+% Information Theory") in which IEEE really pulls the text off the ID for
+% technotes - about 0.55in (or 4\baselineskip). We'll use 2\baselineskip
+% and call it even.
+\def\@IEEEpubidpullup{2\baselineskip}
+\fi
+
+% V1.7 compsoc does not use a pullup
+\ifCLASSOPTIONcompsoc
+\def\@IEEEpubidpullup{0pt}
+\fi
+
+% holds the ID text
+\def\@IEEEpubid{\relax}
+
+% flag so \maketitle can tell if \IEEEpubid was called
+\newif\if@IEEEusingpubid
+\global\@IEEEusingpubidfalse
+% issue this command in the page to have the ID at the bottom
+% V1.6 use before \maketitle
+\def\IEEEpubid#1{\def\@IEEEpubid{#1}\global\@IEEEusingpubidtrue}
+
+
+% command which will pull up (shorten) the column it is executed in
+% to make room for the publisher ID. Place in the second column of
+% the title page when using \IEEEpubid
+% Is smart enough not to do anything when in single column text or
+% if the user hasn't called \IEEEpubid
+% currently needed for the second column of a page with the
+% publisher ID. If not needed in future releases, please provide this
+% command and define it as \relax for backward compatibility
+% v1.6b do not allow command to operate if the peer review option has been
+% selected because \IEEEpubidadjcol will not be on the cover page.
+% V1.7 do nothing if compsoc
+\def\IEEEpubidadjcol{\ifCLASSOPTIONcompsoc\else\ifCLASSOPTIONpeerreview\else
+\if@twocolumn\if@IEEEusingpubid\enlargethispage{-\@IEEEpubidpullup}\fi\fi\fi\fi}
+
+% Special thanks to Peter Wilson, Daniel Luecking, and the other
+% gurus at comp.text.tex, for helping me to understand how best to
+% implement the IEEEpubid command in LaTeX.
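+%
+% A hedged usage sketch (hypothetical ID string; the journal supplies the
+% exact text). Note the required ordering around \maketitle:
+%   \IEEEpubid{0000--0000/00\$00.00~\copyright~20XX IEEE}
+%   \maketitle
+%   ... first column of the title page ...
+%   \IEEEpubidadjcol % shorten the second title page column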
+ + + +%% Lockout some commands under various conditions + +% general purpose bit bucket +\newsavebox{\@IEEEtranrubishbin} + +% flags to prevent multiple warning messages +\newif\if@IEEEWARNthanks +\newif\if@IEEEWARNIEEEPARstart +\newif\if@IEEEWARNIEEEbiography +\newif\if@IEEEWARNIEEEbiographynophoto +\newif\if@IEEEWARNIEEEpubid +\newif\if@IEEEWARNIEEEpubidadjcol +\newif\if@IEEEWARNIEEEmembership +\newif\if@IEEEWARNIEEEaftertitletext +\@IEEEWARNthankstrue +\@IEEEWARNIEEEPARstarttrue +\@IEEEWARNIEEEbiographytrue +\@IEEEWARNIEEEbiographynophototrue +\@IEEEWARNIEEEpubidtrue +\@IEEEWARNIEEEpubidadjcoltrue +\@IEEEWARNIEEEmembershiptrue +\@IEEEWARNIEEEaftertitletexttrue + + +%% Lockout some commands when in various modes, but allow them to be restored if needed +%% +% save commands which might be locked out +% so that the user can later restore them if needed +\let\@IEEESAVECMDthanks\thanks +\let\@IEEESAVECMDIEEEPARstart\IEEEPARstart +\let\@IEEESAVECMDIEEEbiography\IEEEbiography +\let\@IEEESAVECMDendIEEEbiography\endIEEEbiography +\let\@IEEESAVECMDIEEEbiographynophoto\IEEEbiographynophoto +\let\@IEEESAVECMDendIEEEbiographynophoto\endIEEEbiographynophoto +\let\@IEEESAVECMDIEEEpubid\IEEEpubid +\let\@IEEESAVECMDIEEEpubidadjcol\IEEEpubidadjcol +\let\@IEEESAVECMDIEEEmembership\IEEEmembership +\let\@IEEESAVECMDIEEEaftertitletext\IEEEaftertitletext + + +% disable \IEEEPARstart when in draft mode +% This may have originally been done because the pre-V1.6 drop letter +% algorithm had problems with a non-unity baselinestretch +% At any rate, it seems too formal to have a drop letter in a draft +% paper. +\ifCLASSOPTIONdraftcls +\def\IEEEPARstart#1#2{#1#2\if@IEEEWARNIEEEPARstart\typeout{** ATTENTION: \noexpand\IEEEPARstart + is disabled in draft mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEPARstartfalse} +\fi +% and for technotes +\ifCLASSOPTIONtechnote +\def\IEEEPARstart#1#2{#1#2\if@IEEEWARNIEEEPARstart\typeout{** WARNING: \noexpand\IEEEPARstart + is locked out for technotes (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEPARstartfalse} +\fi + + +% lockout unneeded commands when in conference mode +\ifCLASSOPTIONconference +% when locked out, \thanks, \IEEEbiography, \IEEEbiographynophoto, \IEEEpubid, +% \IEEEmembership and \IEEEaftertitletext will all swallow their given text. +% \IEEEPARstart will output a normal character instead +% warn the user about these commands only once to prevent the console screen +% from filling up with redundant messages +\def\thanks#1{\if@IEEEWARNthanks\typeout{** WARNING: \noexpand\thanks + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNthanksfalse} +\def\IEEEPARstart#1#2{#1#2\if@IEEEWARNIEEEPARstart\typeout{** WARNING: \noexpand\IEEEPARstart + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEPARstartfalse} + + +% LaTeX treats environments and commands with optional arguments differently. +% the actual ("internal") command is stored as \\commandname +% (accessed via \csname\string\commandname\endcsname ) +% the "external" command \commandname is a macro with code to determine +% whether or not the optional argument is presented and to provide the +% default if it is absent. So, in order to save and restore such a command +% we would have to save and restore \\commandname as well. But, if LaTeX +% ever changes the way it names the internal names, the trick would break. +% Instead let us just define a new environment so that the internal +% name can be left undisturbed. 
+\newenvironment{@IEEEbogusbiography}[2][]{\if@IEEEWARNIEEEbiography\typeout{** WARNING: \noexpand\IEEEbiography + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEbiographyfalse% +\setbox\@IEEEtranrubishbin\vbox\bgroup}{\egroup\relax} +% and make biography point to our bogus biography +\let\IEEEbiography=\@IEEEbogusbiography +\let\endIEEEbiography=\end@IEEEbogusbiography + +\renewenvironment{IEEEbiographynophoto}[1]{\if@IEEEWARNIEEEbiographynophoto\typeout{** WARNING: \noexpand\IEEEbiographynophoto + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEbiographynophotofalse% +\setbox\@IEEEtranrubishbin\vbox\bgroup}{\egroup\relax} + +\def\IEEEpubid#1{\if@IEEEWARNIEEEpubid\typeout{** WARNING: \noexpand\IEEEpubid + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEpubidfalse} +\def\IEEEpubidadjcol{\if@IEEEWARNIEEEpubidadjcol\typeout{** WARNING: \noexpand\IEEEpubidadjcol + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEpubidadjcolfalse} +\def\IEEEmembership#1{\if@IEEEWARNIEEEmembership\typeout{** WARNING: \noexpand\IEEEmembership + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEmembershipfalse} +\def\IEEEaftertitletext#1{\if@IEEEWARNIEEEaftertitletext\typeout{** WARNING: \noexpand\IEEEaftertitletext + is locked out when in conference mode (line \the\inputlineno).}\fi\global\@IEEEWARNIEEEaftertitletextfalse} +\fi + + +% provide a way to restore the commands that are locked out +\def\IEEEoverridecommandlockouts{% +\typeout{** ATTENTION: Overriding command lockouts (line \the\inputlineno).}% +\let\thanks\@IEEESAVECMDthanks% +\let\IEEEPARstart\@IEEESAVECMDIEEEPARstart% +\let\IEEEbiography\@IEEESAVECMDIEEEbiography% +\let\endIEEEbiography\@IEEESAVECMDendIEEEbiography% +\let\IEEEbiographynophoto\@IEEESAVECMDIEEEbiographynophoto% +\let\endIEEEbiographynophoto\@IEEESAVECMDendIEEEbiographynophoto% +\let\IEEEpubid\@IEEESAVECMDIEEEpubid% +\let\IEEEpubidadjcol\@IEEESAVECMDIEEEpubidadjcol% +\let\IEEEmembership\@IEEESAVECMDIEEEmembership% +\let\IEEEaftertitletext\@IEEESAVECMDIEEEaftertitletext} + + + +% need a backslash character for typeout output +{\catcode`\|=0 \catcode`\\=12 +|xdef|@IEEEbackslash{\}} + + +% hook to allow easy disabling of all legacy warnings +\def\@IEEElegacywarn#1#2{\typeout{** ATTENTION: \@IEEEbackslash #1 is deprecated (line \the\inputlineno). 
+Use \@IEEEbackslash #2 instead.}} + + +% provide for legacy commands +\def\authorblockA{\@IEEElegacywarn{authorblockA}{IEEEauthorblockA}\IEEEauthorblockA} +\def\authorblockN{\@IEEElegacywarn{authorblockN}{IEEEauthorblockN}\IEEEauthorblockN} +\def\authorrefmark{\@IEEElegacywarn{authorrefmark}{IEEEauthorrefmark}\IEEEauthorrefmark} +\def\PARstart{\@IEEElegacywarn{PARstart}{IEEEPARstart}\IEEEPARstart} +\def\pubid{\@IEEElegacywarn{pubid}{IEEEpubid}\IEEEpubid} +\def\pubidadjcol{\@IEEElegacywarn{pubidadjcol}{IEEEpubidadjcol}\IEEEpubidadjcol} +\def\QED{\@IEEElegacywarn{QED}{IEEEQED}\IEEEQED} +\def\QEDclosed{\@IEEElegacywarn{QEDclosed}{IEEEQEDclosed}\IEEEQEDclosed} +\def\QEDopen{\@IEEElegacywarn{QEDopen}{IEEEQEDopen}\IEEEQEDopen} +\def\specialpapernotice{\@IEEElegacywarn{specialpapernotice}{IEEEspecialpapernotice}\IEEEspecialpapernotice} + + + +% provide for legacy environments +\def\biography{\@IEEElegacywarn{biography}{IEEEbiography}\IEEEbiography} +\def\biographynophoto{\@IEEElegacywarn{biographynophoto}{IEEEbiographynophoto}\IEEEbiographynophoto} +\def\keywords{\@IEEElegacywarn{keywords}{IEEEkeywords}\IEEEkeywords} +\def\endbiography{\endIEEEbiography} +\def\endbiographynophoto{\endIEEEbiographynophoto} +\def\endkeywords{\endIEEEkeywords} + + +% provide for legacy IED commands/lengths when possible +\let\labelindent\IEEElabelindent +\def\calcleftmargin{\@IEEElegacywarn{calcleftmargin}{IEEEcalcleftmargin}\IEEEcalcleftmargin} +\def\setlabelwidth{\@IEEElegacywarn{setlabelwidth}{IEEEsetlabelwidth}\IEEEsetlabelwidth} +\def\usemathlabelsep{\@IEEElegacywarn{usemathlabelsep}{IEEEusemathlabelsep}\IEEEusemathlabelsep} +\def\iedlabeljustifyc{\@IEEElegacywarn{iedlabeljustifyc}{IEEEiedlabeljustifyc}\IEEEiedlabeljustifyc} +\def\iedlabeljustifyl{\@IEEElegacywarn{iedlabeljustifyl}{IEEEiedlabeljustifyl}\IEEEiedlabeljustifyl} +\def\iedlabeljustifyr{\@IEEElegacywarn{iedlabeljustifyr}{IEEEiedlabeljustifyr}\IEEEiedlabeljustifyr} + + + +% let \proof use the IEEEtran version even after amsthm is loaded +% \proof is now deprecated in favor of \IEEEproof +\AtBeginDocument{\def\proof{\@IEEElegacywarn{proof}{IEEEproof}\IEEEproof}\def\endproof{\endIEEEproof}} + +% V1.7 \overrideIEEEmargins is no longer supported. +\def\overrideIEEEmargins{% +\typeout{** WARNING: \string\overrideIEEEmargins \space no longer supported (line \the\inputlineno).}% +\typeout{** Use the \string\CLASSINPUTinnersidemargin, \string\CLASSINPUToutersidemargin \space controls instead.}} + + +\endinput + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%% End of IEEEtran.cls %%%%%%%%%%%%%%%%%%%%%%%%%%%% +% That's all folks! 
+ Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swarm.png and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swarm.png differ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.bib tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.bib --- tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.bib 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.bib 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,30 @@ +@INPROCEEDINGS{hemminger:lca2005, + AUTHOR = "Stephen Hemminger", + TITLE = "Network Emulation with NetEm", + BOOKTITLE = "Proceedings of the 6th Australia's National Linux + Conference (LCA2005), Canberra, Australia.", + MONTH = "April", + YEAR = {2005} +} + +@MISC{devera:htb, + AUTHOR = "Martin Devera", + TITLE = "HTB home", + HOWPUBLISHED={\url{http://luxik.cdi.cz/~devik/qos/htb/}} +} + +@INPROCEEDINGS{perala:tridentcom2010, + AUTHOR = "Pekka H. J. Per{\"a}l{\"a} and Jori P. Paananen and Milton + Mukhopadhyay and Jukka-Pekka Laulajainen", + TITLE = "A Novel Testbed for P2P Networks", + BOOKTITLE = "Testbeds and Research + Infrastructures. Development of Networks and + Communities: 6th International ICST Conference, + TridentCom 2010", + PAGES = "69-83", + ORGANIZATION = "The Institute for Computer Sciences, Social + Informatics and Telecommunications Engineering + (ICST)", + MONTH = "May", + YEAR = {2010} +} Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.pdf and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.pdf differ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.tex tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.tex --- tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.tex 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/mfold-article/swift-sigicn-jpp.tex 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,255 @@ +\documentclass[conference]{IEEEtran} +\usepackage{graphicx} +\usepackage{url} +\usepackage{stfloats} +\usepackage[percent]{overpic} +\bstctlcite{IEEEexample:BSTcontrol} + +\begin{document} + +\title{Manifold: $O(N^2)$ testing of network protocols} + +\author{\IEEEauthorblockN{XXX} +\IEEEauthorblockA{XXX\\ +XXX,\\ XXX \\ +Email: XXX} +\and +\IEEEauthorblockN{XXX} +\IEEEauthorblockA{XXX\\ +XXX\\ +XXX} +\and +\IEEEauthorblockN{XXX} +\IEEEauthorblockA{XXX\\ +XXX,\\ XXX \\ +Email: XXX} +} +%\author{\IEEEauthorblockN{Victor~Grishchenko} +%\IEEEauthorblockA{Delft University of Technology\\ +%Mekelweg 4, Delft,\\ The Netherlands \\ +%Email: victor.grishchenko@gmail.com} +%\and +%\IEEEauthorblockN{Jori~Paananen} +%\IEEEauthorblockA{VTT Technical Research Centre of Finland\\ +%Vuorimiehentie 3, Espoo, Finland\\ +%Email: jori.paananen@vtt.fi} +%\and +%\IEEEauthorblockN{J. Pouwelse} +%\IEEEauthorblockA{Delft University of Technology\\ +%Mekelweg 4, Delft,\\ The Netherlands \\ +%Email: j.a.pouwelse@ewi.tudelft.nl} +%} + +\maketitle + +\begin{abstract} +While developing a UDP-based peer-to-peer transport protocol, we faced the problem of testing the implementation, its state machine, and congestion control algorithms. The problem is known to be fundamentally hard. Discoveries of decades-old bugs in TCP/IP stacks give a good illustration to this. 
Not being satisfied with classic methods, we have created the Manifold framework for automated massively parallel testing. Our main challenge was the combinatorial number of diverse network conditions, protocol states, and code paths affecting the implementation's behavior. By running traffic flows between every pair of nodes, Manifold covers $O(N^2)$ combinations of simulated and/or real network conditions, thus performing massive case coverage in limited time. Reports, graphs, and a full system-wide event log allow the developer to trace code paths and investigate problems. Being integrated into the code/build/test loop, Manifold instantly reveals both progress and regressions and enables rapid iterations on the code.
+\end{abstract}
+% jori 21.4: Should the last sentence above be 'Manifold instantly
+% reveals both progress and regressions and enables rapid iterations
+% on the code.'
+
+\section{Introduction}
+The development and testing of network protocols is made difficult by the non-determinism of distributed systems. Congestion control is one of the most complicated topics, as the workings of the algorithms heavily depend on race conditions, packet losses, and other peculiarities of network behavior. Put in somewhat different conditions, ``proven'' code might turn ``problematic'', as was the case with the famous LFN problem~\cite{lfn1,lfn2} of TCP. Bugs in TCP implementations are still being found to this day~\cite{tcp-bug}, despite the extensive use and testing of the past 30 years.
+New TCP congestion control algorithms are normally tested in the settings of dumbbell and/or parking-lot~\cite{tcp-eval} simulated network topologies, as well as in the wild. The dumbbell setting allows testing stream behavior while competing with other flows for a single bottleneck; the parking-lot topology simulates a sequence of bottlenecks.
+
+We have developed a multiparty (swarming) transport protocol~\cite{swift} using the LEDBAT~\cite{ledbat} less-than-best-effort (``scavenger'') congestion control algorithm. We had to test the implementation's behavior in a swarm, an aspect not addressed by the classic methods.
+We also found that those methods do not allow fully testing a protocol and its implementation against various peculiarities of network behavior.
+
+As an illustrative anecdote, one of the authors was debugging the implementation on an ordinary DSL line when its uplink losses suddenly went to 10\% because of some technical problem on the ISP side. Regular web browsing was unaffected by the issue, as TCP is highly resilient to acknowledgement losses on the reverse (i.e. receiver to sender) path. Thus, the search for a non-existent regression lasted for a day, until the author decided to upload some photos, which finally uncovered the issue. Informally, a network might be ``special'' or simply ``broken'' in many different ways, and we needed a systematic way of checking our code against those peculiarities.
+
+As any change in the code might cause unintended effects in particular network conditions, we needed a fast way of checking those effects to track our progress as well as regressions.
+While unit tests check for the \emph{correctness} of a deterministic result given certain inputs, we needed massive case coverage of network conditions to ensure the results are \emph{acceptable} in every particular case.
+Ideally, a test run had to be integrated into the regular code-build-test loop, similarly to unit testing.
+
+Thus, we came up with the idea of $O(N^{2})$ testing, where $N$ real or emulated nodes represent different network conditions (high RTT, jitter, losses, NAT, asymmetry, etc.). During one test run, we send traffic flows between every pair of nodes, ideally covering $O(N^{2})$ combinations of network conditions, such as ``RTT and jitter'' or ``NAT and losses''.
+The testing setup had to run tests and present the resulting statistics comprehensibly and quickly, to allow for repeated testing runs.
+
+\section{Manifold test suite}
+
+% jori 21.4: The abbrev everywhere in {\tt mfold}?
+
+The building of the Manifold testing suite started with the realization that manually testing the code under various network conditions is an extremely cumbersome and error-prone process, as setting up network configurations involves multi-step technical operations, likely spanning several hosts.
+
+Thus, the objective was to implement our $O(N^{2})$ testing approach using simple, improvised means, allowing for maximum parallelism and supporting diverse real and emulated setups. The system had to be simple and flexible enough to adapt to local conditions, as the testing setups necessarily included diverse existing servers as well as (uniform) clusters.
+
+The resulting suite is a collection of shell scripts intended for use
+with Linux/Unix test machines. Scripts are launched from a single controlling
+machine using Secure Shell ({\tt ssh}).
+Log parsing is done with {\tt perl} scripts; graphs are created with {\tt gnuplot}. Manifold scripts are included in the open source implementation of our protocol~\cite{libswift}.
+
+%: bash, ssh, perl for log parsing
+%easily customize by the same means
+%parallel as possible (time) parallelize execution as well
+%pluggable; extension script and variables
+%The Manifold (mfold) test suite enables creating and running test swarms with various topologies and a unique network QoS for each peer. After distributed parsing, the test results are combined and visualized.
+
+\subsection{General workings}
+
+Manifold execution is centered around a ``fan-out'' shell script named {\tt doall} that opens parallel {\tt ssh} sessions to every server of the testing setup and runs all the necessary commands.
+Every run involves a sequence of operations, typically {\tt build}, {\tt netem}, {\tt run}, {\tt clean}, all run by {\tt doall}. For example, {\tt ./doall build} will check out a certain version of the code, check for dependencies, build it, and run fast unit tests, on every server of the testing setup, in parallel.
+Individual server quirks are resolved through per-server plug-in extension scripts or environment variable profiles. In most cases, it suffices to adjust environment variables in a profile script (named {\tt env.hostname.sh}). Given a really special platform, an extension script (e.g. {\tt build.hostname.sh}) might override the default process (i.e. {\tt build.default.sh}).
+
+In all scripts, servers are identified by their {\tt ssh} handles (as opposed to hostnames or IP addresses). That extra level of indirection allows running several testing nodes on the same physical server, or moving a node from one server to another.
+% 'instances' and 'peer' sound right. Is 'profile name' synonym to
+% 'ssh hostname?'
+%Names of the servers participating in the swarm are listed in a ``setup'' file. Multiple instances??? can be run at one physical server node by using a different profile name for each peer??? in the same machine and configuring them to use different UDP ports.
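+
+For illustration, the essence of the fan-out fits in a few lines of shell. The sketch below is simplified and is not the actual {\tt doall} source; the {\tt servers} setup file, the {\tt logs} directory, and the per-command {\tt *.default.sh} scripts are assumptions made for the example:
+\begin{verbatim}
+#!/bin/bash
+# Run one operation (build/netem/run/clean)
+# on every server of the setup, in parallel.
+cmd=$1
+for host in `cat servers`; do
+    # Prefer a per-server override if present.
+    script=$cmd.$host.sh
+    [ -f $script ] || script=$cmd.default.sh
+    ssh $host "bash -s" < $script \
+        > logs/$host.$cmd.log 2>&1 &
+done
+wait    # collect all parallel sessions
+\end{verbatim}
+The same pattern, with the environment profile sourced first, covers both uniform clusters and odd one-off servers.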
+
+%Each running instance leaves very detailed event logs; after every run, all logs are collected, parsed and summarized, the resulting data presented to the user.
+
+\subsection{Traffic manipulation}
+
+Testing the code on real nodes in the wild has its advantages and drawbacks. The main advantage is that the code is tested in a \emph{real} network. The main drawback is that live network conditions are transient and can't be fully reproduced, so different runs may not be comparable. Moreover, using real setups is expensive. Thus, we developed several test cluster setups using nodes with emulated network conditions.
+
+%In principle, mfold scripts can be used to create a peer-to-peer swarm
+%over any kind of network infrastructure, with peers residing over
+%e.g. different continents. This enables experiments in real-life
+%conditions. However, they lack in controllability and fast
+%modification of network characteristics or in the reproducibility of
+%test results. These are achieved by using also shorter distance high
+%quality connections with traffic manipulation added into the test
+%server network interfaces.
+
+We added scripts to control traffic conditions using two standard Linux kernel queuing disciplines (qdisc) for network devices, also used in our previous work \cite{perala}.
+HTB (Hierarchical Token Bucket) \cite{devera:htb} provides packet rate
+control capabilities. It also enables emulating different network conditions for several peers sharing the same physical server.
+Using Netem \cite{netem}, we added different packet delay, jitter
+and loss rules to every HTB class.
+%Our approach was similar to \cite{perala:tridentcom2010}. It can be
+%called controlled experimenting, which combines features from pure
+%network emulation in a virtualised environment to experiments with
+%real network equipment over real networks.
+Egress and ingress packet flows can have different sets of qdisc
+parameters. Ingress qdiscs are attached to an IFB (Intermediate
+Functional Block) pseudo-device. Rules are applied based on the UDP port of a packet, as every node occupies a single port.
+
+Thus, the HTB/Netem scripts allow emulating a wide range of specific network conditions and freely mixing emulated and real network setups in a single test swarm.
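+
+For concreteness, the shaping applied to one emulated peer reduces to a handful of {\tt tc} invocations. The following sketch is illustrative only: the device, rate, delay, and port are made-up values, and the actual scripts parameterize them per peer.
+\begin{verbatim}
+# One rate-limited HTB class per peer
+tc qdisc add dev eth0 root handle 1: htb
+tc class add dev eth0 parent 1: \
+    classid 1:10 htb rate 1mbit
+# Netem inside the class: delay, jitter, loss
+tc qdisc add dev eth0 parent 1:10 \
+    handle 10: netem delay 100ms 20ms loss 1%
+# Classify egress traffic by the peer's UDP port
+tc filter add dev eth0 protocol ip parent 1: \
+    u32 match ip dport 7001 0xffff flowid 1:10
+\end{verbatim}
+Ingress shaping follows the same pattern on an {\tt ifb} pseudo-device, to which incoming packets are first redirected.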
+
+\subsection{Test swarm setups}
+
+Given $N$ peers running on $k$ servers, we may use different variants of a network topology to emphasize different aspects of protocol behavior.
+We considered three types of topologies: swarm (mesh), chain (sequential data relay) and pairwise transfers. They test the code for swarming behavior, robustness, and single-stream performance, respectively.
+
+% In addition to the peer list, the swarm setup is controlled with the
+% environment variables handling the run parameters of each peer. They
+% determine e.g. the file name and hash of data to be up- or downloaded,
+% the peers own listening address and port and the tracker address - no
+% tracker address means the peer is only a seeder.
+
+\subsubsection{Large swarms}
+
+This swarm topology mostly tests the code for general robustness, creating near-real-world swarming download scenarios. The main challenge with this topology was to run bigger swarms (and to process the resulting data).
+The limits of the swarm size are determined by the number of parallel {\tt ssh}
+connections the control machine may start and the maximum number
+of peers each test server may run without exhausting its
+resources. (The former limit could be side-stepped by starting parts
+of a swarm from different control machines.) So far, swarms of about a thousand peers have been successfully run with one controlling machine (a Lenovo T400 laptop with 2GB of RAM) and 11-13 servers (Sun Fire X2100 servers with 8GB of RAM).
+
+\subsubsection{Chain tests}
+
+Chain tests are mercilessly effective in finding state machine bugs.
+In a chained setup, each node is only connected to the previous (source) and to the next (sink) node. Thus, the data has to traverse the entire chain sequentially.
+That topology is the least forgiving with regard to state machine and congestion control robustness, as a stall or a slowdown in one flow inevitably affects all the nodes further down the chain. That differs drastically from a swarm topology, which may run fairly well with 50\% of transfers failed, thanks to its high redundancy.
+Technically, our chained setup restricts node connections by starting local {\tt iptables} firewalls at every node, as sketched below.
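+
+A minimal sketch of such a per-node firewall follows; {\tt \$PREV}, {\tt \$NEXT} and {\tt \$PORT} are placeholders for the neighbours' addresses and the node's UDP port, which the real scripts derive from the setup file:
+\begin{verbatim}
+# Accept protocol traffic only from the two
+# chain neighbours; drop the rest on this port.
+iptables -A INPUT -p udp --dport $PORT \
+    -s $PREV -j ACCEPT
+iptables -A INPUT -p udp --dport $PORT \
+    -s $NEXT -j ACCEPT
+iptables -A INPUT -p udp --dport $PORT -j DROP
+\end{verbatim}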
+
+\subsubsection{Pairwise tests}
+This setup aims to cleanly test protocol behavior in different network conditions by eliminating third factors. Namely, with no swarming or data relay, precisely one transfer is done from every node to every other node.
+This topology puts flows on an equal footing, as opposed to the swarmed and chained setups, where one transfer typically depends on others.
+For larger $N$, it might pose a challenge to run $N^{2}$ streams without interference using $N$ servers. But in this particular setup we need just one node to represent one ``peculiarity'', so a larger $N$ is not needed.
+%in a way that every node deals with one stream at a time, still the entire run terminates fast enough. A straightforward solution will consume $O(N)$ time for every run.
+%(This testing regime is not implemented yet.)
+
+\begin{figure*}[hb]
+\centering
+\includegraphics[width=0.95\textwidth]{big-graph.pdf}
+\caption{A detailed graph exposes the congestion control history of a flow.}
+\label{fig:graph}
+\end{figure*}
+
+\subsection{Data harvesting}
+
+Automatic harvesting and analysis of the data turned out to be a major challenge due to its sheer volume. While sending or receiving one datagram, a peer generates 10-20 events that are necessary to understand the inner workings of the state machine. A small 10MB transfer requires tens of thousands of datagrams. Given 20-30 peers in an average setup, that results in at least $10^{7}$ log records per single run, or around 1GB of logs. Not precisely the Google scale, but that data had to be digested and delivered to the user as soon as possible, in a form that allows rapid analysis.
+
+The problem was solved the way it was created. Namely, log processing was implemented to run on the original servers, leaving the controlling machine only a one-pass log merge and the graph drawing. Thus, data harvesting and analysis were made to scale together with the cluster.
+
+% After the test is done and the peers are stopped, a data harvesting
+% script \texttt{dohrv} can be run on the control machine. This script
+% starts one log parser script for each peer on the test servers. The
+% parser scripts gather event-specific information and map the
+% information into the general swarm setup (e.g. IP addresses and ports
+% into ssh \texttt{Host} names). The general output of the parsers is
+% lines of timestamped log events with sending and receiving peer
+% names. This output is sent compressed via ssh to the controlling
+% machine. The output streams are directed through fifos to be merged
+% and sorted by their timestamps to one file. Additionally, the parser
+% scripts write timestamped parameter values (e.g. transferred data,
+% losses, RTT, OWD, CWND) into connection-specific files (identified
+% with the peer names), which are copied onto the control machine.
+
+% In order to prevent lack of resources on the control machine, when
+% parsing results of large swarms, the maximum number of parallel
+% parsings can be controlled with an environment variable. Parsings are
+% then done in a sequence of number of peers divided by maximum number
+% of parsings and sorted into temporary files. When all peers are
+% parsed, the temporary files are themselves merged and sorted.
+
+Although the bulk of parsing and statistics is done at the servers, it turned out that with larger swarms (hundreds of nodes), even maintaining that many parallel {\tt ssh} connections and merging the logs exhausted the control machine. To prevent this, we added an option to restrict the maximum number of parallel parsings; log processing may thus be done in a sequence of $\sim\frac{N}{k}$ batches, each batch of no more than $k$ logs.
+Since the number of sender-receiver pairs, and thus the number of traffic flows, might be on the order of $N^{2}$, the maximum number of running {\tt gnuplot} instances can also be limited.
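+
+The batched fetch-and-merge is plain Unix plumbing. A minimal sketch, assuming each server has already written a per-node, timestamp-sorted {\tt parsed.log}, with {\tt \$K} as the parallelism limit and made-up file names:
+\begin{verbatim}
+# Fetch parsed logs in batches of at most $K
+# parallel ssh sessions...
+i=0
+for host in `cat servers`; do
+    ssh $host "gzip -c parsed.log" \
+        | gunzip > logs/$host.log &
+    i=$((i+1))
+    [ $((i % K)) -eq 0 ] && wait
+done
+wait
+# ...then merge the pre-sorted logs in one pass,
+# keyed on the leading timestamp field.
+sort -m -k1,1 logs/*.log > all.log
+\end{verbatim}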
+
+% When data copying and sorting is ready, a gnuplot instance is started
+% for each sender-receiver pair. A plot is created showing the
+% connections data out, RTT and its standard deviation, OWD, minimum
+% OWD, CWND, CWND target and send and receive losses over time.
+
+% Since the number of sender-receiver pairs can be almost the square of
+% swarm size at maximum, the maximum number of running gnuplots can
+% again be restricted with an environment variable.
+
+% At last, an html page is generated, where information of each
+% connection is shown as an element in a matrix of senders and receivers
+% (Figure \ref{fig:harvest}). The information contains the statistics of
+% sent and received protocol messages and their types. A thumbnail
+% picture of the plotted data is included. Clicking the thumbnail shows
+% a larger plot picture.
+
+% jori 21.4: Should this be \subsection{Reports}? Isn't it part of
+% section\{Manifold test suite}?
+\subsection{Reports}
+
+\begin{figure}[t]
+%\includegraphics[width=0.45\textwidth]{swarm-tomography.pdf}
+\setlength\fboxsep{0pt}
+\setlength\fboxrule{0.5pt}
+\makebox[0pt][l]{\includegraphics[width=0.45\textwidth]{swarm-tomography.pdf}}%
+\makebox[150pt][l]{}\fbox{\includegraphics[width=0.2\textwidth]{stats-cell.pdf}}
+\caption{The main $N$ by $N$ ``harvest'' spreadsheet (back) shows the big picture. Each cell (right) provides statistics on a flow.}
+\label{fig:swarm}
+\end{figure}
+
+The resulting reports must allow the user to rapidly examine the test run traces for performance and abnormalities. The top-level report must be simple enough to let the user grok the ``big picture'' of swarm/flow behavior. Once the user focuses on a particular location or event, it must be easy to switch quickly to more detailed data, down to the full event log.
+
+After harvesting and processing the data, Manifold produces an $N$ by $N$ HTML spreadsheet showing summary stats for every flow, as well as small graphs showing the dynamics of flows (Fig.~\ref{fig:swarm}). At this point, a user is able to estimate the performance and stability of the streams. Closer inspection of every statistics bar reveals stats on message patterns. In case the summary raises suspicions, the user may navigate to a large, detailed version of the graph, which gives a good overview of congestion control behavior and network conditions during the lifetime of the flow~(see~Fig.~\ref{fig:graph}). The graph plots three groups of parameters: time-based (average round trip time, RTT deviation, one-way delay, minimum delay, delay target~\cite{ledbat}), packet-based (congestion window, outstanding packets) and events (packet losses, detected by timeout or reordering). This data is sufficient to understand in great detail how the transfer performed. Should the user be interested in finer details, the event of interest, with its causes and consequences, can be found in the full \emph{all-swarm} event log. The log is primarily analyzed with {\tt grep} and similar custom utilities. The process is helped by the uniform format of log records: (time, node, flow, event, parameters).
+
+%\begin{figure}[t]
+%\centering
+%\includegraphics[width=0.2\textwidth]{stats-cell.pdf}
+%\caption{A cell in the spreadsheet gives a summary for a flow.}
+%\label{fig:cell}
+%\end{figure}
+
+As a result, a Manifold user is able to start with a fast qualitative estimation of the swarm and flows, then delve deeper into details as necessary, down to quantitative examination of the log and event-by-event analysis.
+
+\section{Conclusion}
+
+The Manifold testing approach performs massively parallel $O(N^{2})$ case coverage, showering your code with millions of unpredictable state/event combinations.
+The results often lead to the realization that your code's performance is never ``perfect'', but is probably ``good enough'' for the conditions at hand.
+Although Manifold requires non-trivial computational resources, it can still be used in the routine code-build-test loop of software development.
+We consider Manifold a useful addition to the standard dumbbell/parking-lot toolset of network protocol testbeds.
+ + +\bibliographystyle{IEEEtran} +\bibliography{sources} + +\end{document} Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/p2tp-lancaster.pdf and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/p2tp-lancaster.pdf differ Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/p2tp-uml-er.pdf and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/p2tp-uml-er.pdf differ Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/doc/state-diagram.pdf and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/doc/state-diagram.pdf differ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/style.css tribler-6.2.0/Tribler/SwiftEngine/doc/style.css --- tribler-6.2.0/Tribler/SwiftEngine/doc/style.css 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/style.css 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,68 @@ +body { + background: white no-repeat fixed; + color: black; +} + +h1, h2, h3 { + font-family: Verdana; +} + +p { + text-align: justify; +} + +body > div { + width: 60em; + margin: auto; + margin-top: 64px; + margin-bottom: 64px; + /*background: #d0e0ff;*/ + background: rgba(208,224,255,0.9); + padding-top: 16px; + padding-bottom: 16px; +} + +img#logo { + /*display: block; + margin-left: auto; + margin-right: auto; + position: relative; + top: -40px;*/ + position:absolute; + top: 4px; +} + +body > div > h1 { + text-align:center; +} + +body > div > div { + width: 90%; + margin: auto; +} + +div#motto { + text-align: right; + font-style: italic; + font-size: larger; + margin-bottom: 28pt; +} + +div#abstract { + letter-spacing: 0.06em; + /*font-size: larger;*/ + font-style: italic; + font-family: Georgia; +} + +div.fold>h2, div.fold>h3, div.fold>h4 { + cursor: pointer; +} + +[bullet='open']:before { + content: "⊟ "; +} + +[bullet='closed']:before { + content: "⊞ "; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/swift.css tribler-6.2.0/Tribler/SwiftEngine/doc/swift.css --- tribler-6.2.0/Tribler/SwiftEngine/doc/swift.css 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/swift.css 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,121 @@ +* { +font-family: Georgia, serif; +} + +a { +text-decoration: none; +} + +img#logo { + position:absolute; + top: 40px; +} + +a:hover { +text-decoration: underline; +} + +body { +background: #A3CDEA; +margin: 0; +padding: 0px; +} + +h1, h2, h3, h4 { +font-family: Trebuchet MS, Arial, sans-serif; +padding: 0; +} + +h1 { +color: #000; +font-size: 200%; +font-weight: bold; +margin: 0; +} + +h2 { +border-top: 1px dotted #aaa; +font-size: 150%; +margin: 30px 0 10px 0; +padding: 20px 0 0 0; +} + +div > h2:first-child { +border-top: none; +margin: 0 0 10px 0; +padding: 0; +} + +h3 { +font-weight: normal; +margin: 30px 0 10px 0; +} + +h4 { +font-size: 90%; +font-weight: bold; +margin: 20px 0 10px 0; +} + + +li { +font-size: 12px; +line-height: 1.5em; +} + +p { +font-size: 12px; +line-height: 1.5em; +} + +ul { +list-style: square; +margin: 0; +padding: 0 0 0 15px; +} + + + +div#container { +margin: 0 auto 0 auto; +width: 700px; +} + +div#header { + font: 32pt Verdana bold; + color: white; + background: #44a; + padding: 5px 30px 15px 0; + text-align: center; +} + +div#header img { +vertical-align: middle; +} + +div#intro { +background: #fff; +border-bottom: 1px dotted #aaa; +padding: 25px 20px 10px 20px; +} + +div#intro p { +font-size: 16px; +} + +div#content { +background: #f6f6f6; +border-bottom: 1px solid black; +padding: 20px 20px 20px 20px; +} + +div#contact { +background: #fff; 
+padding: 20px 20px 20px 20px; +} + +div#footer { +background: #444; +color: #aaa; +padding: 20px 20px 20px 20px; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/AUTHORS tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/AUTHORS --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/AUTHORS 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/AUTHORS 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,2 @@ +Author : +Andrew Keating diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/CMakeLists.txt tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/CMakeLists.txt --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/CMakeLists.txt 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/CMakeLists.txt 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,66 @@ +# CMakeLists.txt +# +# $Id: CMakeLists.txt 31995 2010-02-24 22:32:10Z jmayer $ +# +# Wireshark - Network traffic analyzer +# By Gerald Combs +# Copyright 1998 Gerald Combs +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public License +# as published by the Free Software Foundation; either version 2 +# of the License, or (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +# + +set(DISSECTOR_SRC + packet-swift.c +) + +set(PLUGIN_FILES + plugin.c + ${DISSECTOR_SRC} +) + +set(CLEAN_FILES + ${PLUGIN_FILES} +) + +if (WERROR) + set_source_files_properties( + ${CLEAN_FILES} + PROPERTIES + COMPILE_FLAGS -Werror + ) +endif() + +include_directories(${CMAKE_CURRENT_SOURCE_DIR}) + +register_dissector_files(plugin.c + plugin + ${DISSECTOR_SRC} +) + +add_library(swift ${LINK_MODE_MODULE} + ${PLUGIN_FILES} +) +set_target_properties(swift PROPERTIES PREFIX "") +set_target_properties(swift PROPERTIES SOVERSION ${CPACK_PACKAGE_VERSION}) +set_target_properties(swift PROPERTIES LINK_FLAGS ${WS_LINK_FLAGS}) + +target_link_libraries(swift epan) + +install(TARGETS swift + LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}/@CPACK_PACKAGE_NAME@/plugins/${CPACK_PACKAGE_VERSION} NAMELINK_SKIP + RUNTIME DESTINATION ${CMAKE_INSTALL_LIBDIR}/@CPACK_PACKAGE_NAME@/plugins/${CPACK_PACKAGE_VERSION} + ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}/@CPACK_PACKAGE_NAME@/plugins/${CPACK_PACKAGE_VERSION} +) + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/COPYING tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/COPYING --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/COPYING 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/COPYING 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,340 @@ + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc. + 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. 
+ + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Library General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. (Hereinafter, translation is included without limitation in +the term "modification".) Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. 
The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. (This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Library General
+Public License instead of this License.
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile 2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,828 @@
+# Makefile.in generated by automake 1.11.1 from Makefile.am.
+# plugins/swift/Makefile. Generated from Makefile.in by configure.
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation,
+# Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+
+
+# Makefile.am
+# Automake file for swift plugin
+# By Andrew Keating
+# Copyright 2011 Andrew Keating
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+
+# Makefile.common for Interlink plugin
+# Contains the stuff from Makefile.am and Makefile.nmake that is
+# a) common to both files and
+# b) portable between both files
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. + + +pkgdatadir = $(datadir)/wireshark +pkgincludedir = $(includedir)/wireshark +pkglibdir = $(libdir)/wireshark +pkglibexecdir = $(libexecdir)/wireshark +am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd +install_sh_DATA = $(install_sh) -c -m 644 +install_sh_PROGRAM = $(install_sh) -c +install_sh_SCRIPT = $(install_sh) -c +INSTALL_HEADER = $(INSTALL_DATA) +transform = $(program_transform_name) +NORMAL_INSTALL = : +PRE_INSTALL = : +POST_INSTALL = : +NORMAL_UNINSTALL = : +PRE_UNINSTALL = : +POST_UNINSTALL = : +build_triplet = x86_64-unknown-linux-gnu +host_triplet = x86_64-unknown-linux-gnu +target_triplet = x86_64-unknown-linux-gnu +DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.common \ + $(srcdir)/Makefile.in AUTHORS COPYING ChangeLog +subdir = plugins/swift +ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 +am__aclocal_m4_deps = $(top_srcdir)/aclocal-fallback/glib-2.0.m4 \ + $(top_srcdir)/aclocal-fallback/gtk-2.0.m4 \ + $(top_srcdir)/aclocal-fallback/libgcrypt.m4 \ + $(top_srcdir)/aclocal-fallback/libsmi.m4 \ + $(top_srcdir)/acinclude.m4 $(top_srcdir)/configure.in +am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ + $(ACLOCAL_M4) +mkinstalldirs = $(install_sh) -d +CONFIG_HEADER = $(top_builddir)/config.h +CONFIG_CLEAN_FILES = +CONFIG_CLEAN_VPATH_FILES = +am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; +am__vpath_adj = case $$p in \ + $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ + *) f=$$p;; \ + esac; +am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; +am__install_max = 40 +am__nobase_strip_setup = \ + srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` +am__nobase_strip = \ + for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" +am__nobase_list = $(am__nobase_strip_setup); \ + for p in $$list; do echo "$$p $$p"; done | \ + sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ + $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ + if (++n[$$2] == $(am__install_max)) \ + { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ + END { for (dir in files) print dir, files[dir] }' +am__base_list = \ + sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ + sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' +am__installdirs = "$(DESTDIR)$(plugindir)" +LTLIBRARIES = $(plugin_LTLIBRARIES) +swift_la_DEPENDENCIES = +am__objects_1 = packet-swift.lo +am__objects_2 = +am_swift_la_OBJECTS = plugin.lo $(am__objects_1) $(am__objects_2) +swift_la_OBJECTS = $(am_swift_la_OBJECTS) +AM_V_lt = $(am__v_lt_$(V)) +am__v_lt_ = $(am__v_lt_$(AM_DEFAULT_VERBOSITY)) +am__v_lt_0 = --silent +swift_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ + $(swift_la_LDFLAGS) $(LDFLAGS) -o $@ +DEFAULT_INCLUDES = -I. 
-I$(top_builddir) +depcomp = $(SHELL) $(top_srcdir)/depcomp +am__depfiles_maybe = depfiles +am__mv = mv -f +COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ + $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) +LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ + $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ + $(AM_CFLAGS) $(CFLAGS) +AM_V_CC = $(am__v_CC_$(V)) +am__v_CC_ = $(am__v_CC_$(AM_DEFAULT_VERBOSITY)) +am__v_CC_0 = @echo " CC " $@; +AM_V_at = $(am__v_at_$(V)) +am__v_at_ = $(am__v_at_$(AM_DEFAULT_VERBOSITY)) +am__v_at_0 = @ +CCLD = $(CC) +LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ + $(AM_LDFLAGS) $(LDFLAGS) -o $@ +AM_V_CCLD = $(am__v_CCLD_$(V)) +am__v_CCLD_ = $(am__v_CCLD_$(AM_DEFAULT_VERBOSITY)) +am__v_CCLD_0 = @echo " CCLD " $@; +AM_V_GEN = $(am__v_GEN_$(V)) +am__v_GEN_ = $(am__v_GEN_$(AM_DEFAULT_VERBOSITY)) +am__v_GEN_0 = @echo " GEN " $@; +SOURCES = $(swift_la_SOURCES) +DIST_SOURCES = $(swift_la_SOURCES) +ETAGS = etags +CTAGS = ctags +DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) +ACLOCAL = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run aclocal-1.11 +ADNS_LIBS = +AMTAR = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run tar +AM_DEFAULT_VERBOSITY = 1 +AR = ar +AUTOCONF = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run autoconf +AUTOHEADER = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run autoheader +AUTOMAKE = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run automake-1.11 +AWK = gawk +CC = gcc +CCDEPMODE = depmode=gcc3 +CC_FOR_BUILD = gcc +CFLAGS = -DINET6 -D_U_="__attribute__((unused))" -g -O2 -Wall -W -Wextra -Wdeclaration-after-statement -Wendif-labels -Wpointer-arith -Wno-pointer-sign -Warray-bounds -Wcast-align -Wformat-security -I/usr/local/include -pthread -D_REENTRANT -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/gio-unix-2.0/ -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/directfb -I/usr/include/libpng12 -I/usr/include/pcap +CORESERVICES_FRAMEWORKS = +CPP = gcc -E +CPPFLAGS = -I/usr/local/include -I/usr/include/pcap '-DPLUGIN_DIR="$(plugindir)"' +CXX = g++ +CXXCPP = g++ -E +CXXDEPMODE = depmode=gcc3 +CXXFLAGS = -g -O2 -pthread -D_REENTRANT -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/gio-unix-2.0/ -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/directfb -I/usr/include/libpng12 +CYGPATH_W = echo +C_ARES_LIBS = +DEFS = -DHAVE_CONFIG_H +DEPDIR = .deps +DOXYGEN = +DSYMUTIL = +DUMPBIN = +DUMPCAP_GROUP = +ECHO_C = +ECHO_N = -n +ECHO_T = +EGREP = /bin/grep -E +ELINKS = +ENABLE_STATIC = +EXEEXT = +FGREP = /bin/grep -F +FLEX_PATH = +FOP = +GEOIP_LIBS = +GETOPT_LO = +GLIB_CFLAGS = -pthread -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include +GLIB_GENMARSHAL = glib-genmarshal +GLIB_LIBS = -Wl,--export-dynamic -pthread -lgmodule-2.0 -lrt -lglib-2.0 +GLIB_MKENUMS = glib-mkenums +GOBJECT_QUERY = gobject-query +GREP = /bin/grep +GTK_CFLAGS = -pthread -D_REENTRANT -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo 
-I/usr/include/pango-1.0 -I/usr/include/gio-unix-2.0/ -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/directfb -I/usr/include/libpng12 +GTK_LIBS = -pthread -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lgdk_pixbuf-2.0 -lm -lpangocairo-1.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lrt -lglib-2.0 +HAVE_BLESS = no +HAVE_DOXYGEN = no +HAVE_DPKG_BUILDPACKAGE = yes +HAVE_ELINKS = no +HAVE_FOP = no +HAVE_HDIUTIL = no +HAVE_HHC = no +HAVE_LYNX = no +HAVE_OSX_PACKAGING = no +HAVE_PKGMK = no +HAVE_PKGPROTO = no +HAVE_PKGTRANS = no +HAVE_RPM = +HAVE_SVR4_PACKAGING = no +HAVE_XCODEBUILD = no +HAVE_XMLLINT = yes +HAVE_XSLTPROC = yes +HHC = +HTML_VIEWER = /usr/bin/xdg-open +INET_ATON_LO = +INET_NTOP_LO = +INET_PTON_LO = +INSTALL = /usr/bin/install -c +INSTALL_DATA = ${INSTALL} -m 644 +INSTALL_PROGRAM = ${INSTALL} +INSTALL_SCRIPT = ${INSTALL} +INSTALL_STRIP_PROGRAM = $(install_sh) -c -s +KRB5_CONFIG = +KRB5_LIBS = +LAUNCHSERVICES_FRAMEWORKS = +LD = /usr/bin/ld -m elf_x86_64 +LDFLAGS = -Wl,--as-needed -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib +LDFLAGS_SHAREDLIB = +LEX = /usr/bin/flex +LEXLIB = -lfl +LEX_OUTPUT_ROOT = lex.yy +LIBCAP_LIBS = +LIBGCRYPT_CFLAGS = +LIBGCRYPT_CONFIG = no +LIBGCRYPT_LIBS = +LIBGNUTLS_CFLAGS = +LIBGNUTLS_LIBS = +LIBOBJS = + +# Libs must be cleared, or else libtool won't create a shared module. +# If your module needs to be linked against any particular libraries, +# add them here. +LIBS = +LIBSMI_CFLAGS = +LIBSMI_LDFLAGS = +LIBSMI_VERSION = +LIBTOOL = $(SHELL) $(top_builddir)/libtool +LIBTOOL_DEPS = ./ltmain.sh +LIPO = +LN_S = ln -s +LTLIBOBJS = +LUA_INCLUDES = +LUA_LIBS = +LYNX = +MAKEINFO = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/missing --run makeinfo +MKDIR_P = /bin/mkdir -p +NM = /usr/bin/nm -B +NMEDIT = +NSL_LIBS = +OBJDUMP = objdump +OBJEXT = o +OTOOL = +OTOOL64 = +PACKAGE = wireshark +PACKAGE_BUGREPORT = +PACKAGE_NAME = wireshark +PACKAGE_STRING = wireshark 1.4.7 +PACKAGE_TARNAME = wireshark +PACKAGE_URL = +PACKAGE_VERSION = 1.4.7 +PATH_SEPARATOR = : +PCAP_CONFIG = +PCAP_LIBS = -lpcap +PCRE_LIBS = +PERL = /usr/bin/perl +PKG_CONFIG = /usr/bin/pkg-config +PLUGIN_LIBS = +POD2HTML = /usr/bin/pod2html +POD2MAN = /usr/bin/pod2man +PORTAUDIO_INCLUDES = +PORTAUDIO_LIBS = +PYTHON = /usr/bin/python +PY_CFLAGS = +PY_LIBS = +RANLIB = ranlib +SED = /bin/sed +SETCAP = /sbin/setcap +SET_MAKE = +SHELL = /bin/bash +SOCKET_LIBS = +SSL_LIBS = +STRERROR_LO = +STRIP = strip +STRNCASECMP_LO = +STRPTIME_C = +STRPTIME_LO = +VERSION = 1.4.7 +XMLLINT = /usr/bin/xmllint +XSLTPROC = /usr/bin/xsltproc +YACC = bison -y +YACCDUMMY = /usr/bin/bison +YFLAGS = +abs_builddir = /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/plugins/swift +abs_srcdir = /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/plugins/swift +abs_top_builddir = /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7 +abs_top_srcdir = /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7 +ac_ct_CC = gcc +ac_ct_CXX = g++ +ac_ct_DUMPBIN = +ac_cv_wireshark_have_rpm = no +ac_ws_python_config = +am__include = include +am__leading_dot = . +am__quote = +am__tar = tar --format=ustar -chf - "$$tardir" +am__untar = tar -xf - +bindir = ${exec_prefix}/bin +build = x86_64-unknown-linux-gnu +build_alias = +build_cpu = x86_64 +build_os = linux-gnu +build_vendor = unknown +builddir = . 
+capinfos_bin = capinfos$(EXEEXT) +capinfos_man = capinfos.1 +datadir = ${datarootdir} +datarootdir = ${prefix}/share +dftest_bin = dftest$(EXEEXT) +dftest_man = dftest.1 +docdir = /root/build/root/share/doc/wireshark +dumpcap_bin = dumpcap$(EXEEXT) +dumpcap_man = dumpcap.1 +dvidir = ${docdir} +editcap_bin = editcap$(EXEEXT) +editcap_man = editcap.1 +exec_prefix = ${prefix} +host = x86_64-unknown-linux-gnu +host_alias = +host_cpu = x86_64 +host_os = linux-gnu +host_vendor = unknown +htmldir = ${docdir} +idl2wrs_bin = idl2wrs +idl2wrs_man = idl2wrs.1 +includedir = ${prefix}/include +infodir = ${datarootdir}/info +install_sh = ${SHELL} /home/andrewk/Documents/P2P/dissector/wireshark-1.4.7/install-sh +libdir = ${exec_prefix}/lib +libexecdir = ${exec_prefix}/libexec +localedir = ${datarootdir}/locale +localstatedir = ${prefix}/var +lt_ECHO = echo +mandir = ${datarootdir}/man +mergecap_bin = mergecap$(EXEEXT) +mergecap_man = mergecap.1 +mkdir_p = /bin/mkdir -p +oldincludedir = /usr/include +pdfdir = ${docdir} +plugindir = ${libdir}/wireshark/plugins/${VERSION} +prefix = /root/build/root +program_transform_name = s,x,x, +psdir = ${docdir} +pythondir = +randpkt_bin = randpkt$(EXEEXT) +randpkt_man = randpkt.1 +rawshark_bin = rawshark$(EXEEXT) +rawshark_man = rawshark.1 +sbindir = ${exec_prefix}/sbin +sharedstatedir = ${prefix}/com +srcdir = . +sysconfdir = ${prefix}/etc +target = x86_64-unknown-linux-gnu +target_alias = +target_cpu = x86_64 +target_os = linux-gnu +target_vendor = unknown +text2pcap_bin = text2pcap$(EXEEXT) +text2pcap_man = text2pcap.1 +top_build_prefix = ../../ +top_builddir = ../.. +top_srcdir = ../.. +tshark_bin = tshark$(EXEEXT) +tshark_man = tshark.1 +wireshark_SUBDIRS = codecs gtk +wireshark_bin = wireshark$(EXEEXT) +wireshark_man = wireshark.1 +wiresharkfilter_man = wireshark-filter.4 +INCLUDES = -I$(top_srcdir) -I$(includedir) + +# the name of the plugin +PLUGIN_NAME = swift + +# the dissector sources (without any helpers) +DISSECTOR_SRC = \ + packet-swift.c + + +# Dissector helpers. They're included in the source files in this +# directory, but they're not dissectors themselves, i.e. they're not +# used to generate "plugin.c". +DISSECTOR_SUPPORT_SRC = +#AM_CFLAGS = -Werror +plugin_LTLIBRARIES = swift.la +swift_la_SOURCES = \ + plugin.c \ + moduleinfo.h \ + $(DISSECTOR_SRC) \ + $(DISSECTOR_SUPPORT_SRC) \ + $(DISSECTOR_INCLUDES) + +swift_la_LDFLAGS = -module -avoid-version +swift_la_LIBADD = + +# +# Currently plugin.c can be included in the distribution because +# we always build all protocol dissectors. We used to have to check +# whether or not to build the snmp dissector. If we again need to +# variably build something, making plugin.c non-portable, uncomment +# the dist-hook line below. +# +# Oh, yuk. We don't want to include "plugin.c" in the distribution, as +# its contents depend on the configuration, and therefore we want it +# to be built when the first "make" is done; however, Automake insists +# on putting *all* source into the distribution. +# +# We work around this by having a "dist-hook" rule that deletes +# "plugin.c", so that "dist" won't pick it up. 
+# +#dist-hook: +# @rm -f $(distdir)/plugin.c +CLEANFILES = \ + swift \ + *~ + +MAINTAINERCLEANFILES = \ + Makefile.in \ + plugin.c + +EXTRA_DIST = \ + Makefile.common \ + Makefile.nmake \ + moduleinfo.nmake \ + plugin.rc.in \ + CMakeLists.txt + +all: all-am + +.SUFFIXES: +.SUFFIXES: .c .lo .o .obj +$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(srcdir)/Makefile.common $(am__configure_deps) + @for dep in $?; do \ + case '$(am__configure_deps)' in \ + *$$dep*) \ + ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ + && { if test -f $@; then exit 0; else break; fi; }; \ + exit 1;; \ + esac; \ + done; \ + echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu plugins/swift/Makefile'; \ + $(am__cd) $(top_srcdir) && \ + $(AUTOMAKE) --gnu plugins/swift/Makefile +.PRECIOUS: Makefile +Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status + @case '$?' in \ + *config.status*) \ + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ + *) \ + echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ + cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ + esac; + +$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh + +$(top_srcdir)/configure: $(am__configure_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(ACLOCAL_M4): $(am__aclocal_m4_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(am__aclocal_m4_deps): +install-pluginLTLIBRARIES: $(plugin_LTLIBRARIES) + @$(NORMAL_INSTALL) + test -z "$(plugindir)" || $(MKDIR_P) "$(DESTDIR)$(plugindir)" + @list='$(plugin_LTLIBRARIES)'; test -n "$(plugindir)" || list=; \ + list2=; for p in $$list; do \ + if test -f $$p; then \ + list2="$$list2 $$p"; \ + else :; fi; \ + done; \ + test -z "$$list2" || { \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(plugindir)'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(plugindir)"; \ + } + +uninstall-pluginLTLIBRARIES: + @$(NORMAL_UNINSTALL) + @list='$(plugin_LTLIBRARIES)'; test -n "$(plugindir)" || list=; \ + for p in $$list; do \ + $(am__strip_dir) \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(plugindir)/$$f'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(plugindir)/$$f"; \ + done + +clean-pluginLTLIBRARIES: + -test -z "$(plugin_LTLIBRARIES)" || rm -f $(plugin_LTLIBRARIES) + @list='$(plugin_LTLIBRARIES)'; for p in $$list; do \ + dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ + test "$$dir" != "$$p" || dir=.; \ + echo "rm -f \"$${dir}/so_locations\""; \ + rm -f "$${dir}/so_locations"; \ + done +swift.la: $(swift_la_OBJECTS) $(swift_la_DEPENDENCIES) + $(AM_V_CCLD)$(swift_la_LINK) -rpath $(plugindir) $(swift_la_OBJECTS) $(swift_la_LIBADD) $(LIBS) + +mostlyclean-compile: + -rm -f *.$(OBJEXT) + +distclean-compile: + -rm -f *.tab.c + +include ./$(DEPDIR)/packet-swift.Plo +include ./$(DEPDIR)/plugin.Plo + +.c.o: + $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< + $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po +# $(AM_V_CC) \ +# source='$<' object='$@' libtool=no \ +# DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) \ +# $(COMPILE) -c $< + +.c.obj: + $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` + $(AM_V_at)$(am__mv) 
$(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po +# $(AM_V_CC) \ +# source='$<' object='$@' libtool=no \ +# DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) \ +# $(COMPILE) -c `$(CYGPATH_W) '$<'` + +.c.lo: + $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< + $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo +# $(AM_V_CC) \ +# source='$<' object='$@' libtool=yes \ +# DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) \ +# $(LTCOMPILE) -c -o $@ $< + +mostlyclean-libtool: + -rm -f *.lo + +clean-libtool: + -rm -rf .libs _libs + +ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + mkid -fID $$unique +tags: TAGS + +TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ + $(TAGS_FILES) $(LISP) + set x; \ + here=`pwd`; \ + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + shift; \ + if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ + test -n "$$unique" || unique=$$empty_fix; \ + if test $$# -gt 0; then \ + $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ + "$$@" $$unique; \ + else \ + $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ + $$unique; \ + fi; \ + fi +ctags: CTAGS +CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ + $(TAGS_FILES) $(LISP) + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + test -z "$(CTAGS_ARGS)$$unique" \ + || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ + $$unique + +GTAGS: + here=`$(am__cd) $(top_builddir) && pwd` \ + && $(am__cd) $(top_srcdir) \ + && gtags -i $(GTAGS_ARGS) "$$here" + +distclean-tags: + -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags + +distdir: $(DISTFILES) + @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + list='$(DISTFILES)'; \ + dist_files=`for file in $$list; do echo $$file; done | \ + sed -e "s|^$$srcdirstrip/||;t" \ + -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ + case $$dist_files in \ + */*) $(MKDIR_P) `echo "$$dist_files" | \ + sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ + sort -u` ;; \ + esac; \ + for file in $$dist_files; do \ + if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ + if test -d $$d/$$file; then \ + dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ + if test -d "$(distdir)/$$file"; then \ + find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ + cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ + find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ + else \ + test -f "$(distdir)/$$file" \ + || cp -p $$d/$$file "$(distdir)/$$file" \ + || exit 1; \ + fi; \ + done +check-am: all-am +check: check-am +all-am: Makefile $(LTLIBRARIES) +installdirs: + for dir in "$(DESTDIR)$(plugindir)"; do \ + test -z "$$dir" || $(MKDIR_P) "$$dir"; \ + done +install: install-am +install-exec: install-exec-am +install-data: install-data-am +uninstall: uninstall-am + +install-am: all-am + @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am + +installcheck: installcheck-am +install-strip: + $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ + install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ + `test -z '$(STRIP)' || \ + echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install +mostlyclean-generic: + +clean-generic: + -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) + +distclean-generic: + -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) + -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) + +maintainer-clean-generic: + @echo "This command is intended for maintainers to use" + @echo "it deletes files that may require special tools to rebuild." + -test -z "$(MAINTAINERCLEANFILES)" || rm -f $(MAINTAINERCLEANFILES) +clean: clean-am + +clean-am: clean-generic clean-libtool clean-pluginLTLIBRARIES \ + mostlyclean-am + +distclean: distclean-am + -rm -rf ./$(DEPDIR) + -rm -f Makefile +distclean-am: clean-am distclean-compile distclean-generic \ + distclean-tags + +dvi: dvi-am + +dvi-am: + +html: html-am + +html-am: + +info: info-am + +info-am: + +install-data-am: install-pluginLTLIBRARIES + +install-dvi: install-dvi-am + +install-dvi-am: + +install-exec-am: + +install-html: install-html-am + +install-html-am: + +install-info: install-info-am + +install-info-am: + +install-man: + +install-pdf: install-pdf-am + +install-pdf-am: + +install-ps: install-ps-am + +install-ps-am: + +installcheck-am: + +maintainer-clean: maintainer-clean-am + -rm -rf ./$(DEPDIR) + -rm -f Makefile +maintainer-clean-am: distclean-am maintainer-clean-generic + +mostlyclean: mostlyclean-am + +mostlyclean-am: mostlyclean-compile mostlyclean-generic \ + mostlyclean-libtool + +pdf: pdf-am + +pdf-am: + +ps: ps-am + +ps-am: + +uninstall-am: uninstall-pluginLTLIBRARIES + +.MAKE: install-am install-strip + +.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \ + clean-libtool clean-pluginLTLIBRARIES ctags distclean \ + distclean-compile distclean-generic distclean-libtool \ + distclean-tags distdir dvi dvi-am html html-am info info-am \ + install install-am install-data install-data-am install-dvi \ + install-dvi-am install-exec install-exec-am install-html \ + install-html-am install-info install-info-am install-man \ + install-pdf install-pdf-am install-pluginLTLIBRARIES \ + install-ps install-ps-am install-strip installcheck \ + installcheck-am installdirs maintainer-clean \ + maintainer-clean-generic mostlyclean mostlyclean-compile \ + mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ + tags uninstall uninstall-am uninstall-pluginLTLIBRARIES + + +# +# Build plugin.c, which contains the plugin version[] string, a +# function plugin_register() that calls the register routines for all +# protocols, and a function plugin_reg_handoff() that calls the handoff +# registration routines for all protocols. +# +# We do this by scanning sources. 
If that turns out to be too slow,
+# maybe we could just require every .o file to have a register routine
+# of a given name (packet-aarp.o -> proto_register_aarp, etc.).
+#
+# Formatting conventions: The name of the proto_register_* routines and
+# proto_reg_handoff_* routines must start in column zero, or must be
+# preceded only by "void " starting in column zero, and must not be
+# inside #if.
+#
+# DISSECTOR_SRC is assumed to have all the files that need to be scanned.
+#
+# For some unknown reason, having a big "for" loop in the Makefile
+# to scan all the files doesn't work with some "make"s; they seem to
+# pass only the first few names in the list to the shell, for some
+# reason.
+#
+# Therefore, we have a script to generate the plugin.c file.
+# The shell script runs slowly, as multiple greps and seds are run
+# for each input file; this is especially slow on Windows. Therefore,
+# if Python is present (as indicated by PYTHON being defined), we run
+# a faster Python script to do that work instead.
+#
+# The first argument is the directory in which the source files live.
+# The second argument is "plugin", to indicate that we should build
+# a plugin.c file for a plugin.
+# All subsequent arguments are the files to scan.
+#
+plugin.c: $(DISSECTOR_SRC) $(top_srcdir)/tools/make-dissector-reg \
+	$(top_srcdir)/tools/make-dissector-reg.py
+	@if test -n "$(PYTHON)"; then \
+		echo Making plugin.c with python ; \
+		$(PYTHON) $(top_srcdir)/tools/make-dissector-reg.py $(srcdir) \
+		    plugin $(DISSECTOR_SRC) ; \
+	else \
+		echo Making plugin.c with shell script ; \
+		$(top_srcdir)/tools/make-dissector-reg $(srcdir) \
+		    $(plugin_src) plugin $(DISSECTOR_SRC) ; \
+	fi
+
+checkapi:
+	$(PERL) $(top_srcdir)/tools/checkAPIs.pl -g abort -g termoutput $(DISSECTOR_SRC) $(DISSECTOR_INCLUDES)
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.am tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.am
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.am	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.am	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,131 @@
+# Makefile.am
+# Automake file for swift plugin
+# By Andrew Keating
+# Copyright 2011 Andrew Keating
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs <gerald@wireshark.org>
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
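Both this Makefile.am and the generated Makefile above build plugin.c by scanning $(DISSECTOR_SRC) for registration routines, per the comments above. A short sketch of the source-file convention the scanner relies on, assuming the stock tools/make-dissector-reg behaviour described there; proto_register_swift is the real routine defined in packet-swift.c later in this patch, while proto_register_disabled is purely hypothetical:

    /* Found: the routine name (preceded only by "void " starting in
       column zero, or itself starting in column zero) is not wrapped
       in #if, so the generated plugin_register() will call it. */
    void
    proto_register_swift(void)
    {
        /* field and subtree registration goes here */
    }

    /* Not found: the scanner skips routines hidden inside #if. */
    #if 0
    void proto_register_disabled(void) { }
    #endif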
+
+INCLUDES = -I$(top_srcdir) -I$(includedir)
+
+include Makefile.common
+
+if HAVE_WARNINGS_AS_ERRORS
+AM_CFLAGS = -Werror
+endif
+
+plugindir = @plugindir@
+
+plugin_LTLIBRARIES = swift.la
+swift_la_SOURCES = \
+	plugin.c \
+	moduleinfo.h \
+	$(DISSECTOR_SRC) \
+	$(DISSECTOR_SUPPORT_SRC) \
+	$(DISSECTOR_INCLUDES)
+swift_la_LDFLAGS = -module -avoid-version
+swift_la_LIBADD = @PLUGIN_LIBS@
+
+# Libs must be cleared, or else libtool won't create a shared module.
+# If your module needs to be linked against any particular libraries,
+# add them here.
+LIBS =
+
+#
+# Build plugin.c, which contains the plugin version[] string, a
+# function plugin_register() that calls the register routines for all
+# protocols, and a function plugin_reg_handoff() that calls the handoff
+# registration routines for all protocols.
+#
+# We do this by scanning sources. If that turns out to be too slow,
+# maybe we could just require every .o file to have a register routine
+# of a given name (packet-aarp.o -> proto_register_aarp, etc.).
+#
+# Formatting conventions: The name of the proto_register_* routines and
+# proto_reg_handoff_* routines must start in column zero, or must be
+# preceded only by "void " starting in column zero, and must not be
+# inside #if.
+#
+# DISSECTOR_SRC is assumed to have all the files that need to be scanned.
+#
+# For some unknown reason, having a big "for" loop in the Makefile
+# to scan all the files doesn't work with some "make"s; they seem to
+# pass only the first few names in the list to the shell, for some
+# reason.
+#
+# Therefore, we have a script to generate the plugin.c file.
+# The shell script runs slowly, as multiple greps and seds are run
+# for each input file; this is especially slow on Windows. Therefore,
+# if Python is present (as indicated by PYTHON being defined), we run
+# a faster Python script to do that work instead.
+#
+# The first argument is the directory in which the source files live.
+# The second argument is "plugin", to indicate that we should build
+# a plugin.c file for a plugin.
+# All subsequent arguments are the files to scan.
+#
+plugin.c: $(DISSECTOR_SRC) $(top_srcdir)/tools/make-dissector-reg \
+	$(top_srcdir)/tools/make-dissector-reg.py
+	@if test -n "$(PYTHON)"; then \
+		echo Making plugin.c with python ; \
+		$(PYTHON) $(top_srcdir)/tools/make-dissector-reg.py $(srcdir) \
+		    plugin $(DISSECTOR_SRC) ; \
+	else \
+		echo Making plugin.c with shell script ; \
+		$(top_srcdir)/tools/make-dissector-reg $(srcdir) \
+		    $(plugin_src) plugin $(DISSECTOR_SRC) ; \
+	fi
+
+#
+# Currently plugin.c can be included in the distribution because
+# we always build all protocol dissectors. We used to have to check
+# whether or not to build the snmp dissector. If we again need to
+# variably build something, making plugin.c non-portable, uncomment
+# the dist-hook line below.
+#
+# Oh, yuk. We don't want to include "plugin.c" in the distribution, as
+# its contents depend on the configuration, and therefore we want it
+# to be built when the first "make" is done; however, Automake insists
+# on putting *all* source into the distribution.
+#
+# We work around this by having a "dist-hook" rule that deletes
+# "plugin.c", so that "dist" won't pick it up.
+#
+#dist-hook:
+#	@rm -f $(distdir)/plugin.c
+
+CLEANFILES = \
+	swift \
+	*~
+
+MAINTAINERCLEANFILES = \
+	Makefile.in \
+	plugin.c
+
+EXTRA_DIST = \
+	Makefile.common \
+	Makefile.nmake \
+	moduleinfo.nmake \
+	plugin.rc.in \
+	CMakeLists.txt
+
+checkapi:
+	$(PERL) $(top_srcdir)/tools/checkAPIs.pl -g abort -g termoutput $(DISSECTOR_SRC) $(DISSECTOR_INCLUDES)
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.common tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.common
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.common	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.common	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,36 @@
+# Makefile.common for the swift plugin
+# Contains the stuff from Makefile.am and Makefile.nmake that is
+# a) common to both files and
+# b) portable between both files
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs <gerald@wireshark.org>
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+# the name of the plugin
+PLUGIN_NAME = swift
+
+# the dissector sources (without any helpers)
+DISSECTOR_SRC = \
+	packet-swift.c
+
+# Dissector helpers. They're included in the source files in this
+# directory, but they're not dissectors themselves, i.e. they're not
+# used to generate "plugin.c".
+DISSECTOR_SUPPORT_SRC =
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.in tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.in
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.in	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.in	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,828 @@
+# Makefile.in generated by automake 1.11.1 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation,
+# Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+
+# Makefile.am
+# Automake file for swift plugin
+# By Andrew Keating
+# Copyright 2011 Andrew Keating
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs <gerald@wireshark.org>
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+
+# Makefile.common for the swift plugin
+# Contains the stuff from Makefile.am and Makefile.nmake that is
+# a) common to both files and
+# b) portable between both files
+#
+# $Id$
+#
+# Wireshark - Network traffic analyzer
+# By Gerald Combs <gerald@wireshark.org>
+# Copyright 1998 Gerald Combs
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ +VPATH = @srcdir@ +pkgdatadir = $(datadir)/@PACKAGE@ +pkgincludedir = $(includedir)/@PACKAGE@ +pkglibdir = $(libdir)/@PACKAGE@ +pkglibexecdir = $(libexecdir)/@PACKAGE@ +am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd +install_sh_DATA = $(install_sh) -c -m 644 +install_sh_PROGRAM = $(install_sh) -c +install_sh_SCRIPT = $(install_sh) -c +INSTALL_HEADER = $(INSTALL_DATA) +transform = $(program_transform_name) +NORMAL_INSTALL = : +PRE_INSTALL = : +POST_INSTALL = : +NORMAL_UNINSTALL = : +PRE_UNINSTALL = : +POST_UNINSTALL = : +build_triplet = @build@ +host_triplet = @host@ +target_triplet = @target@ +DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.common \ + $(srcdir)/Makefile.in AUTHORS COPYING ChangeLog +subdir = plugins/swift +ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 +am__aclocal_m4_deps = $(top_srcdir)/aclocal-fallback/glib-2.0.m4 \ + $(top_srcdir)/aclocal-fallback/gtk-2.0.m4 \ + $(top_srcdir)/aclocal-fallback/libgcrypt.m4 \ + $(top_srcdir)/aclocal-fallback/libsmi.m4 \ + $(top_srcdir)/acinclude.m4 $(top_srcdir)/configure.in +am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ + $(ACLOCAL_M4) +mkinstalldirs = $(install_sh) -d +CONFIG_HEADER = $(top_builddir)/config.h +CONFIG_CLEAN_FILES = +CONFIG_CLEAN_VPATH_FILES = +am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; +am__vpath_adj = case $$p in \ + $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ + *) f=$$p;; \ + esac; +am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; +am__install_max = 40 +am__nobase_strip_setup = \ + srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` +am__nobase_strip = \ + for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" +am__nobase_list = $(am__nobase_strip_setup); \ + for p in $$list; do echo "$$p $$p"; done | \ + sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ + $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ + if (++n[$$2] == $(am__install_max)) \ + { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ + END { for (dir in files) print dir, files[dir] }' +am__base_list = \ + sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ + sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' +am__installdirs = "$(DESTDIR)$(plugindir)" +LTLIBRARIES = $(plugin_LTLIBRARIES) +swift_la_DEPENDENCIES = +am__objects_1 = packet-swift.lo +am__objects_2 = +am_swift_la_OBJECTS = plugin.lo $(am__objects_1) $(am__objects_2) +swift_la_OBJECTS = $(am_swift_la_OBJECTS) +AM_V_lt = $(am__v_lt_$(V)) +am__v_lt_ = $(am__v_lt_$(AM_DEFAULT_VERBOSITY)) +am__v_lt_0 = --silent +swift_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ + $(swift_la_LDFLAGS) $(LDFLAGS) -o $@ +DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir) +depcomp = $(SHELL) $(top_srcdir)/depcomp +am__depfiles_maybe = depfiles +am__mv = mv -f +COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ + $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) +LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ + $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ + $(AM_CFLAGS) $(CFLAGS) +AM_V_CC = $(am__v_CC_$(V)) +am__v_CC_ = $(am__v_CC_$(AM_DEFAULT_VERBOSITY)) +am__v_CC_0 = @echo " CC " $@; +AM_V_at = $(am__v_at_$(V)) +am__v_at_ = $(am__v_at_$(AM_DEFAULT_VERBOSITY)) +am__v_at_0 = @ +CCLD = $(CC) +LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ + $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ + 
$(AM_LDFLAGS) $(LDFLAGS) -o $@ +AM_V_CCLD = $(am__v_CCLD_$(V)) +am__v_CCLD_ = $(am__v_CCLD_$(AM_DEFAULT_VERBOSITY)) +am__v_CCLD_0 = @echo " CCLD " $@; +AM_V_GEN = $(am__v_GEN_$(V)) +am__v_GEN_ = $(am__v_GEN_$(AM_DEFAULT_VERBOSITY)) +am__v_GEN_0 = @echo " GEN " $@; +SOURCES = $(swift_la_SOURCES) +DIST_SOURCES = $(swift_la_SOURCES) +ETAGS = etags +CTAGS = ctags +DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) +ACLOCAL = @ACLOCAL@ +ADNS_LIBS = @ADNS_LIBS@ +AMTAR = @AMTAR@ +AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ +AR = @AR@ +AUTOCONF = @AUTOCONF@ +AUTOHEADER = @AUTOHEADER@ +AUTOMAKE = @AUTOMAKE@ +AWK = @AWK@ +CC = @CC@ +CCDEPMODE = @CCDEPMODE@ +CC_FOR_BUILD = @CC_FOR_BUILD@ +CFLAGS = @CFLAGS@ +CORESERVICES_FRAMEWORKS = @CORESERVICES_FRAMEWORKS@ +CPP = @CPP@ +CPPFLAGS = @CPPFLAGS@ +CXX = @CXX@ +CXXCPP = @CXXCPP@ +CXXDEPMODE = @CXXDEPMODE@ +CXXFLAGS = @CXXFLAGS@ +CYGPATH_W = @CYGPATH_W@ +C_ARES_LIBS = @C_ARES_LIBS@ +DEFS = @DEFS@ +DEPDIR = @DEPDIR@ +DOXYGEN = @DOXYGEN@ +DSYMUTIL = @DSYMUTIL@ +DUMPBIN = @DUMPBIN@ +DUMPCAP_GROUP = @DUMPCAP_GROUP@ +ECHO_C = @ECHO_C@ +ECHO_N = @ECHO_N@ +ECHO_T = @ECHO_T@ +EGREP = @EGREP@ +ELINKS = @ELINKS@ +ENABLE_STATIC = @ENABLE_STATIC@ +EXEEXT = @EXEEXT@ +FGREP = @FGREP@ +FLEX_PATH = @FLEX_PATH@ +FOP = @FOP@ +GEOIP_LIBS = @GEOIP_LIBS@ +GETOPT_LO = @GETOPT_LO@ +GLIB_CFLAGS = @GLIB_CFLAGS@ +GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ +GLIB_LIBS = @GLIB_LIBS@ +GLIB_MKENUMS = @GLIB_MKENUMS@ +GOBJECT_QUERY = @GOBJECT_QUERY@ +GREP = @GREP@ +GTK_CFLAGS = @GTK_CFLAGS@ +GTK_LIBS = @GTK_LIBS@ +HAVE_BLESS = @HAVE_BLESS@ +HAVE_DOXYGEN = @HAVE_DOXYGEN@ +HAVE_DPKG_BUILDPACKAGE = @HAVE_DPKG_BUILDPACKAGE@ +HAVE_ELINKS = @HAVE_ELINKS@ +HAVE_FOP = @HAVE_FOP@ +HAVE_HDIUTIL = @HAVE_HDIUTIL@ +HAVE_HHC = @HAVE_HHC@ +HAVE_LYNX = @HAVE_LYNX@ +HAVE_OSX_PACKAGING = @HAVE_OSX_PACKAGING@ +HAVE_PKGMK = @HAVE_PKGMK@ +HAVE_PKGPROTO = @HAVE_PKGPROTO@ +HAVE_PKGTRANS = @HAVE_PKGTRANS@ +HAVE_RPM = @HAVE_RPM@ +HAVE_SVR4_PACKAGING = @HAVE_SVR4_PACKAGING@ +HAVE_XCODEBUILD = @HAVE_XCODEBUILD@ +HAVE_XMLLINT = @HAVE_XMLLINT@ +HAVE_XSLTPROC = @HAVE_XSLTPROC@ +HHC = @HHC@ +HTML_VIEWER = @HTML_VIEWER@ +INET_ATON_LO = @INET_ATON_LO@ +INET_NTOP_LO = @INET_NTOP_LO@ +INET_PTON_LO = @INET_PTON_LO@ +INSTALL = @INSTALL@ +INSTALL_DATA = @INSTALL_DATA@ +INSTALL_PROGRAM = @INSTALL_PROGRAM@ +INSTALL_SCRIPT = @INSTALL_SCRIPT@ +INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ +KRB5_CONFIG = @KRB5_CONFIG@ +KRB5_LIBS = @KRB5_LIBS@ +LAUNCHSERVICES_FRAMEWORKS = @LAUNCHSERVICES_FRAMEWORKS@ +LD = @LD@ +LDFLAGS = @LDFLAGS@ +LDFLAGS_SHAREDLIB = @LDFLAGS_SHAREDLIB@ +LEX = @LEX@ +LEXLIB = @LEXLIB@ +LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@ +LIBCAP_LIBS = @LIBCAP_LIBS@ +LIBGCRYPT_CFLAGS = @LIBGCRYPT_CFLAGS@ +LIBGCRYPT_CONFIG = @LIBGCRYPT_CONFIG@ +LIBGCRYPT_LIBS = @LIBGCRYPT_LIBS@ +LIBGNUTLS_CFLAGS = @LIBGNUTLS_CFLAGS@ +LIBGNUTLS_LIBS = @LIBGNUTLS_LIBS@ +LIBOBJS = @LIBOBJS@ + +# Libs must be cleared, or else libtool won't create a shared module. +# If your module needs to be linked against any particular libraries, +# add them here. 
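+# (For example, a plugin that called into zlib would put "LIBS = -lz"
+# here; this dissector needs nothing extra, so the variable stays empty.)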
+LIBS = +LIBSMI_CFLAGS = @LIBSMI_CFLAGS@ +LIBSMI_LDFLAGS = @LIBSMI_LDFLAGS@ +LIBSMI_VERSION = @LIBSMI_VERSION@ +LIBTOOL = @LIBTOOL@ +LIBTOOL_DEPS = @LIBTOOL_DEPS@ +LIPO = @LIPO@ +LN_S = @LN_S@ +LTLIBOBJS = @LTLIBOBJS@ +LUA_INCLUDES = @LUA_INCLUDES@ +LUA_LIBS = @LUA_LIBS@ +LYNX = @LYNX@ +MAKEINFO = @MAKEINFO@ +MKDIR_P = @MKDIR_P@ +NM = @NM@ +NMEDIT = @NMEDIT@ +NSL_LIBS = @NSL_LIBS@ +OBJDUMP = @OBJDUMP@ +OBJEXT = @OBJEXT@ +OTOOL = @OTOOL@ +OTOOL64 = @OTOOL64@ +PACKAGE = @PACKAGE@ +PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ +PACKAGE_NAME = @PACKAGE_NAME@ +PACKAGE_STRING = @PACKAGE_STRING@ +PACKAGE_TARNAME = @PACKAGE_TARNAME@ +PACKAGE_URL = @PACKAGE_URL@ +PACKAGE_VERSION = @PACKAGE_VERSION@ +PATH_SEPARATOR = @PATH_SEPARATOR@ +PCAP_CONFIG = @PCAP_CONFIG@ +PCAP_LIBS = @PCAP_LIBS@ +PCRE_LIBS = @PCRE_LIBS@ +PERL = @PERL@ +PKG_CONFIG = @PKG_CONFIG@ +PLUGIN_LIBS = @PLUGIN_LIBS@ +POD2HTML = @POD2HTML@ +POD2MAN = @POD2MAN@ +PORTAUDIO_INCLUDES = @PORTAUDIO_INCLUDES@ +PORTAUDIO_LIBS = @PORTAUDIO_LIBS@ +PYTHON = @PYTHON@ +PY_CFLAGS = @PY_CFLAGS@ +PY_LIBS = @PY_LIBS@ +RANLIB = @RANLIB@ +SED = @SED@ +SETCAP = @SETCAP@ +SET_MAKE = @SET_MAKE@ +SHELL = @SHELL@ +SOCKET_LIBS = @SOCKET_LIBS@ +SSL_LIBS = @SSL_LIBS@ +STRERROR_LO = @STRERROR_LO@ +STRIP = @STRIP@ +STRNCASECMP_LO = @STRNCASECMP_LO@ +STRPTIME_C = @STRPTIME_C@ +STRPTIME_LO = @STRPTIME_LO@ +VERSION = @VERSION@ +XMLLINT = @XMLLINT@ +XSLTPROC = @XSLTPROC@ +YACC = @YACC@ +YACCDUMMY = @YACCDUMMY@ +YFLAGS = @YFLAGS@ +abs_builddir = @abs_builddir@ +abs_srcdir = @abs_srcdir@ +abs_top_builddir = @abs_top_builddir@ +abs_top_srcdir = @abs_top_srcdir@ +ac_ct_CC = @ac_ct_CC@ +ac_ct_CXX = @ac_ct_CXX@ +ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ +ac_cv_wireshark_have_rpm = @ac_cv_wireshark_have_rpm@ +ac_ws_python_config = @ac_ws_python_config@ +am__include = @am__include@ +am__leading_dot = @am__leading_dot@ +am__quote = @am__quote@ +am__tar = @am__tar@ +am__untar = @am__untar@ +bindir = @bindir@ +build = @build@ +build_alias = @build_alias@ +build_cpu = @build_cpu@ +build_os = @build_os@ +build_vendor = @build_vendor@ +builddir = @builddir@ +capinfos_bin = @capinfos_bin@ +capinfos_man = @capinfos_man@ +datadir = @datadir@ +datarootdir = @datarootdir@ +dftest_bin = @dftest_bin@ +dftest_man = @dftest_man@ +docdir = @docdir@ +dumpcap_bin = @dumpcap_bin@ +dumpcap_man = @dumpcap_man@ +dvidir = @dvidir@ +editcap_bin = @editcap_bin@ +editcap_man = @editcap_man@ +exec_prefix = @exec_prefix@ +host = @host@ +host_alias = @host_alias@ +host_cpu = @host_cpu@ +host_os = @host_os@ +host_vendor = @host_vendor@ +htmldir = @htmldir@ +idl2wrs_bin = @idl2wrs_bin@ +idl2wrs_man = @idl2wrs_man@ +includedir = @includedir@ +infodir = @infodir@ +install_sh = @install_sh@ +libdir = @libdir@ +libexecdir = @libexecdir@ +localedir = @localedir@ +localstatedir = @localstatedir@ +lt_ECHO = @lt_ECHO@ +mandir = @mandir@ +mergecap_bin = @mergecap_bin@ +mergecap_man = @mergecap_man@ +mkdir_p = @mkdir_p@ +oldincludedir = @oldincludedir@ +pdfdir = @pdfdir@ +plugindir = @plugindir@ +prefix = @prefix@ +program_transform_name = @program_transform_name@ +psdir = @psdir@ +pythondir = @pythondir@ +randpkt_bin = @randpkt_bin@ +randpkt_man = @randpkt_man@ +rawshark_bin = @rawshark_bin@ +rawshark_man = @rawshark_man@ +sbindir = @sbindir@ +sharedstatedir = @sharedstatedir@ +srcdir = @srcdir@ +sysconfdir = @sysconfdir@ +target = @target@ +target_alias = @target_alias@ +target_cpu = @target_cpu@ +target_os = @target_os@ +target_vendor = @target_vendor@ +text2pcap_bin = @text2pcap_bin@ +text2pcap_man = @text2pcap_man@ 
+top_build_prefix = @top_build_prefix@ +top_builddir = @top_builddir@ +top_srcdir = @top_srcdir@ +tshark_bin = @tshark_bin@ +tshark_man = @tshark_man@ +wireshark_SUBDIRS = @wireshark_SUBDIRS@ +wireshark_bin = @wireshark_bin@ +wireshark_man = @wireshark_man@ +wiresharkfilter_man = @wiresharkfilter_man@ +INCLUDES = -I$(top_srcdir) -I$(includedir) + +# the name of the plugin +PLUGIN_NAME = swift + +# the dissector sources (without any helpers) +DISSECTOR_SRC = \ + packet-swift.c + + +# Dissector helpers. They're included in the source files in this +# directory, but they're not dissectors themselves, i.e. they're not +# used to generate "plugin.c". +DISSECTOR_SUPPORT_SRC = +@HAVE_WARNINGS_AS_ERRORS_TRUE@AM_CFLAGS = -Werror +plugin_LTLIBRARIES = swift.la +swift_la_SOURCES = \ + plugin.c \ + moduleinfo.h \ + $(DISSECTOR_SRC) \ + $(DISSECTOR_SUPPORT_SRC) \ + $(DISSECTOR_INCLUDES) + +swift_la_LDFLAGS = -module -avoid-version +swift_la_LIBADD = @PLUGIN_LIBS@ + +# +# Currently plugin.c can be included in the distribution because +# we always build all protocol dissectors. We used to have to check +# whether or not to build the snmp dissector. If we again need to +# variably build something, making plugin.c non-portable, uncomment +# the dist-hook line below. +# +# Oh, yuk. We don't want to include "plugin.c" in the distribution, as +# its contents depend on the configuration, and therefore we want it +# to be built when the first "make" is done; however, Automake insists +# on putting *all* source into the distribution. +# +# We work around this by having a "dist-hook" rule that deletes +# "plugin.c", so that "dist" won't pick it up. +# +#dist-hook: +# @rm -f $(distdir)/plugin.c +CLEANFILES = \ + swift \ + *~ + +MAINTAINERCLEANFILES = \ + Makefile.in \ + plugin.c + +EXTRA_DIST = \ + Makefile.common \ + Makefile.nmake \ + moduleinfo.nmake \ + plugin.rc.in \ + CMakeLists.txt + +all: all-am + +.SUFFIXES: +.SUFFIXES: .c .lo .o .obj +$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(srcdir)/Makefile.common $(am__configure_deps) + @for dep in $?; do \ + case '$(am__configure_deps)' in \ + *$$dep*) \ + ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ + && { if test -f $@; then exit 0; else break; fi; }; \ + exit 1;; \ + esac; \ + done; \ + echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu plugins/swift/Makefile'; \ + $(am__cd) $(top_srcdir) && \ + $(AUTOMAKE) --gnu plugins/swift/Makefile +.PRECIOUS: Makefile +Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status + @case '$?' 
in \ + *config.status*) \ + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ + *) \ + echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ + cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ + esac; + +$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh + +$(top_srcdir)/configure: $(am__configure_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(ACLOCAL_M4): $(am__aclocal_m4_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(am__aclocal_m4_deps): +install-pluginLTLIBRARIES: $(plugin_LTLIBRARIES) + @$(NORMAL_INSTALL) + test -z "$(plugindir)" || $(MKDIR_P) "$(DESTDIR)$(plugindir)" + @list='$(plugin_LTLIBRARIES)'; test -n "$(plugindir)" || list=; \ + list2=; for p in $$list; do \ + if test -f $$p; then \ + list2="$$list2 $$p"; \ + else :; fi; \ + done; \ + test -z "$$list2" || { \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(plugindir)'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(plugindir)"; \ + } + +uninstall-pluginLTLIBRARIES: + @$(NORMAL_UNINSTALL) + @list='$(plugin_LTLIBRARIES)'; test -n "$(plugindir)" || list=; \ + for p in $$list; do \ + $(am__strip_dir) \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(plugindir)/$$f'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(plugindir)/$$f"; \ + done + +clean-pluginLTLIBRARIES: + -test -z "$(plugin_LTLIBRARIES)" || rm -f $(plugin_LTLIBRARIES) + @list='$(plugin_LTLIBRARIES)'; for p in $$list; do \ + dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ + test "$$dir" != "$$p" || dir=.; \ + echo "rm -f \"$${dir}/so_locations\""; \ + rm -f "$${dir}/so_locations"; \ + done +swift.la: $(swift_la_OBJECTS) $(swift_la_DEPENDENCIES) + $(AM_V_CCLD)$(swift_la_LINK) -rpath $(plugindir) $(swift_la_OBJECTS) $(swift_la_LIBADD) $(LIBS) + +mostlyclean-compile: + -rm -f *.$(OBJEXT) + +distclean-compile: + -rm -f *.tab.c + +@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/packet-swift.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/plugin.Plo@am__quote@ + +.c.o: +@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< +@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po +@am__fastdepCC_FALSE@ $(AM_V_CC) @AM_BACKSLASH@ +@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ +@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ +@am__fastdepCC_FALSE@ $(COMPILE) -c $< + +.c.obj: +@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` +@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po +@am__fastdepCC_FALSE@ $(AM_V_CC) @AM_BACKSLASH@ +@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ +@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ +@am__fastdepCC_FALSE@ $(COMPILE) -c `$(CYGPATH_W) '$<'` + +.c.lo: +@am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< +@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo +@am__fastdepCC_FALSE@ $(AM_V_CC) @AM_BACKSLASH@ 
+@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ +@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ +@am__fastdepCC_FALSE@ $(LTCOMPILE) -c -o $@ $< + +mostlyclean-libtool: + -rm -f *.lo + +clean-libtool: + -rm -rf .libs _libs + +ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + mkid -fID $$unique +tags: TAGS + +TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ + $(TAGS_FILES) $(LISP) + set x; \ + here=`pwd`; \ + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + shift; \ + if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ + test -n "$$unique" || unique=$$empty_fix; \ + if test $$# -gt 0; then \ + $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ + "$$@" $$unique; \ + else \ + $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ + $$unique; \ + fi; \ + fi +ctags: CTAGS +CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ + $(TAGS_FILES) $(LISP) + list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ + unique=`for i in $$list; do \ + if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ + done | \ + $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ + END { if (nonempty) { for (i in files) print i; }; }'`; \ + test -z "$(CTAGS_ARGS)$$unique" \ + || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ + $$unique + +GTAGS: + here=`$(am__cd) $(top_builddir) && pwd` \ + && $(am__cd) $(top_srcdir) \ + && gtags -i $(GTAGS_ARGS) "$$here" + +distclean-tags: + -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags + +distdir: $(DISTFILES) + @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + list='$(DISTFILES)'; \ + dist_files=`for file in $$list; do echo $$file; done | \ + sed -e "s|^$$srcdirstrip/||;t" \ + -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ + case $$dist_files in \ + */*) $(MKDIR_P) `echo "$$dist_files" | \ + sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ + sort -u` ;; \ + esac; \ + for file in $$dist_files; do \ + if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ + if test -d $$d/$$file; then \ + dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ + if test -d "$(distdir)/$$file"; then \ + find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ + cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ + find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ + else \ + test -f "$(distdir)/$$file" \ + || cp -p $$d/$$file "$(distdir)/$$file" \ + || exit 1; \ + fi; \ + done +check-am: all-am +check: check-am +all-am: Makefile $(LTLIBRARIES) +installdirs: + for dir in "$(DESTDIR)$(plugindir)"; do \ + test -z "$$dir" || $(MKDIR_P) "$$dir"; \ + done +install: install-am +install-exec: install-exec-am +install-data: install-data-am +uninstall: uninstall-am + +install-am: all-am + @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am + +installcheck: installcheck-am +install-strip: + $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ + install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ + `test -z '$(STRIP)' || \ + echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install +mostlyclean-generic: + +clean-generic: + -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) + +distclean-generic: + -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) + -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) + +maintainer-clean-generic: + @echo "This command is intended for maintainers to use" + @echo "it deletes files that may require special tools to rebuild." + -test -z "$(MAINTAINERCLEANFILES)" || rm -f $(MAINTAINERCLEANFILES) +clean: clean-am + +clean-am: clean-generic clean-libtool clean-pluginLTLIBRARIES \ + mostlyclean-am + +distclean: distclean-am + -rm -rf ./$(DEPDIR) + -rm -f Makefile +distclean-am: clean-am distclean-compile distclean-generic \ + distclean-tags + +dvi: dvi-am + +dvi-am: + +html: html-am + +html-am: + +info: info-am + +info-am: + +install-data-am: install-pluginLTLIBRARIES + +install-dvi: install-dvi-am + +install-dvi-am: + +install-exec-am: + +install-html: install-html-am + +install-html-am: + +install-info: install-info-am + +install-info-am: + +install-man: + +install-pdf: install-pdf-am + +install-pdf-am: + +install-ps: install-ps-am + +install-ps-am: + +installcheck-am: + +maintainer-clean: maintainer-clean-am + -rm -rf ./$(DEPDIR) + -rm -f Makefile +maintainer-clean-am: distclean-am maintainer-clean-generic + +mostlyclean: mostlyclean-am + +mostlyclean-am: mostlyclean-compile mostlyclean-generic \ + mostlyclean-libtool + +pdf: pdf-am + +pdf-am: + +ps: ps-am + +ps-am: + +uninstall-am: uninstall-pluginLTLIBRARIES + +.MAKE: install-am install-strip + +.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \ + clean-libtool clean-pluginLTLIBRARIES ctags distclean \ + distclean-compile distclean-generic distclean-libtool \ + distclean-tags distdir dvi dvi-am html html-am info info-am \ + install install-am install-data install-data-am install-dvi \ + install-dvi-am install-exec install-exec-am install-html \ + install-html-am install-info install-info-am install-man \ + install-pdf install-pdf-am install-pluginLTLIBRARIES \ + install-ps install-ps-am install-strip installcheck \ + installcheck-am installdirs maintainer-clean \ + maintainer-clean-generic mostlyclean mostlyclean-compile \ + mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ + tags uninstall uninstall-am uninstall-pluginLTLIBRARIES + + +# +# Build plugin.c, which contains the plugin version[] string, a +# function plugin_register() that calls the register routines for all +# protocols, and a function plugin_reg_handoff() that calls the handoff +# registration routines for all protocols. +# +# We do this by scanning sources. 
If that turns out to be too slow,
+# maybe we could just require every .o file to have a register routine
+# of a given name (packet-aarp.o -> proto_register_aarp, etc.).
+#
+# Formatting conventions: The name of the proto_register_* routines and
+# proto_reg_handoff_* routines must start in column zero, or must be
+# preceded only by "void " starting in column zero, and must not be
+# inside #if.
+#
+# DISSECTOR_SRC is assumed to have all the files that need to be scanned.
+#
+# For some unknown reason, having a big "for" loop in the Makefile
+# to scan all the files doesn't work with some "make"s; they seem to
+# pass only the first few names in the list to the shell, for some
+# reason.
+#
+# Therefore, we have a script to generate the plugin.c file.
+# The shell script runs slowly, as multiple greps and seds are run
+# for each input file; this is especially slow on Windows. Therefore,
+# if Python is present (as indicated by PYTHON being defined), we run
+# a faster Python script to do that work instead.
+#
+# The first argument is the directory in which the source files live.
+# The second argument is "plugin", to indicate that we should build
+# a plugin.c file for a plugin.
+# All subsequent arguments are the files to scan.
+#
+plugin.c: $(DISSECTOR_SRC) $(top_srcdir)/tools/make-dissector-reg \
+	$(top_srcdir)/tools/make-dissector-reg.py
+	@if test -n "$(PYTHON)"; then \
+		echo Making plugin.c with python ; \
+		$(PYTHON) $(top_srcdir)/tools/make-dissector-reg.py $(srcdir) \
+		    plugin $(DISSECTOR_SRC) ; \
+	else \
+		echo Making plugin.c with shell script ; \
+		$(top_srcdir)/tools/make-dissector-reg $(srcdir) \
+		    $(plugin_src) plugin $(DISSECTOR_SRC) ; \
+	fi
+
+checkapi:
+	$(PERL) $(top_srcdir)/tools/checkAPIs.pl -g abort -g termoutput $(DISSECTOR_SRC) $(DISSECTOR_INCLUDES)
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.nmake tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.nmake
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.nmake	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/Makefile.nmake	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,103 @@
+# Makefile.nmake
+# nmake file for Wireshark plugin
+#
+# $Id: Makefile.nmake 29883 2009-09-13 19:48:22Z morriss $
+#
+
+include ..\..\config.nmake
+include moduleinfo.nmake
+
+include Makefile.common
+
+CFLAGS=/WX /DHAVE_CONFIG_H /I../.. $(GLIB_CFLAGS) \
+	/I$(PCAP_DIR)\include -D_U_="" $(LOCAL_CFLAGS)
+
+.c.obj::
+	$(CC) $(CFLAGS) -Fd.\ -c $<
+
+LDFLAGS = $(PLUGIN_LDFLAGS)
+
+!IFDEF ENABLE_LIBWIRESHARK
+LINK_PLUGIN_WITH=..\..\epan\libwireshark.lib
+CFLAGS=/D_NEED_VAR_IMPORT_ $(CFLAGS)
+
+DISSECTOR_OBJECTS = $(DISSECTOR_SRC:.c=.obj)
+
+DISSECTOR_SUPPORT_OBJECTS = $(DISSECTOR_SUPPORT_SRC:.c=.obj)
+
+OBJECTS = $(DISSECTOR_OBJECTS) $(DISSECTOR_SUPPORT_OBJECTS) plugin.obj
+
+RESOURCE=$(PLUGIN_NAME).res
+
+all: $(PLUGIN_NAME).dll
+
+$(PLUGIN_NAME).rc : moduleinfo.nmake
+	sed -e s/@PLUGIN_NAME@/$(PLUGIN_NAME)/ \
+	-e s/@RC_MODULE_VERSION@/$(RC_MODULE_VERSION)/ \
+	-e s/@RC_VERSION@/$(RC_VERSION)/ \
+	-e s/@MODULE_VERSION@/$(MODULE_VERSION)/ \
+	-e s/@PACKAGE@/$(PACKAGE)/ \
+	-e s/@VERSION@/$(VERSION)/ \
+	-e s/@MSVC_VARIANT@/$(MSVC_VARIANT)/ \
+	< plugin.rc.in > $@
+
+$(PLUGIN_NAME).dll $(PLUGIN_NAME).exp $(PLUGIN_NAME).lib : $(OBJECTS) $(LINK_PLUGIN_WITH) $(RESOURCE)
+	link -dll /out:$(PLUGIN_NAME).dll $(LDFLAGS) $(OBJECTS) $(LINK_PLUGIN_WITH) \
+	$(GLIB_LIBS) $(RESOURCE)
+
+#
+# Build plugin.c, which contains the plugin version[] string, a
+# function plugin_register() that calls the register routines for all
+# protocols, and a function plugin_reg_handoff() that calls the handoff
+# registration routines for all protocols.
+#
+# We do this by scanning sources. If that turns out to be too slow,
+# maybe we could just require every .o file to have a register routine
+# of a given name (packet-aarp.o -> proto_register_aarp, etc.).
+#
+# Formatting conventions: The name of the proto_register_* routines and
+# proto_reg_handoff_* routines must start in column zero, or must be
+# preceded only by "void " starting in column zero, and must not be
+# inside #if.
+#
+# DISSECTOR_SRC is assumed to have all the files that need to be scanned.
+#
+# For some unknown reason, having a big "for" loop in the Makefile
+# to scan all the files doesn't work with some "make"s; they seem to
+# pass only the first few names in the list to the shell, for some
+# reason.
+#
+# Therefore, we have a script to generate the plugin.c file.
+# The shell script runs slowly, as multiple greps and seds are run
+# for each input file; this is especially slow on Windows. Therefore,
+# if Python is present (as indicated by PYTHON being defined), we run
+# a faster Python script to do that work instead.
+#
+# The first argument is the directory in which the source files live.
+# The second argument is "plugin", to indicate that we should build
+# a plugin.c file for a plugin.
+# All subsequent arguments are the files to scan.
+#
+!IFDEF PYTHON
+plugin.c: $(DISSECTOR_SRC) moduleinfo.h ../../tools/make-dissector-reg.py
+	@echo Making plugin.c (using python)
+	@$(PYTHON) "../../tools/make-dissector-reg.py" . plugin $(DISSECTOR_SRC)
+!ELSE
+plugin.c: $(DISSECTOR_SRC) moduleinfo.h ../../tools/make-dissector-reg
+	@echo Making plugin.c (using sh)
+	@$(SH) ../../tools/make-dissector-reg .
plugin $(DISSECTOR_SRC) +!ENDIF + +!ENDIF + +clean: + rm -f $(OBJECTS) $(RESOURCE) plugin.c *.pdb \ + $(PLUGIN_NAME).dll $(PLUGIN_NAME).dll.manifest $(PLUGIN_NAME).lib \ + $(PLUGIN_NAME).exp $(PLUGIN_NAME).rc + +distclean: clean + +maintainer-clean: distclean + +checkapi: + $(PERL) ../../tools/checkAPIs.pl -g abort -g termoutput $(DISSECTOR_SRC) $(DISSECTOR_INCLUDES) diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/README tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/README --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/README 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/README 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,10 @@ +This is a protocol dissector for swift: the multiparty transport protocol +(http://libswift.org) + +For instructions on how to include this plugin in a Wireshark build, see +Wireshark's /doc/README.developer + +If you are new to Wireshark protocol dissectors, take a look at +http://www.wireshark.org/docs/wsdg_html_chunked/ChDissectAdd.html + +Author: Andrew Keating diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.h tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.h --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.h 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,17 @@ +/* Included *after* config.h, in order to re-define these macros */ + +#ifdef PACKAGE +#undef PACKAGE +#endif + +/* Name of package */ +#define PACKAGE "swift" + + +#ifdef VERSION +#undef VERSION +#endif + +/* Version number of package */ +#define VERSION "0.0.1" + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.nmake tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.nmake --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.nmake 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/moduleinfo.nmake 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,28 @@ +# +# $Id$ +# + +# The name +PACKAGE=swift + +# The version +MODULE_VERSION_MAJOR=0 +MODULE_VERSION_MINOR=0 +MODULE_VERSION_MICRO=1 +MODULE_VERSION_EXTRA=0 + +# +# The RC_VERSION should be comma-separated, not dot-separated, +# as per Graham Bloice's message in +# +# http://www.ethereal.com/lists/ethereal-dev/200303/msg00283.html +# +# "The RC_VERSION variable in config.nmake should be comma separated. +# This allows the resources to be built correctly and the version +# number to be correctly displayed in the explorer properties dialog +# for the executables, and XP's tooltip, rather than 0.0.0.0." 
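+#
+# With the 0.0.1.0 module version defined above, MODULE_VERSION below
+# therefore expands to the dot-separated "0.0.1.0", while RC_MODULE_VERSION
+# expands to the comma-separated "0,0,1,0" that the FILEVERSION directive
+# in plugin.rc.in expects.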
+#
+
+MODULE_VERSION=$(MODULE_VERSION_MAJOR).$(MODULE_VERSION_MINOR).$(MODULE_VERSION_MICRO).$(MODULE_VERSION_EXTRA)
+RC_MODULE_VERSION=$(MODULE_VERSION_MAJOR),$(MODULE_VERSION_MINOR),$(MODULE_VERSION_MICRO),$(MODULE_VERSION_EXTRA)
+
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/packet-swift.c tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/packet-swift.c
--- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/packet-swift.c	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/packet-swift.c	2013-08-07 12:50:12.000000000 +0000
@@ -0,0 +1,383 @@
+/* packet-swift.c
+ * Routines for swift protocol packet disassembly
+ * By Andrew Keating
+ * Copyright 2011 Andrew Keating
+ *
+ * $Id$
+ *
+ * Wireshark - Network traffic analyzer
+ * By Gerald Combs <gerald@wireshark.org>
+ * Copyright 1998 Gerald Combs
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#ifdef HAVE_CONFIG_H
+# include "config.h"
+#endif
+
+#include <epan/packet.h>
+
+static int proto_swift = -1;
+
+/* Global fields */
+static int hf_swift_receiving_channel = -1;
+static int hf_swift_message_type = -1;
+
+/* 00 Handshake fields */
+static int hf_swift_handshake_channel = -1;
+
+/* 01 Data fields */
+static int hf_swift_data_bin_id = -1;
+static int hf_swift_data_payload = -1;
+
+/* 02 Ack fields */
+static int hf_swift_ack_bin_id = -1;
+static int hf_swift_ack_timestamp = -1;
+
+/* 03 Have fields */
+static int hf_swift_have_bin_id = -1;
+
+/* 04 Hash fields */
+static int hf_swift_hash_bin_id = -1;
+static int hf_swift_hash_value = -1;
+
+/* 05 PEX+ fields */
+static int hf_swift_pexplus_ip = -1;
+static int hf_swift_pexplus_port = -1;
+
+/* 06 PEX- fields */
+static int hf_swift_pexminus_ip = -1;
+static int hf_swift_pexminus_port = -1;
+
+/* 07 Signed hash fields */
+static int hf_swift_signed_hash_bin_id = -1;
+static int hf_swift_signed_hash_value = -1;
+static int hf_swift_signed_hash_signature = -1;
+
+/* 08 Hint fields */
+static int hf_swift_hint_bin_id = -1;
+
+static gint ett_swift = -1;
+
+static void dissect_swift(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree);
+static gboolean dissect_swift_heur(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree);
+
+static const value_string message_type_names[] = {
+    { 0, "Handshake" },
+    { 1, "Data" },
+    { 2, "Ack" },
+    { 3, "Have" },
+    { 4, "Hash" },
+    { 5, "PEX+" },
+    { 6, "PEX-" },
+    { 7, "Signed Hash" },
+    { 8, "Hint" },
+    { 9, "SWIFT_MSGTYPE_RCVD" },
+    { 10, "SWIFT_MESSAGE_COUNT" },
+    { 0, NULL}
+};
+
+
+void
+proto_register_swift(void)
+{
+    static hf_register_info hf[] = {
+        /* Global */
+        { &hf_swift_receiving_channel,
+            { "Receiving Channel", "swift.receiving.channel",
+            FT_UINT32, BASE_HEX,
+            NULL, 0x0,
+            NULL, HFILL }
+        },
+        { &hf_swift_message_type,
+            { "Message Type", "swift.message.type",
+            FT_UINT8, BASE_DEC,
+            VALS(message_type_names), 0x0,
+ NULL, HFILL } + }, + + /* 00 Handshake */ + { &hf_swift_handshake_channel, + { "Handshake Channel", "swift.handshake.channel", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + + /* 01 Data */ + { &hf_swift_data_bin_id, + { "Data Bin ID", "swift.data.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_data_payload, + { "Data Payload", "swift.data.payload", + FT_BYTES, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + + /* 02 Ack */ + { &hf_swift_ack_bin_id, + { "Ack Bin ID", "swift.ack.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_ack_timestamp, + { "Timestamp", "swift.ack.timestamp", + FT_UINT64, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + + /* 03 Have */ + { &hf_swift_have_bin_id, + { "Have Bin ID", "swift.have.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + + /* 04 Hash */ + { &hf_swift_hash_bin_id, + { "Hash Bin ID", "swift.hash.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_hash_value, + { "Hash Value", "swift.hash.value", + FT_BYTES, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + + /* 05 PEX+ */ + { &hf_swift_pexplus_ip, + { "PEX+ IP Address", "swift.pex_plus.ip", + FT_IPv4, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_pexplus_port, + { "PEX+ Port", "swift.pex_plus.port", + FT_UINT16, BASE_DEC, + NULL, 0x0, + NULL, HFILL } + }, + + /* 06 PEX- */ + { &hf_swift_pexminus_ip, + { "PEX- IP Address", "swift.pex_minus.ip", + FT_IPv4, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_pexminus_port, + { "PEX- Port", "swift.pex_minus.port", + FT_UINT16, BASE_DEC, + NULL, 0x0, + NULL, HFILL } + }, + + /* 07 Signed Hash */ + { &hf_swift_signed_hash_bin_id, + { "Signed Hash Bin ID", "swift.signed_hash.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_signed_hash_value, + { "Signed Hash Value", "swift.signed_hash.value", + FT_BYTES, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + { &hf_swift_signed_hash_signature, + { "Signed Hash Signature", "swift.signed_hash.signature", + FT_BYTES, BASE_NONE, + NULL, 0x0, + NULL, HFILL } + }, + + /* 08 Hint */ + { &hf_swift_hint_bin_id, + { "Hint Bin ID", "swift.hint.bin_id", + FT_UINT32, BASE_HEX, + NULL, 0x0, + NULL, HFILL } + }, + }; + + /* Setup protocol subtree array */ + static gint *ett[] = { + &ett_swift + }; + + proto_swift = proto_register_protocol ( + "swift: the multiparty transport protocol", /* name */ + "swift", /* short name */ + "swift" /* abbrev */ + ); + + proto_register_field_array(proto_swift, hf, array_length(hf)); + proto_register_subtree_array(ett, array_length(ett)); + register_dissector("swift", dissect_swift, proto_swift); +} + +void +proto_reg_handoff_swift(void) +{ + dissector_handle_t swift_handle; + swift_handle = find_dissector("swift"); + + /* Allow "Decode As" with any UDP packet. */ + dissector_add_handle("udp.port", swift_handle); + + /* Add our heuristic packet finder. 
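+       Every swift datagram begins with a 4-byte receiving channel; a
+       datagram of exactly 4 bytes is a keep-alive, and anything longer
+       must continue with a known message type (0 through 10), which is
+       the sanity check dissect_swift_heur() performs below before
+       claiming a UDP packet.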
*/ + heur_dissector_add("udp", dissect_swift_heur, proto_swift); +} + +/* This heuristic is somewhat ambiguous, but for research purposes, it should be fine */ +static gboolean +dissect_swift_heur(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree) +{ + guint message_length; + message_length = tvb_length(tvb); + /* If the fifth byte isn't one of the supported packet types, it's not swift (except keep-alives) */ + if(message_length != 4) { + guint8 message_type; + message_type = tvb_get_guint8(tvb, 4); + if(message_type > 10) { + return FALSE; + } + } + + dissect_swift(tvb, pinfo, tree); + return TRUE; +} + +static void +dissect_swift(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree) +{ + gint offset = 0; + col_set_str(pinfo->cinfo, COL_PROTOCOL, "swift"); + /* Clear out stuff in the info column */ + col_clear(pinfo->cinfo,COL_INFO); + + if (tree) { /* we are being asked for details */ + proto_item *ti; + ti = proto_tree_add_item(tree, proto_swift, tvb, 0, -1, FALSE); + + proto_tree *swift_tree; + swift_tree = proto_item_add_subtree(ti, ett_swift); + + /* All messages start with the receiving channel, so we can pull it out here */ + proto_tree_add_item(swift_tree, hf_swift_receiving_channel, tvb, offset, 4, FALSE); offset += 4; + + /* Loop until there is nothing left to read in the packet */ + while(tvb_bytes_exist(tvb, offset, 1)) { + guint8 message_type; + guint dat_len; + message_type = tvb_get_guint8(tvb, offset); + proto_tree_add_item(swift_tree, hf_swift_message_type, tvb, offset, 1, FALSE); + offset += 1; + + /* Add message type to the info column */ + if(offset > 5) { + col_append_fstr(pinfo->cinfo, COL_INFO, ", "); + } + col_append_fstr(pinfo->cinfo, COL_INFO, "%s", + val_to_str(message_type, message_type_names, "Unknown (0x%02x)")); + + /* Add it to the dissection window as well */ + proto_item_append_text(ti, ", %s", + val_to_str(message_type, message_type_names, "Unknown (0x%02x)")); + + switch(message_type) { + case 0: /* Handshake */ + proto_tree_add_item(swift_tree, hf_swift_handshake_channel, tvb, offset, 4, FALSE); + offset += 4; + break; + case 1: /* Data */ + proto_tree_add_item(swift_tree, hf_swift_data_bin_id, tvb, offset, 4, FALSE); + offset += 4; + /* We assume that the data field comprises the rest of this packet */ + dat_len = tvb_length(tvb) - offset; + proto_tree_add_item(swift_tree, hf_swift_data_payload, tvb, offset, dat_len, FALSE); + offset += dat_len; + break; + case 2: /* Ack */ + proto_tree_add_item(swift_tree, hf_swift_ack_bin_id, tvb, offset, 4, FALSE); + offset += 4; + proto_tree_add_item(swift_tree, hf_swift_ack_timestamp, tvb, offset, 8, FALSE); + offset += 8; + break; + case 3: /* Have */ + proto_tree_add_item(swift_tree, hf_swift_have_bin_id, tvb, offset, 4, FALSE); + offset += 4; + break; + case 4: /* Hash */ + proto_tree_add_item(swift_tree, hf_swift_hash_bin_id, tvb, offset, 4, FALSE); + offset += 4; + proto_tree_add_item(swift_tree, hf_swift_hash_value, tvb, offset, 20, FALSE); + offset += 20; + break; + case 5: /* PEX+ */ + proto_tree_add_item(swift_tree, hf_swift_pexplus_ip, tvb, offset, 4, FALSE); + offset += 4; + proto_tree_add_item(swift_tree, hf_swift_pexplus_port, tvb, offset, 2, FALSE); + offset += 2; + break; + case 6: /* PEX- */ + proto_tree_add_item(swift_tree, hf_swift_pexminus_ip, tvb, offset, 4, FALSE); + offset += 4; + proto_tree_add_item(swift_tree, hf_swift_pexminus_port, tvb, offset, 2, FALSE); + offset += 2; + break; + case 7: /* Signed Hash */ + proto_tree_add_item(swift_tree, hf_swift_signed_hash_bin_id, tvb, offset, 4, 
FALSE); + offset += 4; + proto_tree_add_item(swift_tree, hf_swift_signed_hash_value, tvb, offset, 20, FALSE); + offset += 20; + /* It is not entirely clear what size the public key will be, so we allow any size + For this to work, we must assume there aren't any more messages in the packet */ + dat_len = tvb_length(tvb) - offset; + proto_tree_add_item(swift_tree, hf_swift_signed_hash_signature, tvb, offset, dat_len, FALSE); + offset += dat_len; + break; + case 8: /* Hint */ + proto_tree_add_item(swift_tree, hf_swift_hint_bin_id, tvb, offset, 4, FALSE); + offset += 4; + break; + case 9: /* SWIFT_MSGTYPE_RCVD */ + break; + case 10: /* SWIFT_MESSAGE_COUNT */ + break; + default: + break; + } + } + /* If the offset is still 4 here, the message is a keep-alive */ + if(offset == 4) { + col_append_fstr(pinfo->cinfo, COL_INFO, "Keep-Alive"); + proto_item_append_text(ti, ", Keep-Alive"); + } + } +} + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.c tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.c --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.c 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.c 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,31 @@ +/* + * Do not modify this file. + * + * It is created automatically by Makefile or Makefile.nmake. + */ + +#ifdef HAVE_CONFIG_H +# include "config.h" +#endif + +#include + +#include "moduleinfo.h" + +#ifndef ENABLE_STATIC +G_MODULE_EXPORT const gchar version[] = VERSION; + +/* Start the functions we need for the plugin stuff */ + +G_MODULE_EXPORT void +plugin_register (void) +{ + {extern void proto_register_swift (void); proto_register_swift ();} +} + +G_MODULE_EXPORT void +plugin_reg_handoff(void) +{ + {extern void proto_reg_handoff_swift (void); proto_reg_handoff_swift ();} +} +#endif diff -Nru tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.rc.in tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.rc.in --- tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.rc.in 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/doc/wireshark-dissector/plugin.rc.in 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,34 @@ +#include "winver.h" + +VS_VERSION_INFO VERSIONINFO + FILEVERSION @RC_MODULE_VERSION@ + PRODUCTVERSION @RC_VERSION@ + FILEFLAGSMASK 0x0L +#ifdef _DEBUG + FILEFLAGS VS_FF_DEBUG +#else + FILEFLAGS 0 +#endif + FILEOS VOS_NT_WINDOWS32 + FILETYPE VFT_DLL +BEGIN + BLOCK "StringFileInfo" + BEGIN + BLOCK "040904b0" + BEGIN + VALUE "CompanyName", "The Wireshark developer community, http://www.wireshark.org/\0" + VALUE "FileDescription", "@PACKAGE@ dissector\0" + VALUE "FileVersion", "@MODULE_VERSION@\0" + VALUE "InternalName", "@PACKAGE@ @MODULE_VERSION@\0" + VALUE "LegalCopyright", "Copyright © 1998 Gerald Combs , Gilbert Ramirez and others\0" + VALUE "OriginalFilename", "@PLUGIN_NAME@.dll\0" + VALUE "ProductName", "Wireshark\0" + VALUE "ProductVersion", "@VERSION@\0" + VALUE "Comments", "Build with @MSVC_VARIANT@\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x409, 1200 + END +END diff -Nru tribler-6.2.0/Tribler/SwiftEngine/ext/seq_picker.cpp tribler-6.2.0/Tribler/SwiftEngine/ext/seq_picker.cpp --- tribler-6.2.0/Tribler/SwiftEngine/ext/seq_picker.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/ext/seq_picker.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,81 @@ +/* + * seq_picker.cpp + * swift + * + * Created by Victor 
Grishchenko on 10/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#include "swift.h" +#include + +using namespace swift; + + +/** Picks pieces nearly sequentialy; some local randomization (twisting) + is introduced to prevent synchronization among multiple channels. */ +class SeqPiecePicker : public PiecePicker { + + binmap_t ack_hint_out_; + tbqueue hint_out_; + FileTransfer* transfer_; + uint64_t twist_; + bin_t range_; + +public: + + SeqPiecePicker (FileTransfer* file_to_pick_from) : ack_hint_out_(), + transfer_(file_to_pick_from), twist_(0), range_(bin_t::ALL) { + binmap_t::copy(ack_hint_out_, *(hashtree()->ack_out())); + } + virtual ~SeqPiecePicker() {} + + HashTree * hashtree() { + return transfer_->hashtree(); + } + + virtual void Randomize (uint64_t twist) { + twist_ = twist; + } + + virtual void LimitRange (bin_t range) { + range_ = range; + } + + virtual bin_t Pick (binmap_t& offer, uint64_t max_width, tint expires) { + while (hint_out_.size() && hint_out_.front().timeack_out()), hint_out_.front().bin); + hint_out_.pop_front(); + } + if (!hashtree()->size()) { + return bin_t(0,0); // whoever sends it first + // Arno, 2011-06-28: Partial fix by Victor. exact_size_known() missing + //} else if (!hashtree()->exact_size_known()) { + // return bin64_t(0,(hashtree()->size()>>10)-1); // dirty + } + retry: // bite me + twist_ &= (hashtree()->peak(0).toUInt()) & ((1<<6)-1); + + bin_t hint = binmap_t::find_complement(ack_hint_out_, offer, twist_); + if (hint.is_none()) { + return hint; // TODO: end-game mode + } + + if (!hashtree()->ack_out()->is_empty(hint)) { // unhinted/late data + binmap_t::copy(ack_hint_out_, *(hashtree()->ack_out()), hint); + goto retry; + } + while (hint.base_length()>max_width) + hint = hint.left(); + assert(ack_hint_out_.is_empty(hint)); + ack_hint_out_.set(hint); + hint_out_.push_back(tintbin(NOW,hint)); + return hint; + } + + int Seek(bin_t offbin, int whence) + { + return -1; + } +}; diff -Nru tribler-6.2.0/Tribler/SwiftEngine/ext/simple_selector.cpp tribler-6.2.0/Tribler/SwiftEngine/ext/simple_selector.cpp --- tribler-6.2.0/Tribler/SwiftEngine/ext/simple_selector.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/ext/simple_selector.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,38 @@ +/* + * simple_selector.cpp + * swift + * + * Created by Victor Grishchenko on 10/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#include +#include "swift.h" + +using namespace swift; + +class SimpleSelector : public PeerSelector { + typedef std::pair memo_t; + typedef std::deque peer_queue_t; + peer_queue_t peers; +public: + SimpleSelector () { + } + void AddPeer (const Address& addr, const Sha1Hash& root) { + peers.push_front(memo_t(addr,root)); //,root.fingerprint() !!! 
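+        // Newest entries go to the front, so GetPeer() below prefers the
+        // most recently added address for a swarm. A hypothetical usage
+        // sketch (caller and names assumed, not part of this file):
+        //   SimpleSelector sel;
+        //   sel.AddPeer(someAddress, someRootHash);
+        //   Address a = sel.GetPeer(someRootHash); // yields someAddress once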
+ } + Address GetPeer (const Sha1Hash& for_root) { + //uint32_t fp = for_root.fingerprint(); + for(peer_queue_t::iterator i=peers.begin(); i!=peers.end(); i++) + if (i->second==for_root) { + i->second = Sha1Hash::ZERO; // horror TODO rewrite + sockaddr_in ret = i->first; + while (peers.begin()->second==Sha1Hash::ZERO) + peers.pop_front(); + return ret; + } + return Address(); + } +}; + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/ext/vod_picker.cpp tribler-6.2.0/Tribler/SwiftEngine/ext/vod_picker.cpp --- tribler-6.2.0/Tribler/SwiftEngine/ext/vod_picker.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/ext/vod_picker.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,323 @@ +/* + * vod_picker.cpp + * swift + * + * Created by Riccardo Petrocco. + * Copyright 2009-2012 Delft University of Technology. All rights reserved. + * + */ + +#include "swift.h" +#include + +using namespace swift; + +#define HIGHPRIORITYWINDOW 45000; // initial high priority window in bin unit +#define MIDPRIORITYWINDOW 4; // proportion of the mid priority window compared to the high pri. one + +/** Picks pieces in VoD fashion. The stream is divided in three priority + * sets based on the current playback position. In the high priority set + * bins are selected in order, while on the medium and low priority sets + * in a rarest fist fashion */ +class VodPiecePicker : public PiecePicker { + + binmap_t ack_hint_out_; + tbqueue hint_out_; + FileTransfer* transfer_; + Availability* avail_; + uint64_t twist_; + bin_t range_; + int playback_pos_; // playback position in KB + int high_pri_window_; + bin_t initseq_; // Hack by Arno to avoid large hints at startup + +public: + + VodPiecePicker (FileTransfer* file_to_pick_from) : ack_hint_out_(), + transfer_(file_to_pick_from), twist_(0), range_(bin_t::ALL), initseq_(0,0) + { + avail_ = &(transfer_->availability()); + binmap_t::copy(ack_hint_out_, *(hashtree()->ack_out())); + playback_pos_ = -1; + high_pri_window_ = HIGHPRIORITYWINDOW; + } + + virtual ~VodPiecePicker() {} + + HashTree * hashtree() { + return transfer_->hashtree(); + } + + virtual void Randomize (uint64_t twist) { + // Arno, 2012-03-21: After consulting with Riccardo, disable + // twisting for VOD PP. Randomization of peers over the bitmap + // is already guaranteed by the ack_hint_out_ (prevents double requesting) + // and rarest first. + } + + virtual void LimitRange (bin_t range) { + range_ = range; + } + + + bin_t getTopBin(bin_t bin, uint64_t start, uint64_t size) + { + while (bin.parent().base_length() <= size && bin.parent().base_left() >= bin_t(start)) + { + bin.to_parent(); + } + return bin; + } + + + bin_t pickUrgent (binmap_t& offer, uint64_t max_width, uint64_t size) { + + bin_t curr = bin_t((playback_pos_+1)<<1); // the base bin will be indexed by the double of the value (bin(4) == bin(0,2)) + bin_t hint = bin_t::NONE; + uint64_t examined = 0; + binmap_t binmap; + + // report the first bin we find + while (hint.is_none() && examined < size) + { + curr = getTopBin(curr, (playback_pos_+1)<<1, size-examined); + if (!ack_hint_out_.is_filled(curr)) + { + binmap.fill(offer); + binmap_t::copy(binmap, ack_hint_out_, curr); + hint = binmap_t::find_complement(binmap, offer, twist_); + binmap.clear(); + } + examined += curr.base_length(); + curr = bin_t(0, curr.base_right().layer_offset()+1 ); + } + + if (!hint.is_none()) + while (hint.base_length()>max_width && !hint.is_base()) // Arno,2012-01-17: stop! 
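+                // (narrow the hint by descending into left children until
+                //  it spans at most max_width base-layer chunks)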
+ hint.to_left(); + + return hint; + } + + + bin_t pickRarest (binmap_t& offer, uint64_t max_width, uint64_t start, uint64_t size) { + + //fprintf(stderr,"%s #1 Picker -> choosing from mid/low priority \n",tintstr()); + bin_t curr = bin_t(start<<1); + bin_t hint = bin_t::NONE; + uint64_t examined = 0; + //uint64_t size = end-start; + bin_t rarest_hint = bin_t::NONE; + // TODO remove.. + binmap_t binmap; + + // TODO.. this is the dummy version... put some logic in deciding what to DL + while (examined < size) + { + curr = getTopBin(curr, start<<1, size-examined); + + if (!ack_hint_out_.is_filled(curr)) + { + // remove + //binmap_t::copy(binmap, offer); + //binmap.reset(curr); + + binmap.fill(offer); + binmap_t::copy(binmap, ack_hint_out_, curr); + //hint = binmap_t::find_complement(ack_hint_out_, offer, curr, twist_); + hint = binmap_t::find_complement(binmap, offer, twist_); + binmap.clear(); + + if (!hint.is_none()) + { + if (avail_->size()) + { + rarest_hint = avail_->get(rarest_hint) < avail_->get(hint) ? rarest_hint : hint; + } + else + { + examined = size; + rarest_hint = hint; + } + } + } + + examined += curr.base_length(); + curr = bin_t(0, curr.base_right().layer_offset()+1 ); + + } + + if (!rarest_hint.is_none()) + { + if (avail_->size()) + rarest_hint = avail_->getRarest(rarest_hint, max_width); + else + while (rarest_hint.base_length()>max_width && !rarest_hint.is_base()) // Arno,2012-01-17: stop! + rarest_hint.to_left(); + } + + return rarest_hint; + } + + + virtual bin_t Pick (binmap_t& offer, uint64_t max_width, tint expires) + { + bin_t hint; + bool retry; + char tmp[32]; + char set = 'X'; // TODO remove set var, only used for debug + + // TODO check... the seconds should depend on previous speed of the peer + while (hint_out_.size() && hint_out_.front().timeack_out()), hint_out_.front().bin); + hint_out_.pop_front(); + } + + // get the first piece to estimate the size, whoever sends it first + if (!hashtree()->size()) { + + return bin_t(0,0); + } + else if (hashtree()->ack_out()->is_empty(bin_t(0,0))) + { + // Arno, 2012-05-03: big initial hint avoidance hack: + // Just ask sequentially till first chunk in. + initseq_ = bin_t(initseq_.layer(),initseq_.layer_offset()+1); + return initseq_; + } + + do { + uint64_t max_size = hashtree()->size_in_chunks() - playback_pos_ - 1; + max_size = high_pri_window_ < max_size ? high_pri_window_ : max_size; + + // check the high priority window for data we r missing + hint = pickUrgent(offer, max_width, max_size); + + // check the mid priority window + uint64_t start = (1 + playback_pos_) + HIGHPRIORITYWINDOW; // start in KB + if (hint.is_none() && start < hashtree()->size_in_chunks()) + { + int mid = MIDPRIORITYWINDOW; + int size = mid * HIGHPRIORITYWINDOW; // size of window in KB + // check boundaries + max_size = hashtree()->size_in_chunks() - start; + max_size = size < max_size ? size : max_size; + + hint = pickRarest(offer, max_width, start, max_size); + + //check low priority + start += max_size; + if (hint.is_none() && start < hashtree()->size_in_chunks()) + { + size = hashtree()->size_in_chunks() - start; + hint = pickRarest(offer, max_width, start, size); + set = 'L'; + } + else + set = 'M'; + } + else + set = 'H'; + + // unhinted/late data + if (!hashtree()->ack_out()->is_empty(hint)) { + binmap_t::copy(ack_hint_out_, *(hashtree()->ack_out()), hint); + retry = true; + } + else + retry = false; + + } while (retry); + + + if (hint.is_none()) { + // TODO, control if we want: check for missing hints (before playback pos.) 
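+            // Fallback: the three priority windows yielded nothing, so take
+            // any chunk the peer offers that we have neither received nor
+            // already hinted for (ack_hint_out_ tracks both).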
+ hint = binmap_t::find_complement(ack_hint_out_, offer, twist_); + // TODO: end-game mode + if (hint.is_none()) + return hint; + else + while (hint.base_length()>max_width && !hint.is_base()) // Arno,2012-01-17: stop! + hint.to_left(); + + + } + + assert(ack_hint_out_.is_empty(hint)); + ack_hint_out_.set(hint); + hint_out_.push_back(tintbin(NOW,hint)); + + + // TODO clean ... printing percentage of completeness for the priority sets + //status(); + + //fprintf(stderr,"%s #1 Picker -> picked %s\t from %c set\t max width %lu \n",tintstr(), hint.str(tmp), set, max_width ); + //if (avail_->size()) + return hint; + } + + int Seek(bin_t offbin, int whence) + { + char binstr[32]; + fprintf(stderr,"vodpp: seek: %s whence %d\n", offbin.str(binstr), whence ); + + if (whence != SEEK_SET) + return -1; + + // TODO: convert playback_pos_ to a bin number + uint64_t cid = offbin.toUInt()/2; + if (cid > 0) + cid--; // Riccardo assumes playbackpos is already in. + + //fprintf(stderr,"vodpp: pos in K %llu size %llu\n", cid, hashtree()->size_in_chunks() ); + + if (cid > hashtree()->size_in_chunks()) + return -1; + + playback_pos_ = cid; + return 0; + } + + void status() + { + int t = 0; + int x = HIGHPRIORITYWINDOW; + int y = MIDPRIORITYWINDOW; + int i = playback_pos_ + 1; + int end_high = (x+playback_pos_)<<1; + int end_mid = ((x*y)+x+playback_pos_)<<1; + int total = 0; + + + while (i<=end_high) + { + if (!hashtree()->ack_out()->is_empty(bin_t(i))) + t++; + i++; + } + total = t; + t = t*100/((x<<1)-1); + fprintf(stderr, "low %u, ", t); + t = 0; + while (i<=end_mid) + { + if (!hashtree()->ack_out()->is_empty(bin_t(i))) + t++; + i++; + } + total += t; + t = t*100/((x*y)<<1); + fprintf(stderr, "mid %u, ", t); + t = 0; + while (i<=hashtree()->size_in_chunks()<<1) + { + if (!hashtree()->ack_out()->is_empty(bin_t(i))) + t++; + i++; + } + total += t; + t = t*100/((hashtree()->size_in_chunks()-(x*y+playback_pos_))<<1); + fprintf(stderr, "low %u -> in total: %i\t pp: %i\n", t, total, playback_pos_); + } + +}; diff -Nru tribler-6.2.0/Tribler/SwiftEngine/getopt.c tribler-6.2.0/Tribler/SwiftEngine/getopt.c --- tribler-6.2.0/Tribler/SwiftEngine/getopt.c 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/getopt.c 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,122 @@ +/* + * Copyright (c) 1987, 1993, 1994 + * The Regents of the University of California. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. All advertising materials mentioning features or use of this software + * must display the following acknowledgement: + * This product includes software developed by the University of + * California, Berkeley and its contributors. + * 4. Neither the name of the University nor the names of its contributors + * may be used to endorse or promote products derived from this software + * without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + +/*#if defined(LIBC_SCCS) && !defined(lint) +static char sccsid[] = "@(#)getopt.c 8.3 (Berkeley) 4/27/95"; +#endif /* LIBC_SCCS and not lint +#include +//__FBSDID("$FreeBSD: src/lib/libc/stdlib/getopt.c,v 1.6 2002/03/29 22:43:42 markm Exp $"); + +#include "namespace.h"*/ +#include +#include +#include +/*#include "un-namespace.h"*/ + +/*#include "libc_private.h"*/ + +int opterr = 1, /* if error message should be printed */ + optind = 1, /* index into parent argv vector */ + optopt, /* character checked for validity */ + optreset; /* reset getopt */ +char *optarg; /* argument associated with option */ + +#define BADCH (int)'?' +#define BADARG (int)':' +#define EMSG "" + +/* + * getopt -- + * Parse argc/argv argument vector. + */ +int +getopt(nargc, nargv, ostr) + int nargc; + char * const *nargv; + const char *ostr; +{ + static char *place = EMSG; /* option letter processing */ + char *oli; /* option letter list index */ + + if (optreset || !*place) { /* update scanning pointer */ + optreset = 0; + if (optind >= nargc || *(place = nargv[optind]) != '-') { + place = EMSG; + return (-1); + } + if (place[1] && *++place == '-') { /* found "--" */ + ++optind; + place = EMSG; + return (-1); + } + } /* option letter okay? */ + if ((optopt = (int)*place++) == (int)':' || + !(oli = strchr(ostr, optopt))) { + /* + * if the user didn't specify '-' as an option, + * assume it means -1. + */ + if (optopt == (int)'-') + return (-1); + if (!*place) + ++optind; + if (opterr && *ostr != ':' && optopt != BADCH) + (void)fprintf(stderr, "%s: illegal option -- %c\n", + "progname", optopt); + return (BADCH); + } + if (*++oli != ':') { /* don't need argument */ + optarg = NULL; + if (!*place) + ++optind; + } + else { /* need an argument */ + if (*place) /* no white space */ + optarg = place; + else if (nargc <= ++optind) { /* no arg */ + place = EMSG; + if (*ostr == ':') + return (BADARG); + if (opterr) + (void)fprintf(stderr, + "%s: option requires an argument -- %c\n", + "progname", optopt); + return (BADCH); + } + else /* white space */ + optarg = nargv[optind]; + place = EMSG; + ++optind; + } + return (optopt); /* dump back option letter */ +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/getopt_long.c tribler-6.2.0/Tribler/SwiftEngine/getopt_long.c --- tribler-6.2.0/Tribler/SwiftEngine/getopt_long.c 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/getopt_long.c 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,548 @@ +/* $NetBSD: getopt_long.c,v 1.15 2002/01/31 22:43:40 tv Exp $ */ +/* $FreeBSD: src/lib/libc/stdlib/getopt_long.c,v 1.2 2002/10/16 22:18:42 alfred Exp $ */ + +/*- + * Copyright (c) 2000 The NetBSD Foundation, Inc. + * All rights reserved. 
+ * + * This code is derived from software contributed to The NetBSD Foundation + * by Dieter Baron and Thomas Klausner. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. All advertising materials mentioning features or use of this software + * must display the following acknowledgement: + * This product includes software developed by the NetBSD + * Foundation, Inc. and its contributors. + * 4. Neither the name of The NetBSD Foundation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED + * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. + */ + + +#include "getopt_win.h" +#include +#include + +#ifdef _WIN32 + +/* Windows needs warnx(). We change the definition though: + * 1. (another) global is defined, opterrmsg, which holds the error message + * 2. errors are always printed out on stderr w/o the program name + * Note that opterrmsg always gets set no matter what opterr is set to. The + * error message will not be printed if opterr is 0 as usual. + */ + +#include +#include + +GETOPT_API extern char opterrmsg[128]; +char opterrmsg[128]; /* last error message is stored here */ + +static void warnx(int print_error, const char *fmt, ...) 
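+/* (The shim formats the message into opterrmsg via _vsnprintf and, when
+   print_error is non-zero, also echoes it to stderr.) */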
+{ + va_list ap; + va_start(ap, fmt); + if (fmt != NULL) + _vsnprintf(opterrmsg, 128, fmt, ap); + else + opterrmsg[0]='\0'; + va_end(ap); + if (print_error) { + fprintf(stderr, opterrmsg); + fprintf(stderr, "\n"); + } +} +#else +#include +#endif /*_WIN32*/ + +/* not part of the original file */ +#ifndef _DIAGASSERT +#define _DIAGASSERT(X) +#endif + +#if HAVE_CONFIG_H && !HAVE_GETOPT_LONG && !HAVE_DECL_OPTIND +#define REPLACE_GETOPT +#endif + +#ifdef REPLACE_GETOPT +#ifdef __weak_alias +__weak_alias(getopt,_getopt) +#endif +int opterr = 1; /* if error message should be printed */ +int optind = 1; /* index into parent argv vector */ +int optopt = '?'; /* character checked for validity */ +int optreset; /* reset getopt */ +char *optarg; /* argument associated with option */ +#elif HAVE_CONFIG_H && !HAVE_DECL_OPTRESET +static int optreset; +#endif + +#ifdef __weak_alias +__weak_alias(getopt_long,_getopt_long) +#endif + +#if !HAVE_GETOPT_LONG +#define IGNORE_FIRST (*options == '-' || *options == '+') +#define PRINT_ERROR ((opterr) && ((*options != ':') \ + || (IGNORE_FIRST && options[1] != ':'))) +#define IS_POSIXLY_CORRECT (getenv("POSIXLY_CORRECT") != NULL) +#define PERMUTE (!IS_POSIXLY_CORRECT && !IGNORE_FIRST) +/* XXX: GNU ignores PC if *options == '-' */ +#define IN_ORDER (!IS_POSIXLY_CORRECT && *options == '-') + +/* return values */ +#define BADCH (int)'?' +#define BADARG ((IGNORE_FIRST && options[1] == ':') \ + || (*options == ':') ? (int)':' : (int)'?') +#define INORDER (int)1 + +#define EMSG "" + +static int getopt_internal(int, char * const *, const char *); +static int gcd(int, int); +static void permute_args(int, int, int, char * const *); + +static char *place = EMSG; /* option letter processing */ + +/* XXX: set optreset to 1 rather than these two */ +static int nonopt_start = -1; /* first non option argument (for permute) */ +static int nonopt_end = -1; /* first option after non options (for permute) */ + +/* Error messages */ +static const char recargchar[] = "option requires an argument -- %c"; +static const char recargstring[] = "option requires an argument -- %s"; +static const char ambig[] = "ambiguous option -- %.*s"; +static const char noarg[] = "option doesn't take an argument -- %.*s"; +static const char illoptchar[] = "unknown option -- %c"; +static const char illoptstring[] = "unknown option -- %s"; + + +/* + * Compute the greatest common divisor of a and b. + */ +static int +gcd(a, b) + int a; + int b; +{ + int c; + + c = a % b; + while (c != 0) { + a = b; + b = c; + c = a % b; + } + + return b; +} + +/* + * Exchange the block from nonopt_start to nonopt_end with the block + * from nonopt_end to opt_end (keeping the same order of arguments + * in each block). 
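+ * A sketch of the effect (hypothetical argv):
+ *   nonopt_start=1, nonopt_end=3, opt_end=6
+ *   prog a b -x -y -z   ==>   prog -x -y -z a b
+ * The exchange is done in place using gcd(nnonopts, nopts) rotation
+ * cycles, so no scratch buffer proportional to argc is needed.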
+ */ +static void +permute_args(panonopt_start, panonopt_end, opt_end, nargv) + int panonopt_start; + int panonopt_end; + int opt_end; + char * const *nargv; +{ + int cstart, cyclelen, i, j, ncycle, nnonopts, nopts, pos; + char *swap; + + _DIAGASSERT(nargv != NULL); + + /* + * compute lengths of blocks and number and size of cycles + */ + nnonopts = panonopt_end - panonopt_start; + nopts = opt_end - panonopt_end; + ncycle = gcd(nnonopts, nopts); + cyclelen = (opt_end - panonopt_start) / ncycle; + + for (i = 0; i < ncycle; i++) { + cstart = panonopt_end+i; + pos = cstart; + for (j = 0; j < cyclelen; j++) { + if (pos >= panonopt_end) + pos -= nnonopts; + else + pos += nopts; + swap = nargv[pos]; + /* LINTED const cast */ + ((char **) nargv)[pos] = nargv[cstart]; + /* LINTED const cast */ + ((char **)nargv)[cstart] = swap; + } + } +} + +/* + * getopt_internal -- + * Parse argc/argv argument vector. Called by user level routines. + * Returns -2 if -- is found (can be long option or end of options marker). + */ +static int +getopt_internal(nargc, nargv, options) + int nargc; + char * const *nargv; + const char *options; +{ + char *oli; /* option letter list index */ + int optchar; + + _DIAGASSERT(nargv != NULL); + _DIAGASSERT(options != NULL); + + optarg = NULL; + + /* + * XXX Some programs (like rsyncd) expect to be able to + * XXX re-initialize optind to 0 and have getopt_long(3) + * XXX properly function again. Work around this braindamage. + */ + if (optind == 0) + optind = 1; + + if (optreset) + nonopt_start = nonopt_end = -1; +start: + if (optreset || !*place) { /* update scanning pointer */ + optreset = 0; + if (optind >= nargc) { /* end of argument vector */ + place = EMSG; + if (nonopt_end != -1) { + /* do permutation, if we have to */ + permute_args(nonopt_start, nonopt_end, + optind, nargv); + optind -= nonopt_end - nonopt_start; + } + else if (nonopt_start != -1) { + /* + * If we skipped non-options, set optind + * to the first of them. + */ + optind = nonopt_start; + } + nonopt_start = nonopt_end = -1; + return -1; + } + if ((*(place = nargv[optind]) != '-') + || (place[1] == '\0')) { /* found non-option */ + place = EMSG; + if (IN_ORDER) { + /* + * GNU extension: + * return non-option as argument to option 1 + */ + optarg = nargv[optind++]; + return INORDER; + } + if (!PERMUTE) { + /* + * if no permutation wanted, stop parsing + * at first non-option + */ + return -1; + } + /* do permutation */ + if (nonopt_start == -1) + nonopt_start = optind; + else if (nonopt_end != -1) { + permute_args(nonopt_start, nonopt_end, + optind, nargv); + nonopt_start = optind - + (nonopt_end - nonopt_start); + nonopt_end = -1; + } + optind++; + /* process next argument */ + goto start; + } + if (nonopt_start != -1 && nonopt_end == -1) + nonopt_end = optind; + if (place[1] && *++place == '-') { /* found "--" */ + place++; + return -2; + } + } + if ((optchar = (int)*place++) == (int)':' || + (oli = strchr(options + (IGNORE_FIRST ? 1 : 0), optchar)) == NULL) { + /* option letter unknown or ':' */ + if (!*place) + ++optind; +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(illoptchar, optchar); +#else + warnx(PRINT_ERROR, illoptchar, optchar); +#endif + optopt = optchar; + return BADCH; + } + if (optchar == 'W' && oli[1] == ';') { /* -W long-option */ + /* XXX: what if no long options provided (called by getopt)? 
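+       (GNU getopt treats "-W foo" like "--foo" when the option string
+       contains "W;"; returning -2 hands the word to the long-option
+       parser in getopt_long() below.)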
*/ + if (*place) + return -2; + + if (++optind >= nargc) { /* no arg */ + place = EMSG; +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(recargchar, optchar); +#else + warnx(PRINT_ERROR, recargchar, optchar); +#endif + optopt = optchar; + return BADARG; + } else /* white space */ + place = nargv[optind]; + /* + * Handle -W arg the same as --arg (which causes getopt to + * stop parsing). + */ + return -2; + } + if (*++oli != ':') { /* doesn't take argument */ + if (!*place) + ++optind; + } else { /* takes (optional) argument */ + optarg = NULL; + if (*place) /* no white space */ + optarg = place; + /* XXX: disable test for :: if PC? (GNU doesn't) */ + else if (oli[1] != ':') { /* arg not optional */ + if (++optind >= nargc) { /* no arg */ + place = EMSG; +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(recargchar, optchar); +#else + warnx(PRINT_ERROR, recargchar, optchar); +#endif + optopt = optchar; + return BADARG; + } else + optarg = nargv[optind]; + } + place = EMSG; + ++optind; + } + /* dump back option letter */ + return optchar; +} + +#ifdef REPLACE_GETOPT +/* + * getopt -- + * Parse argc/argv argument vector. + * + * [eventually this will replace the real getopt] + */ +int +getopt(nargc, nargv, options) + int nargc; + char * const *nargv; + const char *options; +{ + int retval; + + _DIAGASSERT(nargv != NULL); + _DIAGASSERT(options != NULL); + + if ((retval = getopt_internal(nargc, nargv, options)) == -2) { + ++optind; + /* + * We found an option (--), so if we skipped non-options, + * we have to permute. + */ + if (nonopt_end != -1) { + permute_args(nonopt_start, nonopt_end, optind, + nargv); + optind -= nonopt_end - nonopt_start; + } + nonopt_start = nonopt_end = -1; + retval = -1; + } + return retval; +} +#endif + +/* + * getopt_long -- + * Parse argc/argv argument vector. + */ +int +getopt_long(nargc, nargv, options, long_options, idx) + int nargc; + char * const *nargv; + const char *options; + const struct option *long_options; + int *idx; +{ + int retval; + + _DIAGASSERT(nargv != NULL); + _DIAGASSERT(options != NULL); + _DIAGASSERT(long_options != NULL); + /* idx may be NULL */ + + if ((retval = getopt_internal(nargc, nargv, options)) == -2) { + char *current_argv, *has_equal; + size_t current_argv_len; + int i, match; + + current_argv = place; + match = -1; + + optind++; + place = EMSG; + + if (*current_argv == '\0') { /* found "--" */ + /* + * We found an option (--), so if we skipped + * non-options, we have to permute. 
+ */ + if (nonopt_end != -1) { + permute_args(nonopt_start, nonopt_end, + optind, nargv); + optind -= nonopt_end - nonopt_start; + } + nonopt_start = nonopt_end = -1; + return -1; + } + if ((has_equal = strchr(current_argv, '=')) != NULL) { + /* argument found (--option=arg) */ + current_argv_len = has_equal - current_argv; + has_equal++; + } else + current_argv_len = strlen(current_argv); + + for (i = 0; long_options[i].name; i++) { + /* find matching long option */ + if (strncmp(current_argv, long_options[i].name, + current_argv_len)) + continue; + + if (strlen(long_options[i].name) == + (unsigned)current_argv_len) { + /* exact match */ + match = i; + break; + } + if (match == -1) /* partial match */ + match = i; + else { + /* ambiguous abbreviation */ +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(ambig, (int)current_argv_len, + current_argv); +#else + warnx(PRINT_ERROR, ambig, (int)current_argv_len, + current_argv); +#endif + optopt = 0; + return BADCH; + } + } + if (match != -1) { /* option found */ + if (long_options[match].has_arg == no_argument + && has_equal) { +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(noarg, (int)current_argv_len, + current_argv); +#else + warnx(PRINT_ERROR, noarg, (int)current_argv_len, + current_argv); +#endif + /* + * XXX: GNU sets optopt to val regardless of + * flag + */ + if (long_options[match].flag == NULL) + optopt = long_options[match].val; + else + optopt = 0; + return BADARG; + } + if (long_options[match].has_arg == required_argument || + long_options[match].has_arg == optional_argument) { + if (has_equal) + optarg = has_equal; + else if (long_options[match].has_arg == + required_argument) { + /* + * optional argument doesn't use + * next nargv + */ + optarg = nargv[optind++]; + } + } + if ((long_options[match].has_arg == required_argument) + && (optarg == NULL)) { + /* + * Missing argument; leading ':' + * indicates no error should be generated + */ +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(recargstring, current_argv); +#else + warnx(PRINT_ERROR, recargstring, current_argv); +#endif + /* + * XXX: GNU sets optopt to val regardless + * of flag + */ + if (long_options[match].flag == NULL) + optopt = long_options[match].val; + else + optopt = 0; + --optind; + return BADARG; + } + } else { /* unknown option */ +#ifndef _WIN32 + if (PRINT_ERROR) + warnx(illoptstring, current_argv); +#else + warnx(PRINT_ERROR, illoptstring, current_argv); +#endif + optopt = 0; + return BADCH; + } + if (long_options[match].flag) { + *long_options[match].flag = long_options[match].val; + retval = 0; + } else + retval = long_options[match].val; + if (idx) + *idx = match; + } + return retval; +} +#endif /* !GETOPT_LONG */ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/getopt_win.h tribler-6.2.0/Tribler/SwiftEngine/getopt_win.h --- tribler-6.2.0/Tribler/SwiftEngine/getopt_win.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/getopt_win.h 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,110 @@ +/* $NetBSD: getopt.h,v 1.4 2000/07/07 10:43:54 ad Exp $ */ +/* $FreeBSD: src/include/getopt.h,v 1.1 2002/09/29 04:14:30 eric Exp $ */ + +/*- + * Copyright (c) 2000 The NetBSD Foundation, Inc. + * All rights reserved. + * + * This code is derived from software contributed to The NetBSD Foundation + * by Dieter Baron and Thomas Klausner. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. 
Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. All advertising materials mentioning features or use of this software + * must display the following acknowledgement: + * This product includes software developed by the NetBSD + * Foundation, Inc. and its contributors. + * 4. Neither the name of The NetBSD Foundation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED + * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _GETOPT_H_ +#define _GETOPT_H_ + +#ifdef _WIN32 +/* from */ +# ifdef __cplusplus +# define __BEGIN_DECLS extern "C" { +# define __END_DECLS } +# else +# define __BEGIN_DECLS +# define __END_DECLS +# endif +# define __P(args) args +#endif + +/*#ifndef _WIN32 +#include +#include +#endif*/ + +#ifdef _WIN32 +# if !defined(GETOPT_API) +# define GETOPT_API __declspec(dllimport) +# endif +#endif + +/* + * Gnu like getopt_long() and BSD4.4 getsubopt()/optreset extensions + */ +#if !defined(_POSIX_SOURCE) && !defined(_XOPEN_SOURCE) +#define no_argument 0 +#define required_argument 1 +#define optional_argument 2 + +struct option { + /* name of long option */ + const char *name; + /* + * one of no_argument, required_argument, and optional_argument: + * whether option takes an argument + */ + int has_arg; + /* if not NULL, set *flag to val when option found */ + int *flag; + /* if flag not NULL, value to set *flag to; else return value */ + int val; +}; + +__BEGIN_DECLS +GETOPT_API int getopt_long __P((int, char * const *, const char *, + const struct option *, int *)); +__END_DECLS +#endif + +#ifdef _WIN32 +/* These are global getopt variables */ +__BEGIN_DECLS + +GETOPT_API extern int opterr, /* if error message should be printed */ + optind, /* index into parent argv vector */ + optopt, /* character checked for validity */ + optreset; /* reset getopt */ +GETOPT_API extern char* optarg; /* argument associated with option */ + +/* Original getopt */ +GETOPT_API int getopt __P((int, char * const *, const char *)); + +__END_DECLS +#endif + +#endif /* !_GETOPT_H_ */ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/hashtree.cpp tribler-6.2.0/Tribler/SwiftEngine/hashtree.cpp --- tribler-6.2.0/Tribler/SwiftEngine/hashtree.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/hashtree.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,655 @@ +/* + * hashtree.cpp + * serp++ + * + * Created by Victor Grishchenko on 
3/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#include "hashtree.h" +#include "bin_utils.h" +//#include +#include "sha1.h" +#include +#include +#include +#include +#include "compat.h" +#include "swift.h" + +#include + + +using namespace swift; + +const Sha1Hash Sha1Hash::ZERO = Sha1Hash(); + +void SHA1 (const void *data, size_t length, unsigned char *hash) { + blk_SHA_CTX ctx; + blk_SHA1_Init(&ctx); + blk_SHA1_Update(&ctx, data, length); + blk_SHA1_Final(hash, &ctx); +} + +Sha1Hash::Sha1Hash(const Sha1Hash& left, const Sha1Hash& right) { + blk_SHA_CTX ctx; + blk_SHA1_Init(&ctx); + blk_SHA1_Update(&ctx, left.bits,SIZE); + blk_SHA1_Update(&ctx, right.bits,SIZE); + blk_SHA1_Final(bits, &ctx); +} + +Sha1Hash::Sha1Hash(const char* data, size_t length) { + if (length==-1) + length = strlen(data); + SHA1((unsigned char*)data,length,bits); +} + +Sha1Hash::Sha1Hash(const uint8_t* data, size_t length) { + SHA1(data,length,bits); +} + +Sha1Hash::Sha1Hash(bool hex, const char* hash) { + if (hex) { + int val; + for(int i=0; ihex()==std::string(hash)); + } else + memcpy(bits,hash,SIZE); +} + +std::string Sha1Hash::hex() const { + char hex[HASHSZ*2+1]; + for(int i=0; iSetHashTree(this); + // If multi-file spec we know the exact size even before getting peaks+last chunk + int64_t sizefromspec = storage_->GetSizeFromSpec(); + if (sizefromspec != -1) + { + set_size(sizefromspec); + // Resize all files + (void)storage_->ResizeReserved(sizefromspec); + } + + // Arno: if user doesn't want to check hashes but no .mhash, check hashes anyway + bool actually_force_check_diskvshash = force_check_diskvshash; + bool mhash_exists=true; + int64_t mhash_size = file_size_by_path_utf8( hash_filename.c_str()); + if (mhash_size < 0) + mhash_exists = false; + // Arno, 2012-07-26: Quick fix against partial downloads without .mhash. + // Previously they would be Submit()ed and the root_hash_ would change. + // Now if the root_hash_ is set, we don't recompute the tree. More permanent + // solution is to hashcheck the content, and if it doesn't match the root + // hash, revert to a clean state. + // + if (root_hash_==Sha1Hash::ZERO && !mhash_exists) + actually_force_check_diskvshash = true; + + // Arno: if the remainder of the hashtree state is on disk we can + // hashcheck very quickly + bool binmap_exists=true; + int res = file_exists_utf8( binmap_filename.c_str() ); + if( res <= 0) + binmap_exists = false; + if (root_hash_==Sha1Hash::ZERO && !binmap_exists) + actually_force_check_diskvshash = true; + + //fprintf(stderr,"hashtree: hashchecking %s file %s want %s do %s mhash-on-disk %s binmap-on-disk %s\n", root_hash.hex().c_str(), storage_->GetOSPathName().c_str(), (force_check_diskvshash ? "yes" : "no"), (actually_force_check_diskvshash? "yes" : "no"), (mhash_exists? "yes" : "no"), (binmap_exists? 
"yes" : "no") ); + // Arno, 2012-07-27: Sanity check + if ((mhash_exists || binmap_exists) && storage_->GetReservedSize() == -1) + { + print_error("meta files present but not content"); + SetBroken(); + return; + } + + // Arno, 2012-09-19: Hash file created only when msgs incoming + if (mhash_exists) { + hash_fd_ = OpenHashFile(); + if (hash_fd_ < 0) + return; + } + + // Arno: if user wants to or no .mhash, and if root hash unknown (new file) and no checkpoint, (re)calc root hash + if (storage_->GetReservedSize() > storage_->GetMinimalReservedSize() && actually_force_check_diskvshash) { + // fresh submit, hash it + dprintf("%s hashtree full compute\n",tintstr()); + //assert(storage_->GetReservedSize()); + Submit(); + } else if (mhash_exists && binmap_exists && mhash_size > 0) { + // Arno: recreate hash tree without rereading content + dprintf("%s hashtree read from checkpoint\n",tintstr()); + FILE *fp = fopen_utf8(binmap_filename.c_str(),"rb"); + if (!fp) { + print_error("hashtree: cannot open .mbinmap file"); + SetBroken(); + return; + } + if (deserialize(fp) < 0) { + // Try to rebuild hashtree data + Submit(); + } + fclose(fp); + } else { + // Arno: no data on disk, or mhash on disk, but no binmap. In latter + // case recreate binmap by reading content again. Historic optimization + // of Submit. + dprintf("%s hashtree empty or partial recompute\n",tintstr()); + RecoverProgress(); + } +} + + +MmapHashTree::MmapHashTree(bool dummy, std::string binmap_filename) : +HashTree(), root_hash_(Sha1Hash::ZERO), hashes_(NULL), peak_count_(0), hash_fd_(0), +hash_filename_(""), filename_(""), size_(0), sizec_(0), complete_(0), completec_(0), +chunk_size_(0), check_netwvshash_(false) +{ + FILE *fp = fopen_utf8(binmap_filename.c_str(),"rb"); + if (!fp) { + SetBroken(); + return; + } + if (partial_deserialize(fp) < 0) { + } + fclose(fp); +} + +int MmapHashTree::OpenHashFile() { + hash_fd_ = open_utf8(hash_filename_.c_str(),OPENFLAGS,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); + if (hash_fd_<0) { + hash_fd_ = -1; + print_error("cannot create/open hash file"); + SetBroken(); + } + return hash_fd_; +} + + +// Reads complete file and constructs hash tree +void MmapHashTree::Submit () { + size_ = storage_->GetReservedSize(); + sizec_ = (size_ + chunk_size_-1) / chunk_size_; + + //fprintf(stderr,"hashtree: submit: cs %i\n", chunk_size_); + + peak_count_ = gen_peaks(sizec_,peaks_); + int hashes_size = Sha1Hash::SIZE*sizec_*2; + dprintf("%s hashtree submit resizing hash file to %d\n",tintstr(), hashes_size ); + if (hashes_size == 0) { + SetBroken(); + return; + } + + // Arno, 2012-09-19: Hash file created only when msgs incoming + if (hash_fd_ == -1) { + hash_fd_ = OpenHashFile(); + if (hash_fd_ < 0) + return; + } + + file_resize(hash_fd_,hashes_size); + hashes_ = (Sha1Hash*) memory_map(hash_fd_,hashes_size); + if (!hashes_) { + size_ = sizec_ = complete_ = completec_ = 0; + print_error("mmap failed"); + SetBroken(); + return; + } + size_t last_piece_size = (sizec_ - 1) % (chunk_size_) + 1; + char *chunk = new char[chunk_size_]; + for (uint64_t i=0; iRead(chunk,chunk_size_,i*chunk_size_); + if (rd<(chunk_size_) && i!=sizec_-1) { + free(hashes_); + hashes_=NULL; + SetBroken(); + return; + } + bin_t pos(0,i); + hashes_[pos.toUInt()] = Sha1Hash(chunk,rd); + ack_out_.set(pos); + while (pos.is_right()){ + pos = pos.parent(); + hashes_[pos.toUInt()] = Sha1Hash(hashes_[pos.left().toUInt()],hashes_[pos.right().toUInt()]); + } + complete_+=rd; + completec_++; + } + delete chunk; + for (int p=0; 
pRead(buf,chunk_size_,p*chunk_size_); + if (rd!=(chunk_size_) && p!=size_in_chunks()-1) + break; + if (rd==(chunk_size_) && !memcmp(buf, zero_chunk, rd) && + hashes_[pos.toUInt()]!=zero_hash) // FIXME // Arno == don't have piece yet? + continue; + if (!OfferHash(pos, Sha1Hash(buf,rd)) ) + continue; + ack_out_.set(pos); + completec_++; + complete_+=rd; + if (rd!=(chunk_size_) && p==size_in_chunks()-1) // set the exact file size + size_ = ((sizec_-1)*chunk_size_) + rd; + } + delete[] buf; + delete[] zero_chunk; +} + +/** Precondition: root hash known */ +bool MmapHashTree::RecoverPeakHashes() +{ + int64_t ret = storage_->GetReservedSize(); + if (ret < 0) + return false; + + uint64_t size = ret; + uint64_t sizek = (size + chunk_size_-1) / chunk_size_; + + // Arno: Calc location of peak hashes, read them from hash file and check if + // they match to root hash. If so, load hashes into memory. + bin_t peaks[64]; + int peak_count = gen_peaks(sizek,peaks); + for(int i=0; isize()) + return false; // if no valid peak hashes found + + return true; +} + +int MmapHashTree::serialize(FILE *fp) +{ + fprintf_retiffail(fp,"version %i\n", 1 ); + fprintf_retiffail(fp,"root hash %s\n", root_hash_.hex().c_str() ); + fprintf_retiffail(fp,"chunk size %lu\n", chunk_size_ ); + fprintf_retiffail(fp,"complete %llu\n", complete_ ); + fprintf_retiffail(fp,"completec %llu\n", completec_ ); + return ack_out_.serialize(fp); +} + + +/** Arno: recreate hash tree from .mbinmap file without rereading content. + * Precondition: root hash known + */ +int MmapHashTree::deserialize(FILE *fp) { + return internal_deserialize(fp,true); +} + +int MmapHashTree::partial_deserialize(FILE *fp) { + return internal_deserialize(fp,false); +} + + +int MmapHashTree::internal_deserialize(FILE *fp,bool contentavail) { + + char hexhashstr[256]; + uint64_t c,cc; + size_t cs; + int version; + + fscanf_retiffail(fp,"version %i\n", &version ); + fscanf_retiffail(fp,"root hash %s\n", hexhashstr); + fscanf_retiffail(fp,"chunk size %lu\n", &cs); + fscanf_retiffail(fp,"complete %llu\n", &c ); + fscanf_retiffail(fp,"completec %llu\n", &cc ); + + if (ack_out_.deserialize(fp) < 0) + return -1; + root_hash_ = Sha1Hash(true, hexhashstr); + chunk_size_ = cs; + + // Arno, 2012-01-03: Hack to just get root hash + if (!contentavail) + return 2; + + if (!RecoverPeakHashes()) { + root_hash_ = Sha1Hash::ZERO; + ack_out_.clear(); + return -1; + } + + // Are reset by RecoverPeakHashes() for some reason. 
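+    // (hence restore the checkpointed counters after the recovery call)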
+ complete_ = c; + completec_ = cc; + size_ = storage_->GetReservedSize(); + sizec_ = (size_ + chunk_size_-1) / chunk_size_; + + return 0; +} + + +bool MmapHashTree::OfferPeakHash (bin_t pos, const Sha1Hash& hash) { + char bin_name_buf[32]; + dprintf("%s hashtree offer peak %s\n",tintstr(),pos.str(bin_name_buf)); + + //assert(!size_); + if (peak_count_) { + bin_t last_peak = peaks_[peak_count_-1]; + if ( pos.layer()>=last_peak.layer() || + pos.base_offset()!=last_peak.base_offset()+last_peak.base_length() ) + peak_count_ = 0; + } + peaks_[peak_count_] = pos; + peak_hashes_[peak_count_] = hash; + peak_count_++; + // check whether peak hash candidates add up to the root hash + Sha1Hash mustbe_root = DeriveRoot(); + if (mustbe_root!=root_hash_) + return false; + for(int i=0; iGetReservedSize(); + if ( cur_size<=(sizec_-1)*chunk_size_ || cur_size>sizec_*chunk_size_ ) { + dprintf("%s hashtree offerpeak resizing file\n",tintstr() ); + if (storage_->ResizeReserved(size_)) { + print_error("cannot set file size\n"); + size_=0; // remain in the 0-state + return false; + } + } + + // Arno, 2012-09-19: Hash file created only when msgs incoming + if (hash_fd_ == -1) { + hash_fd_ = OpenHashFile(); + if (hash_fd_ < 0) + return false; + } + + // mmap the hash file into memory + uint64_t expected_size = sizeof(Sha1Hash)*sizec_*2; + // Arno, 2011-10-18: on Windows we could optimize this away, + //CreateFileMapping, see compat.cpp will resize the file for us with + // the right params. + // + if ( file_size(hash_fd_) != expected_size ) { + dprintf("%s hashtree offerpeak resizing hash file\n",tintstr() ); + file_resize (hash_fd_, expected_size); + } + + hashes_ = (Sha1Hash*) memory_map(hash_fd_,expected_size); + if (!hashes_) { + size_ = sizec_ = complete_ = completec_ = 0; + print_error("mmap failed"); + return false; + } + + for(int i=0; i= 0) { + if (p.is_left()) { + p = p.parent(); + hash = Sha1Hash(hash,Sha1Hash::ZERO); + } else { + if (c<0 || peaks_[c]!=p.sibling()) + return Sha1Hash::ZERO; + hash = Sha1Hash(peak_hashes_[c],hash); + p = p.parent(); + c--; + } + } + + //fprintf(stderr,"hashtree: derive: root hash is %s\n", hash.hex().c_str() ); + + //fprintf(stderr,"root bin is %lli covers %lli\n", p.toUInt(), p.base_length() ); + return hash; +} + + +/** For live streaming: appends the data, adjusts the tree. 
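+    (Live streaming is not supported by MmapHashTree yet; the stub
+    below ignores its arguments and reports zero fresh peaks.)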
+ @ return the number of fresh (tail) peak hashes */ +int MmapHashTree::AppendData (char* data, int length) { + return 0; +} + + +bin_t MmapHashTree::peak_for (bin_t pos) const { + int pi=0; + while (piWrite(data,length,pos.base_offset()*chunk_size_) < 0) + print_error("pwrite failed"); + complete_ += length; + completec_++; + if (pos.base_offset()==sizec_-1) { + size_ = ((sizec_-1)*chunk_size_) + length; + if (storage_->GetReservedSize()!=size_) + storage_->ResizeReserved(size_); + } + return true; +} + + +uint64_t MmapHashTree::seq_complete (int64_t offset) { + + uint64_t seqc = 0; + if (offset == 0) + { + uint64_t seqc = ack_out_.find_empty().base_offset(); + if (seqc==sizec_) + return size_; + else + return seqc*chunk_size_; + } + else + { + // SEEK: Calc sequentially complete bytes from an offset + bin_t binoff = bin_t(0,(offset - (offset % chunk_size_)) / chunk_size_); + bin_t nextempty = ack_out_.find_empty(binoff); + if (nextempty == bin_t::NONE || nextempty.base_offset() * chunk_size_ > size_) + return size_-offset; // All filled from offset + + bin_t::uint_t diffc = nextempty.layer_offset() - binoff.layer_offset(); + uint64_t diffb = diffc * chunk_size_; + if (diffb > 0) + diffb -= (offset % chunk_size_); + + return diffb; + } +} + + +MmapHashTree::~MmapHashTree () { + if (hashes_) + memory_unmap(hash_fd_, hashes_, sizec_*2*sizeof(Sha1Hash)); + if (hash_fd_ >= 0) + { + close(hash_fd_); + } +} + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/hashtree.h tribler-6.2.0/Tribler/SwiftEngine/hashtree.h --- tribler-6.2.0/Tribler/SwiftEngine/hashtree.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/hashtree.h 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,288 @@ +/* + * hashtree.h + * hashing, Merkle hash trees and data integrity + * + * Created by Victor Grishchenko on 3/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +#ifndef SWIFT_SHA1_HASH_TREE_H +#define SWIFT_SHA1_HASH_TREE_H +#include +#include +#include "bin.h" +#include "binmap.h" +#include "operational.h" + +namespace swift { + +#define HASHSZ 20 + +/** SHA-1 hash, 20 bytes of data */ +struct Sha1Hash { + uint8_t bits[HASHSZ]; + + Sha1Hash() { memset(bits,0,HASHSZ); } + /** Make a hash of two hashes (for building Merkle hash trees). */ + Sha1Hash(const Sha1Hash& left, const Sha1Hash& right); + /** Hash an old plain string. */ + Sha1Hash(const char* str, size_t length=-1); + Sha1Hash(const uint8_t* data, size_t length); + /** Either parse hash from hex representation of read in raw format. */ + Sha1Hash(bool hex, const char* hash); + + std::string hex() const; + bool operator == (const Sha1Hash& b) const + { return 0==memcmp(bits,b.bits,SIZE); } + bool operator != (const Sha1Hash& b) const { return !(*this==b); } + const char* operator * () const { return (char*) bits; } + + const static Sha1Hash ZERO; + const static size_t SIZE = HASHSZ; +}; + +// Arno: The chunk size parameter can now be configured via the constructor, +// for values up to 8192. Above that you'll have to edit the +// SWIFT_MAX_SEND_DGRAM_SIZE in swift.h +// +#define SWIFT_DEFAULT_CHUNK_SIZE 1024 + + +class Storage; + + +/** This class controls data integrity of some file; hash tree is put to + an auxilliary file next to it. The hash tree file is mmap'd for + performance reasons. Actually, I'd like the data file itself to be + mmap'd, but 32-bit platforms do not allow that for bigger files. 
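+
+    All hashes live in one flat array indexed by bin number
+    (hashes_[pos.toUInt()] in MmapHashTree), so a node's children and
+    parent are reached by bin arithmetic rather than pointers.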
+ + There are two variants of the general workflow: either a MmapHashTree + is initialized with a root hash and the rest of hashes and data is + spoon-fed later, OR a MmapHashTree is initialized with a data file, so + the hash tree is derived, including the root hash. + */ +class HashTree : public Operational { + public: + HashTree() : Operational() {} + /** Offer a hash; returns true if it verified; false otherwise. + Once it cannot be verified (no sibling or parent), the hash + is remembered, while returning false. */ + virtual bool OfferHash (bin_t pos, const Sha1Hash& hash) = 0; + /** Offer data; the behavior is the same as with a hash: + accept or remember or drop. Returns true => ACK is sent. */ + virtual bool OfferData (bin_t bin, const char* data, size_t length) = 0; + /** Returns the number of peaks (read on peak hashes). */ + virtual int peak_count () const = 0; + /** Returns the i-th peak's bin number. */ + virtual bin_t peak (int i) const = 0; + /** Returns peak hash #i. */ + virtual const Sha1Hash& peak_hash (int i) const = 0; + /** Return the peak bin the given bin belongs to. */ + virtual bin_t peak_for (bin_t pos) const = 0;; + /** Return a (Merkle) hash for the given bin. */ + virtual const Sha1Hash& hash (bin_t pos) const = 0; + /** Give the root hash, which is effectively an identifier of this file. */ + virtual const Sha1Hash& root_hash () const = 0; + /** Get file size, in bytes. */ + virtual uint64_t size () const = 0; + /** Get file size in chunks (in kilobytes, rounded up). */ + virtual uint64_t size_in_chunks () const = 0; + /** Number of bytes retrieved and checked. */ + virtual uint64_t complete () const = 0; + /** Number of chunks retrieved and checked. */ + virtual uint64_t chunks_complete () const = 0; + /** The number of bytes completed sequentially, i.e. from the beginning of + the file (or offset), uninterrupted. */ + virtual uint64_t seq_complete(int64_t offset) = 0;// SEEK + /** Whether the file is complete. */ + virtual bool is_complete () = 0; + /** The binmap of complete chunks. */ + virtual binmap_t * ack_out() = 0; + virtual uint32_t chunk_size() = 0; // CHUNKSIZE + + //NETWVSHASH + virtual bool get_check_netwvshash() = 0; + + + // for transfertest.cpp + virtual Storage * get_storage() = 0; + virtual void set_size(uint64_t size) = 0; + + virtual int TESTGetFD() = 0; + + virtual ~HashTree() {}; +}; + + +/** This class implements the HashTree interface via a memory mapped file. */ +class MmapHashTree : public HashTree, Serializable { + /** Merkle hash tree: root */ + Sha1Hash root_hash_; + Sha1Hash *hashes_; + /** Merkle hash tree: peak hashes */ + Sha1Hash peak_hashes_[64]; + bin_t peaks_[64]; + int peak_count_; + /** File descriptor to put hashes to */ + int hash_fd_; + std::string hash_filename_; + std::string filename_; // for easy serialization + /** Base size, as derived from the hashes. */ + uint64_t size_; + uint64_t sizec_; + /** Part of the tree currently checked. */ + uint64_t complete_; + uint64_t completec_; + /** Binmap of own chunk availability */ + binmap_t ack_out_; + + // CHUNKSIZE + /** Arno: configurable fixed chunk size in bytes */ + uint32_t chunk_size_; + + // LESSHASH + binmap_t is_hash_verified_; // binmap being abused as bitmap, only layer 0 used + // FAXME: make is_hash_verified_ part of persistent state? 
+ + //MULTIFILE + Storage * storage_; + + int internal_deserialize(FILE *fp,bool contentavail=true); + + //NETWVSHASH + bool check_netwvshash_; + +protected: + + int OpenHashFile(); + void Submit(); + void RecoverProgress(); + bool RecoverPeakHashes(); + Sha1Hash DeriveRoot(); + bool OfferPeakHash (bin_t pos, const Sha1Hash& hash); + + +public: + + MmapHashTree (Storage *storage, const Sha1Hash& root=Sha1Hash::ZERO, uint32_t chunk_size=SWIFT_DEFAULT_CHUNK_SIZE, + std::string hash_filename=NULL, bool force_check_diskvshash=true, bool check_netwvshash=true, std::string binmap_filename=NULL); + + // Arno, 2012-01-03: Hack to quickly learn root hash from a checkpoint + MmapHashTree (bool dummy, std::string binmap_filename); + + bool OfferHash (bin_t pos, const Sha1Hash& hash); + bool OfferData (bin_t bin, const char* data, size_t length); + /** For live streaming. Not implemented yet. */ + int AppendData (char* data, int length) ; + + int peak_count () const { return peak_count_; } + bin_t peak (int i) const { return peaks_[i]; } + const Sha1Hash& peak_hash (int i) const { return peak_hashes_[i]; } + bin_t peak_for (bin_t pos) const; + const Sha1Hash& hash (bin_t pos) const {return hashes_[pos.toUInt()];} + const Sha1Hash& root_hash () const { return root_hash_; } + uint64_t size () const { return size_; } + uint64_t size_in_chunks () const { return sizec_; } + uint64_t complete () const { return complete_; } + uint64_t chunks_complete () const { return completec_; } + uint64_t seq_complete(int64_t offset); // SEEK + bool is_complete () { return size_ && complete_==size_; } + binmap_t * ack_out () { return &ack_out_; } + uint32_t chunk_size() { return chunk_size_; } // CHUNKSIZE + ~MmapHashTree (); + + // for transfertest.cpp + Storage * get_storage() { return storage_; } + void set_size(uint64_t size) { size_ = size; } + + // Arno: persistent storage for state other than hashes (which are in .mhash) + int serialize(FILE *fp); + int deserialize(FILE *fp); + int partial_deserialize(FILE *fp); + + //NETWVSHASH + bool get_check_netwvshash() { return check_netwvshash_; } + + int TESTGetFD() { return hash_fd_; } +}; + + + + +/** This class implements the HashTree interface by reading directly from disk */ +class ZeroHashTree : public HashTree { + /** Merkle hash tree: root */ + Sha1Hash root_hash_; + /** Merkle hash tree: peak hashes */ + //Sha1Hash peak_hashes_[64]; // now read from disk live too + bin_t peaks_[64]; + int peak_count_; + /** File descriptor to put hashes to */ + int hash_fd_; + /** Base size, as derived from the hashes. */ + uint64_t size_; + uint64_t sizec_; + /** Part of the tree currently checked. */ + uint64_t complete_; + uint64_t completec_; + + // CHUNKSIZE + /** Arno: configurable fixed chunk size in bytes */ + uint32_t chunk_size_; + + //MULTIFILE + Storage * storage_; + +protected: + + bool RecoverPeakHashes(); + Sha1Hash DeriveRoot(); + bool OfferPeakHash (bin_t pos, const Sha1Hash& hash); + +public: + + ZeroHashTree (Storage *storage, const Sha1Hash& root=Sha1Hash::ZERO, uint32_t chunk_size=SWIFT_DEFAULT_CHUNK_SIZE, + std::string hash_filename=NULL, std::string binmap_filename=NULL); + + // Arno, 2012-01-03: Hack to quickly learn root hash from a checkpoint + ZeroHashTree (bool dummy, std::string binmap_filename); + + bool OfferHash (bin_t pos, const Sha1Hash& hash); + bool OfferData (bin_t bin, const char* data, size_t length); + /** For live streaming. Not implemented yet. 
*/ + int AppendData (char* data, int length) ; + + int peak_count () const { return peak_count_; } + bin_t peak (int i) const { return peaks_[i]; } + const Sha1Hash& peak_hash (int i) const; + bin_t peak_for (bin_t pos) const; + const Sha1Hash& hash (bin_t pos) const; + const Sha1Hash& root_hash () const { return root_hash_; } + uint64_t size () const { return size_; } + uint64_t size_in_chunks () const { return sizec_; } + uint64_t complete () const { return complete_; } + uint64_t chunks_complete () const { return completec_; } + uint64_t seq_complete(int64_t offset); // SEEK + bool is_complete () { return size_ && complete_==size_; } + binmap_t * ack_out () { return NULL; } + uint32_t chunk_size() { return chunk_size_; } // CHUNKSIZE + ~ZeroHashTree (); + + // for transfertest.cpp + Storage * get_storage() { return storage_; } + void set_size(uint64_t size) { size_ = size; } + + //NETWVSHASH + bool get_check_netwvshash() { return true; } + + int TESTGetFD() { return hash_fd_; } +}; + + + + + + +} + +#endif diff -Nru tribler-6.2.0/Tribler/SwiftEngine/httpgw.cpp tribler-6.2.0/Tribler/SwiftEngine/httpgw.cpp --- tribler-6.2.0/Tribler/SwiftEngine/httpgw.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/httpgw.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,877 @@ +/* + * httpgw.cpp + * gateway for serving swift content via HTTP, libevent2 based. + * + * Created by Victor Grishchenko, Arno Bakker + * Copyright 2010-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +#include "swift.h" +#include +#include +#include + +using namespace swift; + +#define HTTPGW_PROGRESS_STEP_BYTES (256*1024) +// For best performance make bigger than HTTPGW_PROGRESS_STEP_BYTES +#define HTTPGW_MAX_WRITE_BYTES (512*1024) + +// Report swift download progress every 2^layer * chunksize bytes (so 0 = report every chunk) +#define HTTPGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER 0 + +// Arno: libevent2 has a liberal understanding of socket writability, +// that may result in tens of megabytes being cached in memory. Limit that +// amount at app level. +#define HTTPGW_MAX_PREBUF_BYTES (2*1024*1024) + +#define HTTPGW_MAX_REQUEST 128 + +struct http_gw_t { + int id; + uint64_t offset; + uint64_t tosend; + int transfer; + uint64_t lastcpoffset; // last offset at which we checkpointed + struct evhttp_request *sinkevreq; + struct event *sinkevwrite; + std::string mfspecname; // (optional) name from multi-file spec + std::string xcontentdur; + bool closing; + uint64_t startoff; // MULTIFILE: starting offset in content range of desired file + uint64_t endoff; // MULTIFILE: ending offset (careful, for an e.g. 
100 byte interval this is 99)
+    int      replycode;       // HTTP status code
+    int64_t  rangefirst;      // First byte wanted in HTTP GET Range request or -1
+    int64_t  rangelast;       // Last byte wanted in HTTP GET Range request (also 99 for 100 byte interval) or -1
+
+} http_requests[HTTPGW_MAX_REQUEST];
+
+
+int http_gw_reqs_open = 0;
+int http_gw_reqs_count = 0;
+struct evhttp *http_gw_event;
+struct evhttp_bound_socket *http_gw_handle;
+uint32_t httpgw_chunk_size = SWIFT_DEFAULT_CHUNK_SIZE; // Copy of cmdline param
+double *httpgw_maxspeed = NULL;                        // Copy of cmdline param
+
+// Arno, 2010-11-30: for SwarmPlayer 3000 backend autoquit when no HTTP req is received
+bool sawhttpconn = false;
+
+
+http_gw_t *HttpGwFindRequestByEV(struct evhttp_request *evreq) {
+    for (int httpc=0; httpc<http_gw_reqs_open; httpc++) {
+        if (http_requests[httpc].sinkevreq == evreq)
+            return &http_requests[httpc];
+    }
+    return NULL;
+}
+
+http_gw_t *HttpGwFindRequestByTransfer(int transfer) {
+    for (int httpc=0; httpc<http_gw_reqs_open; httpc++) {
+        if (http_requests[httpc].transfer == transfer)
+            return &http_requests[httpc];
+    }
+    return NULL;
+}
+
+http_gw_t *HttpGwFindRequestByRoothash(Sha1Hash &wanthash) {
+    for (int httpc=0; httpc<http_gw_reqs_open; httpc++) {
+        http_gw_t *req = &http_requests[httpc];
+        FileTransfer *ft = FileTransfer::file(req->transfer);
+        if (ft == NULL)
+            continue;
+        if (ft->root_hash() == wanthash)
+            return req;
+    }
+    return NULL;
+}
+
+
+void HttpGwCloseConnection (http_gw_t* req) {
+    dprintf("%s @%i cleanup http request evreq %p\n",tintstr(),req->id, req->sinkevreq);
+
+    struct evhttp_connection *evconn = evhttp_request_get_connection(req->sinkevreq);
+
+    req->closing = true;
+    if (req->offset > req->startoff)
+        evhttp_send_reply_end(req->sinkevreq); //WARNING: calls HttpGwLibeventCloseCallback
+    else
+        evhttp_request_free(req->sinkevreq);
+
+    req->sinkevreq = NULL;
+
+    // Note: for some reason calling conn_free here prevents the last chunks
+    // from being sent to the requester?
+    //    evhttp_connection_free(evconn); // WARNING: calls HttpGwLibeventCloseCallback
+
+    // Current close policy: checkpoint and DO NOT close transfer, keep on
+    // seeding forever. More sophisticated clients should use CMD GW and issue
+    // REMOVE.
+    swift::Checkpoint(req->transfer);
+
+    // Arno, 2012-05-04: MULTIFILE: once the selected file has been downloaded
+    // swift will download all content that comes afterwards too. Poor man's
+    // fix to avoid this: seek to end of content when HTTP done. VOD PiecePicker
+    // will then not download anything. Better would be to seek to end when
+    // the swift partial download is done, not when the serving via HTTP is.
+    //
+    swift::Seek(req->transfer,swift::Size(req->transfer)-1,SEEK_CUR);
+
+    //swift::Close(req->transfer);
+
+    *req = http_requests[--http_gw_reqs_open];
+}
+
+
+void HttpGwLibeventCloseCallback(struct evhttp_connection *evconn, void *evreqvoid) {
+    // Called by libevent on connection close, either when the other side closes
+    // or when we close (because we call evhttp_connection_free()). To prevent
+    // doing cleanup twice, we see if there is a http_gw_req that has the
+    // passed evreqvoid as sinkevreq. If so, clean up; if not, ignore.
+    // I.e. evhttp_request * is used as a sort of request ID.
+    //
+    fprintf(stderr,"HttpGwLibeventCloseCallback: called\n");
+    http_gw_t * req = HttpGwFindRequestByEV((struct evhttp_request *)evreqvoid);
+    if (req == NULL)
+        dprintf("%s http conn already closed\n",tintstr() );
+    else {
+        dprintf("%s T%i http close conn\n",tintstr(),req->transfer);
+        if (req->closing)
+            dprintf("%s http conn already closing\n",tintstr() );
+        else
+        {
+            dprintf("%s http conn being closed\n",tintstr() );
+            HttpGwCloseConnection(req);
+        }
+    }
+}
+
+
+
+
+void HttpGwMayWriteCallback (int transfer) {
+    // Write some data to client
+
+    http_gw_t* req = HttpGwFindRequestByTransfer(transfer);
+    if (req == NULL) {
+        print_error("httpgw: MayWrite: can't find req for transfer");
+        return;
+    }
+
+    // SEEKTODO: stop downloading when file complete
+
+    // Update endoff as size becomes less fuzzy
+    if (swift::Size(req->transfer) < req->endoff)
+        req->endoff = swift::Size(req->transfer)-1;
+
+    uint64_t relcomplete = swift::SeqComplete(req->transfer,req->startoff);
+    if (relcomplete > req->endoff)
+        relcomplete = req->endoff+1-req->startoff;
+    int64_t avail = relcomplete-(req->offset-req->startoff);
+
+    dprintf("%s @%i http write: avail %lld relcomp %llu offset %llu start %llu end %llu tosend %llu\n",tintstr(),req->id, avail, relcomplete, req->offset, req->startoff, req->endoff, req->tosend );
+    //fprintf(stderr,"offset %lli seqcomp %lli comp %lli\n",req->offset, complete, swift::Complete(req->transfer) );
+
+    struct evhttp_connection *evconn = evhttp_request_get_connection(req->sinkevreq);
+    struct bufferevent* buffy = evhttp_connection_get_bufferevent(evconn);
+    struct evbuffer *outbuf = bufferevent_get_output(buffy);
+
+    //fprintf(stderr,"httpgw: MayWrite avail %i bufev outbuf %i\n",complete-req->offset, evbuffer_get_length(outbuf) );
+
+    if (avail > 0 && evbuffer_get_length(outbuf) < HTTPGW_MAX_PREBUF_BYTES)
+    {
+        // Received more than I pushed to player, send data
+        char buf[HTTPGW_MAX_WRITE_BYTES];
+// Arno, 2010-08-16, TODO
+#ifdef WIN32
+        uint64_t tosend = min(HTTPGW_MAX_WRITE_BYTES,avail);
+#else
+        uint64_t tosend = std::min((int64_t)HTTPGW_MAX_WRITE_BYTES,avail);
+#endif
+        ssize_t rd = swift::Read(transfer,buf,tosend,req->offset); // hope it is cached
+        if (rd<0) {
+            print_error("httpgw: MayWrite: error pread");
+            HttpGwCloseConnection(req);
+            return;
+        }
+
+        // Construct evbuffer and send incrementally
+        struct evbuffer *evb = evbuffer_new();
+        int ret = evbuffer_add(evb,buf,rd);
+        if (ret < 0) {
+            print_error("httpgw: MayWrite: error evbuffer_add");
+            evbuffer_free(evb);
+            HttpGwCloseConnection(req);
+            return;
+        }
+
+        if (req->offset == req->startoff) {
+            // Not just for chunked encoding, see libevent2's http.c
+            evhttp_send_reply_start(req->sinkevreq, req->replycode, "OK");
+        }
+
+        evhttp_send_reply_chunk(req->sinkevreq, evb);
+        evbuffer_free(evb);
+
+        int wn = rd;
+        dprintf("%s @%i http sent data %ib\n",tintstr(),req->id,(int)wn);
+
+        req->offset += wn;
+        req->tosend -= wn;
+
+        // PPPLUG
+        swift::Seek(req->transfer,req->offset,SEEK_CUR);
+    }
+
+    // Arno, 2010-11-30: tosend is set to fuzzy len, so need extra/other test.
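    // Editorial aside (illustrative, not from the original source): for a
    // plain GET of a 100-byte file, startoff=0, endoff=99 and tosend=100,
    // so after the last write offset == 100 == endoff+1; the second clause
    // below catches the case where the early, fuzzy size estimate left
    // tosend nonzero.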
+
+    if (req->tosend==0 || req->offset == req->endoff+1) {
+        // done; wait for new HTTP request
+        dprintf("%s @%i done\n",tintstr(),req->id);
+        //fprintf(stderr,"httpgw: MayWrite: done, wait for buffer empty before send_end_reply\n" );
+
+        if (evbuffer_get_length(outbuf) == 0) {
+            //fprintf(stderr,"httpgw: MayWrite: done, buffer empty, end req\n" );
+            dprintf("%s http write: done, buffer empty, end req\n",tintstr() );
+            HttpGwCloseConnection(req);
+        }
+    }
+    else {
+        // wait for data
+        dprintf("%s @%i waiting for data\n",tintstr(),req->id);
+    }
+}
+
+void HttpGwLibeventMayWriteCallback(evutil_socket_t fd, short events, void *evreqvoid );
+
+void HttpGwSubscribeToWrite(http_gw_t * req) {
+    struct evhttp_connection *evconn = evhttp_request_get_connection(req->sinkevreq);
+    struct event_base *evbase = evhttp_connection_get_base(evconn);
+    struct bufferevent* evbufev = evhttp_connection_get_bufferevent(evconn);
+
+    if (req->sinkevwrite != NULL)
+        event_free(req->sinkevwrite); // FAXME: clean in CloseConn
+
+    req->sinkevwrite = event_new(evbase,bufferevent_getfd(evbufev),EV_WRITE,HttpGwLibeventMayWriteCallback,req->sinkevreq);
+    struct timeval t;
+    t.tv_sec = 10;
+    t.tv_usec = 0;
+    int ret = event_add(req->sinkevwrite,&t);
+    //fprintf(stderr,"httpgw: HttpGwSubscribeToWrite: added event\n");
+}
+
+
+void HttpGwLibeventMayWriteCallback(evutil_socket_t fd, short events, void *evreqvoid )
+{
+    //fprintf(stderr,"httpgw: MayWrite: %d events %d evreq is %p\n", fd, events, evreqvoid);
+
+    http_gw_t * req = HttpGwFindRequestByEV((struct evhttp_request *)evreqvoid);
+    if (req != NULL) {
+        //fprintf(stderr,"httpgw: MayWrite: %d events %d httpreq is %p\n", fd, events, req);
+        HttpGwMayWriteCallback(req->transfer);
+
+
+        // Arno, 2011-12-20: No autoreschedule, let HttpGwSwiftProgressCallback do that
+        //if (req->sinkevreq != NULL) // Conn closed
+        //    HttpGwSubscribeToWrite(req);
+
+        //fprintf(stderr,"GOTO WRITE %lli >= %lli\n", swift::Complete(req->transfer)+HTTPGW_MAX_WRITE_BYTES, swift::Size(req->transfer) );
+
+        if (swift::Complete(req->transfer)+HTTPGW_MAX_WRITE_BYTES >= swift::Size(req->transfer)) {
+
+            // We don't get a progress callback for the last chunk < chunk size, nor
+            // when all data is already on disk. In that case, just keep on
+            // subscribing to HTTP socket writability until all data is sent.
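            // Editorial aside (illustrative): e.g. when the whole file was
            // already on disk at request time, no download progress events
            // will ever fire; without this resubscription the reply would
            // stall after the first HTTPGW_MAX_WRITE_BYTES write.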
+ if (req->sinkevreq != NULL) // Conn closed + HttpGwSubscribeToWrite(req); + } + } +} + +void HttpGwSwiftProgressCallback (int transfer, bin_t bin) { + // Subsequent HTTPGW_PROGRESS_STEP_BYTES available + + dprintf("%s T%i http more progress\n",tintstr(),transfer); + http_gw_t* req = HttpGwFindRequestByTransfer(transfer); + if (req == NULL) + return; + + // Arno, 2011-12-20: We have new data to send, wait for HTTP socket writability + if (req->sinkevreq != NULL) { // Conn closed + HttpGwSubscribeToWrite(req); + } +} + + +bool HttpGwParseContentRangeHeader(http_gw_t *req,uint64_t filesize) +{ + struct evkeyvalq *reqheaders = evhttp_request_get_input_headers(req->sinkevreq); + struct evkeyvalq *repheaders = evhttp_request_get_output_headers(req->sinkevreq); + const char *contentrangecstr =evhttp_find_header(reqheaders,"Range"); + + if (contentrangecstr == NULL) { + req->rangefirst = -1; + req->rangelast = -1; + req->replycode = 200; + return true; + } + std::string range = contentrangecstr; + + // Handle RANGE query + bool bad = false; + int idx = range.find("="); + if (idx == std::string::npos) + return false; + std::string seek = range.substr(idx+1); + + dprintf("%s @%i http range request spec %s\n",tintstr(),req->id, seek.c_str() ); + + if (seek.find(",") != std::string::npos) { + // - Range header contains set, not supported at the moment + bad = true; + } else { + // Determine first and last bytes of requested range + idx = seek.find("-"); + + dprintf("%s @%i http range request idx %d\n", tintstr(),req->id, idx ); + + if (idx == std::string::npos) + return false; + if (idx == 0) { + // -444 format + req->rangefirst = -1; + } else { + std::istringstream(seek.substr(0,idx)) >> req->rangefirst; + } + + + dprintf("%s @%i http range request first %s %lld\n", tintstr(),req->id, seek.substr(0,idx).c_str(), req->rangefirst ); + + if (idx == seek.length()-1) + req->rangelast = -1; + else { + // 444- format + std::istringstream(seek.substr(idx+1)) >> req->rangelast; + } + + dprintf("%s @%i http range request last %s %lld\n", tintstr(),req->id, seek.substr(idx+1).c_str(), req->rangelast ); + + // Check sanity of range request + if (filesize == -1) { + // - No length (live) + bad = true; + } + else if (req->rangefirst == -1 && req->rangelast == -1) { + // - Invalid input + bad = true; + } + else if (req->rangefirst >= (int64_t)filesize) { + bad = true; + } + else if (req->rangelast >= (int64_t)filesize) { + if (req->rangefirst == -1) { + // If the entity is shorter than the specified + // suffix-length, the entire entity-body is used. 
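                // Editorial aside (illustrative, not from the original
                // source): "Range: bytes=-500" on a 100-byte file arrives
                // here as rangefirst=-1, rangelast=500; RFC 2616 sec 14.35.1
                // says a suffix longer than the entity selects the whole
                // entity, hence the clipping below.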
+ req->rangelast = filesize-1; + } + else + bad = true; + } + } + + if (bad) { + // Send 416 - Requested Range not satisfiable + std::ostringstream cross; + if (filesize == -1) + cross << "bytes */*"; + else + cross << "bytes */" << filesize; + evhttp_add_header(repheaders, "Content-Range", cross.str().c_str() ); + + evhttp_send_error(req->sinkevreq,416,"Malformed range specification"); + + dprintf("%s @%i http invalid range %lld-%lld\n",tintstr(),req->id,req->rangefirst,req->rangelast ); + return false; + } + + // Convert wildcards into actual values + if (req->rangefirst != -1 && req->rangelast == -1) { + // "100-" : byte 100 and further + req->rangelast = filesize - 1; + } + else if (req->rangefirst == -1 && req->rangelast != -1) { + // "-100" = last 100 bytes + req->rangefirst = filesize - req->rangelast; + req->rangelast = filesize - 1; + } + + // Generate header + std::ostringstream cross; + cross << "bytes " << req->rangefirst << "-" << req->rangelast << "/" << filesize; + evhttp_add_header(repheaders, "Content-Range", cross.str().c_str() ); + + // Reply is sent when content is avail + req->replycode = 206; + + dprintf("%s @%i http valid range %lld-%lld\n",tintstr(),req->id,req->rangefirst,req->rangelast ); + + return true; +} + + +void HttpGwFirstProgressCallback (int transfer, bin_t bin) { + // First chunk of data available + dprintf("%s T%i http first progress\n",tintstr(),transfer); + + // Need the first chunk + if (swift::SeqComplete(transfer) == 0) + { + dprintf("%s T%i first: not enough seqcomp\n",tintstr(),transfer ); + return; + } + + http_gw_t* req = HttpGwFindRequestByTransfer(transfer); + if (req == NULL) + { + dprintf("%s T%i first: req not found\n",tintstr(),transfer ); + return; + } + + // MULTIFILE + // Is storage ready? + FileTransfer *ft = FileTransfer::file(req->transfer); + if (ft == NULL) { + dprintf("%s T%i first: FileTransfer not found\n",tintstr(),transfer ); + evhttp_send_error(req->sinkevreq,500,"Internal error: Content not found although downloading it."); + return; + } + if (!ft->GetStorage()->IsReady()) + { + dprintf("%s T%i first: Storage not ready\n",tintstr(),transfer ); + return; // wait for some more data + } + + // Protection against spurious callback + if (req->tosend > 0) + { + dprintf("%s T%i first: already set tosend\n",tintstr(),transfer ); + return; + } + + // Good to go. 
Reconfigure callbacks + swift::RemoveProgressCallback(transfer,&HttpGwFirstProgressCallback); + int progresslayer = bytes2layer(HTTPGW_PROGRESS_STEP_BYTES,swift::ChunkSize(transfer)); + swift::AddProgressCallback(transfer,&HttpGwSwiftProgressCallback,progresslayer); + + // Send header of HTTP reply + uint64_t filesize = 0; + if (req->mfspecname != "") + { + // MULTIFILE + // Find out size of selected file + storage_files_t sfs = ft->GetStorage()->GetStorageFiles(); + storage_files_t::iterator iter; + bool found = false; + for (iter = sfs.begin(); iter < sfs.end(); iter++) + { + StorageFile *sf = *iter; + + dprintf("%s T%i first: mf: comp <%s> <%s>\n",tintstr(),transfer, sf->GetSpecPathName().c_str(), req->mfspecname.c_str() ); + + if (sf->GetSpecPathName() == req->mfspecname) + { + found = true; + req->startoff = sf->GetStart(); + req->endoff = sf->GetEnd(); + filesize = sf->GetSize(); + break; + } + } + if (!found) { + evhttp_send_error(req->sinkevreq,404,"Individual file not found in multi-file content."); + return; + } + } + else + { + // Single file + req->startoff = 0; + req->endoff = swift::Size(req->transfer)-1; + filesize = swift::Size(transfer); + } + + // Handle HTTP GET Range request, i.e. additional offset within content or file + if (!HttpGwParseContentRangeHeader(req,filesize)) + return; + + if (req->rangefirst != -1) + { + // Range request + req->startoff += req->rangefirst; + req->endoff = req->startoff + req->rangelast; + req->tosend = req->rangelast+1-req->rangefirst; + } + else + { + req->tosend = filesize; + } + req->offset = req->startoff; + + // SEEKTODO: concurrent requests to same resource + if (req->startoff != 0) + { + // Seek to multifile/range start + int ret = swift::Seek(req->transfer,req->startoff,SEEK_SET); + if (ret < 0) { + evhttp_send_error(req->sinkevreq,500,"Internal error: Cannot seek to file start in range request or multi-file content."); + return; + } + } + + // Convert size to string + std::ostringstream closs; + closs << req->tosend; + + struct evkeyvalq *reqheaders = evhttp_request_get_output_headers(req->sinkevreq); + //evhttp_add_header(reqheaders, "Connection", "keep-alive" ); + evhttp_add_header(reqheaders, "Connection", "close" ); + evhttp_add_header(reqheaders, "Content-Type", "video/ogg" ); + if (req->xcontentdur.length() > 0) + evhttp_add_header(reqheaders, "X-Content-Duration", req->xcontentdur.c_str() ); + evhttp_add_header(reqheaders, "Content-Length", closs.str().c_str() ); + //evhttp_add_header(reqheaders, "Accept-Ranges", "none" ); + + dprintf("%s @%i headers sent, size %lli\n",tintstr(),req->id,req->tosend); + + /* + * Arno, 2011-10-17: Swift ProgressCallbacks are only called when + * the data is downloaded, not when it is already on disk. So we need + * to handle the situation where all or part of the data is already + * on disk. 
Subscribing to writability of the socket works, + * but requires libevent2 >= 2.1 (or our backported version) + */ + HttpGwSubscribeToWrite(req); +} + + + + +bool swift::ParseURI(std::string uri,parseduri_t &map) +{ + // + // Format: tswift://tracker:port/roothash-in-hex/filename$chunksize@duration + // where the server part, filename, chunksize and duration may be optional + // + std::string scheme=""; + std::string server=""; + std::string path=""; + if (uri.substr(0,((std::string)SWIFT_URI_SCHEME).length()) == SWIFT_URI_SCHEME) + { + // scheme present + scheme = SWIFT_URI_SCHEME; + int sidx = uri.find("//"); + if (sidx != std::string::npos) + { + // server part present + int eidx = uri.find("/",sidx+2); + server = uri.substr(sidx+2,eidx-(sidx+2)); + path = uri.substr(eidx); + } + else + path = uri.substr(((std::string)SWIFT_URI_SCHEME).length()+1); + } + else + path = uri; + + + std::string hashstr=""; + std::string filename=""; + std::string modstr=""; + + int sidx = path.find("/",1); + int midx = path.find("$",1); + if (midx == std::string::npos) + midx = path.find("@",1); + if (sidx == std::string::npos && midx == std::string::npos) { + // No multi-file, no modifiers + hashstr = path.substr(1,path.length()); + } else if (sidx != std::string::npos && midx == std::string::npos) { + // multi-file, no modifiers + hashstr = path.substr(1,sidx-1); + filename = path.substr(sidx+1,path.length()-sidx); + } else if (sidx == std::string::npos && midx != std::string::npos) { + // No multi-file, modifiers + hashstr = path.substr(1,midx-1); + modstr = path.substr(midx,path.length()-midx); + } else { + // multi-file, modifiers + hashstr = path.substr(1,sidx-1); + filename = path.substr(sidx+1,midx-(sidx+1)); + modstr = path.substr(midx,path.length()-midx); + } + + + std::string durstr=""; + std::string chunkstr=""; + sidx = modstr.find("@"); + if (sidx == std::string::npos) + { + durstr = ""; + if (modstr.length() > 1) + chunkstr = modstr.substr(1); + } + else + { + if (sidx == 0) + { + // Only durstr + chunkstr = ""; + durstr = modstr.substr(sidx+1); + } + else + { + chunkstr = modstr.substr(1,sidx-1); + durstr = modstr.substr(sidx+1); + } + } + + map.insert(stringpair("scheme",scheme)); + map.insert(stringpair("server",server)); + map.insert(stringpair("path",path)); + // Derivatives + map.insert(stringpair("hash",hashstr)); + map.insert(stringpair("filename",filename)); + map.insert(stringpair("chunksizestr",chunkstr)); + map.insert(stringpair("durationstr",durstr)); + + return true; +} + + + +void HttpGwNewRequestCallback (struct evhttp_request *evreq, void *arg) { + + dprintf("%s @%i http new request\n",tintstr(),http_gw_reqs_count+1); + + if (evhttp_request_get_command(evreq) != EVHTTP_REQ_GET) { + return; + } + sawhttpconn = true; + + // 1. Get URI + // Format: /roothash[/multi-file][@duration] + // ARNOTODO: allow for chunk size to be set via URL? + std::string uri = evhttp_request_get_uri(evreq); + + struct evkeyvalq *reqheaders = evhttp_request_get_input_headers(evreq); + + // Arno, 2012-04-19: libevent adds "Connection: keep-alive" to reply headers + // if there is one in the request headers, even if a different Connection + // reply header has already been set. And we don't do persistent conns here. + // + evhttp_remove_header(reqheaders,"Connection"); // Remove Connection: keep-alive + + // 2. 
Parse URI + std::string hashstr = "", mfstr="", durstr=""; + + if (uri.length() == 1) { + evhttp_send_error(evreq,400,"Path must be root hash in hex, 40 bytes."); + return; + } + + parseduri_t puri; + if (!swift::ParseURI(uri,puri)) + { + evhttp_send_error(evreq,400,"Path format is /roothash-in-hex/filename$chunksize@duration"); + return; + } + hashstr = puri["hash"]; + mfstr = puri["filename"]; + durstr = puri["durationstr"]; + + dprintf("%s @%i demands %s %s %s\n",tintstr(),http_gw_reqs_open+1,hashstr.c_str(),mfstr.c_str(),durstr.c_str() ); + + + // 3. Check for concurrent requests, currently not supported. + Sha1Hash root_hash = Sha1Hash(true,hashstr.c_str()); + http_gw_t *existreq = HttpGwFindRequestByRoothash(root_hash); + if (existreq != NULL) + { + evhttp_send_error(evreq,409,"Conflict: server does not support concurrent requests to same swarm."); + return; + } + + // 4. Initiate transfer + int transfer = swift::Find(root_hash); + if (transfer==-1) { + transfer = swift::Open(hashstr,root_hash,Address(),false,true,httpgw_chunk_size); + dprintf("%s @%i trying to HTTP GET swarm %s that has not been STARTed\n",tintstr(),http_gw_reqs_open+1,hashstr.c_str()); + + // Arno, 2011-12-20: Only on new transfers, otherwise assume that CMD GW + // controls speed + FileTransfer *ft = FileTransfer::file(transfer); + ft->SetMaxSpeed(DDIR_DOWNLOAD,httpgw_maxspeed[DDIR_DOWNLOAD]); + ft->SetMaxSpeed(DDIR_UPLOAD,httpgw_maxspeed[DDIR_UPLOAD]); + } + + // 5. Record request + http_gw_t* req = http_requests + http_gw_reqs_open++; + req->id = ++http_gw_reqs_count; + req->sinkevreq = evreq; + + // Replace % escaped chars to 8-bit values as part of the UTF-8 encoding + char *decodedmf = evhttp_uridecode(mfstr.c_str(), 0, NULL); + req->mfspecname = std::string(decodedmf); + free(decodedmf); + req->xcontentdur = durstr; + req->offset = 0; + req->tosend = 0; + req->transfer = transfer; + req->lastcpoffset = 0; + req->sinkevwrite = NULL; + req->closing = false; + req->startoff = 0; + req->endoff = 0; + + fprintf(stderr,"httpgw: Opened %s\n",hashstr.c_str()); + + // We need delayed replying, so take ownership. + // See http://code.google.com/p/libevent-longpolling/source/browse/trunk/main.c + // Careful: libevent docs are broken. It doesn't say that evhttp_send_reply_send + // actually calls evhttp_request_free, i.e. releases ownership for you. + // + evhttp_request_own(evreq); + + // Register callback for connection close + struct evhttp_connection *evconn = evhttp_request_get_connection(req->sinkevreq); + evhttp_connection_set_closecb(evconn,HttpGwLibeventCloseCallback,req->sinkevreq); + + struct bufferevent* evbufev = evhttp_connection_get_bufferevent(evconn); + int sockfd = bufferevent_getfd(evbufev); + + if (swift::Size(transfer)) { + HttpGwFirstProgressCallback(transfer,bin_t(0,0)); + } else { + swift::AddProgressCallback(transfer,&HttpGwFirstProgressCallback,HTTPGW_FIRST_PROGRESS_BYTE_INTERVAL_AS_LAYER); + } +} + + +bool InstallHTTPGateway (struct event_base *evbase,Address bindaddr, uint32_t chunk_size, double *maxspeed) { + // Arno, 2011-10-04: From libevent's http-server.c example + + /* Create a new evhttp object to handle requests. 
*/
+    http_gw_event = evhttp_new(evbase);
+    if (!http_gw_event) {
+        print_error("httpgw: evhttp_new failed");
+        return false;
+    }
+
+    /* Install callback for all requests */
+    evhttp_set_gencb(http_gw_event, HttpGwNewRequestCallback, NULL);
+
+    /* Now we tell the evhttp what port to listen on */
+    http_gw_handle = evhttp_bind_socket_with_handle(http_gw_event, bindaddr.ipv4str(), bindaddr.port());
+    if (!http_gw_handle) {
+        print_error("httpgw: evhttp_bind_socket_with_handle failed");
+        return false;
+    }
+
+    httpgw_chunk_size = chunk_size;
+    httpgw_maxspeed = maxspeed;
+    return true;
+}
+
+
+uint64_t lastoffset=0;
+uint64_t lastcomplete=0;
+tint test_time = 0;
+
+/** For SwarmPlayer 3000's HTTP failover. We should exit if swift isn't
+ *  delivering, such that the extension can start talking HTTP to the backup.
+ */
+bool HTTPIsSending()
+{
+    if (http_gw_reqs_open > 0)
+    {
+        FileTransfer *ft = FileTransfer::file(http_requests[http_gw_reqs_open-1].transfer);
+        if (ft != NULL) {
+            fprintf(stderr,"httpgw: upload %lf\n",ft->GetCurrentSpeed(DDIR_UPLOAD)/1024.0);
+            fprintf(stderr,"httpgw: dwload %lf\n",ft->GetCurrentSpeed(DDIR_DOWNLOAD)/1024.0);
+            //fprintf(stderr,"httpgw: seqcmp %llu\n", swift::SeqComplete(http_requests[http_gw_reqs_open-1].transfer));
+        }
+    }
+    return true;
+
+    // TODO: reactivate when used in SwiftTransport / SwarmPlayer 3000.
+
+    if (test_time == 0)
+    {
+        test_time = NOW;
+        return true;
+    }
+
+    if (NOW > test_time+5*1000*1000)
+    {
+        fprintf(stderr,"http alive: httpc count is %d\n", http_gw_reqs_open );
+
+        if (http_gw_reqs_open == 0 && !sawhttpconn)
+        {
+            fprintf(stderr,"http alive: no HTTP activity ever, quitting\n");
+            return false;
+        }
+        else
+            sawhttpconn = true;
+
+        for (int httpc=0; httpc<http_gw_reqs_open; httpc++) {
+            /*
+            if (http_requests[httpc].offset >= 100000)
+            {
+                fprintf(stderr,"http alive: 100K sent, quit\n");
+                return false;
+            }
+            else
+            {
+                fprintf(stderr,"http alive: sent %lli\n", http_requests[httpc].offset );
+                return true;
+            }
+            */
+
+            // If
+            // a. don't know anything about content (i.e., size still 0) or
+            // b. not sending to HTTP client and not at end, and
+            //    not downloading from P2P and not at end
+            // then stop.
+            if ( swift::Size(http_requests[httpc].transfer) == 0 || \
+                 (http_requests[httpc].offset == lastoffset &&
+                  http_requests[httpc].offset != swift::Size(http_requests[httpc].transfer) && \
+                  swift::Complete(http_requests[httpc].transfer) == lastcomplete && \
+                  swift::Complete(http_requests[httpc].transfer) != swift::Size(http_requests[httpc].transfer)))
+            {
+                fprintf(stderr,"http alive: no progress, quitting\n");
+                //getchar();
+                return false;
+            }
+
+            /*
+            if (http_requests[httpc].offset == swift::Size(http_requests[httpc].transfer))
+            {
+                // TODO: seed for a while.
+ fprintf(stderr,"http alive: data delivered to client, quiting\n"); + return false; + } + */ + + lastoffset = http_requests[httpc].offset; + lastcomplete = swift::Complete(http_requests[httpc].transfer); + } + test_time = NOW; + + return true; + } + else + return true; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/bash_profile tribler-6.2.0/Tribler/SwiftEngine/mfold/bash_profile --- tribler-6.2.0/Tribler/SwiftEngine/mfold/bash_profile 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/bash_profile 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,4 @@ +export PATH=$HOME/bin:$PATH +export CPPPATH=$CPPPATH:$HOME/include +export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/lib +export LIBPATH=$LD_LIBRARY_PATH diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/build.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/build.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/build.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/build.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,82 @@ +#!/bin/bash + +if [ -e ~/.building_swift ]; then + exit 0 +fi + +touch ~/.building_swift + +if ! which git || ! which g++ || ! which scons || ! which make ; then + sudo apt-get -y install make g++ scons git-core || exit 1 +fi + +if [ ! -e ~/include/event.h ]; then + echo installing libevent + mkdir tmp + cd tmp || exit 2 + wget -c http://monkey.org/~provos/libevent-2.0.7-rc.tar.gz || exit 3 + rm -rf libevent-2.0.7-rc + tar -xzf libevent-2.0.7-rc.tar.gz || exit 4 + cd libevent-2.0.7-rc/ || exit 5 + ./configure --prefix=$HOME || exit 6 + make || exit 7 + make install || exit 8 + cd ~/ + echo done libevent +fi + +if [ ! -e ~/include/gtest/gtest.h ]; then + echo installing gtest + mkdir tmp + cd tmp || exit 9 + wget -c http://googletest.googlecode.com/files/gtest-1.4.0.tar.bz2 || exit 10 + rm -rf gtest-1.4.0 + tar -xjf gtest-1.4.0.tar.bz2 || exit 11 + cd gtest-1.4.0 || exit 12 + ./configure --prefix=$HOME || exit 13 + make || exit 14 + make install || exit 15 + cd ~/ + echo done gtest +fi + +#if ! which pcregrep ; then +# echo installing pcregrep +# mkdir tmp +# cd tmp +# wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.01.tar.gz || exit 5 +# tar -xzf pcre-8.01.tar.gz +# cd pcre-8.01 +# ./configure --prefix=$HOME || exit 6 +# make -j4 || exit 7 +# make install || exit 8 +# echo done pcregrep +#fi + +if [ ! -e swift ]; then + echo clone the repo + git clone $ORIGIN || exit 16 +fi +cd swift +echo switching the branch +git checkout $BRANCH || exit 17 +echo pulling updates +git pull origin $BRANCH:$BRANCH || exit 18 + +echo building +INCL=~/include LIB=~/lib +CPPPATH=$INCL LIBPATH=$LIB scons -j4 || exit 19 +echo testing +LD_LIBRARY_PATH=$LIB tests/connecttest || exit 20 + +# TODO: one method +mv bingrep.cpp ext/ +if [ ! -e bin ]; then mkdir bin; fi +g++ -I. -I$INCL *.cpp ext/seq_picker.cpp -pg -o bin/swift-pg -L$LIB -levent & +g++ -I. -I$INCL *.cpp ext/seq_picker.cpp -g -o bin/swift-dbg -L$LIB -levent & +g++ -I. 
-I$INCL *.cpp ext/seq_picker.cpp -O2 -o bin/swift-o2 -L$LIB -levent & +wait + +rm ~/.building_swift + +echo done diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/clean.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/clean.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/clean.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/clean.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,10 @@ +#if [ $EMIF ]; then +# sudo tc qdisc del dev $EMIF ingress +# sudo tc qdisc del dev ifb0 root +#fi +#sudo iptables -F & +cd swift +rm -rf *chunk core *harvest ~/.building_swift ~/.dohrv_copying +killall swift-o2 +killall swift-dbg +echo DONE diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,3 @@ +#!/bin/bash + +killall -q leecher || true diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.node300.das2.ewi.tudelft.nl.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.node300.das2.ewi.tudelft.nl.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.node300.das2.ewi.tudelft.nl.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/cleanup.node300.das2.ewi.tudelft.nl.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1 @@ +killall seeder || true diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/compile.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/compile.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/compile.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/compile.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,20 @@ +if [ -e ~/.building_swift ]; then + exit 0 +fi + +touch ~/.building_swift + +cd swift || exit 1 +if [ ! -d bin ]; then mkdir bin; fi +git pull origin $BRANCH:$BRANCH || exit 2 +rm bin/swift-pg bin/swift-o3 bin/swift-dbg + +g++ -I. *.cpp ext/seq_picker.cpp -pg -o bin/swift-pg & +g++ -I. *.cpp ext/seq_picker.cpp -g -o bin/swift-dbg & +g++ -I. *.cpp ext/seq_picker.cpp -O3 -o bin/swift-o3 & +wait +if [ ! -e bin/swift-pg ]; then exit 4; fi +if [ ! -e bin/swift-dbg ]; then exit 5; fi +if [ ! 
-e bin/swift-o3 ]; then exit 6; fi
+
+rm ~/.building_swift
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/das2.txt tribler-6.2.0/Tribler/SwiftEngine/mfold/das2.txt
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/das2.txt 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/das2.txt 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,27 @@
+130.161.211.200 node300
+130.161.211.201 node301
+130.161.211.202 node302
+130.161.211.203 node303
+130.161.211.204 node304
+130.161.211.205 node305
+130.161.211.206 node306
+130.161.211.208 node308
+130.161.211.209 node309
+130.161.211.210 node310
+130.161.211.212 node312
+130.161.211.213 node313
+130.161.211.214 node314
+130.161.211.215 node315
+130.161.211.217 node317
+130.161.211.219 node319
+130.161.211.220 node320
+130.161.211.222 node322
+130.161.211.223 node323
+130.161.211.224 node324
+130.161.211.225 node325
+130.161.211.226 node326
+130.161.211.227 node327
+130.161.211.228 node328
+130.161.211.229 node329
+130.161.211.230 node330
+130.161.211.231 node331
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/do-harvest.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/do-harvest.sh
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/do-harvest.sh 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/do-harvest.sh 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,25 @@
+#!/bin/bash
+# This script executes a chain of commands
+# on all the member servers, in parallel.
+# Commands are defined in .sh files (see
+# docmd.sh); all failed executions are
+# put to the FAILURES file
+rm -f FAILURES
+
+if [ -z "$SERVERS" ]; then
+    SERVERS="das2.txt"
+fi
+HOSTS=`cat $SERVERS | awk '{print $1}'`
+
+for srv in $HOSTS; do
+    ( for cmd in $@; do
+        if ! ./docmd.sh $srv $cmd; then
+            echo $srv >> FAILURES
+            echo $srv FAILED
+            break
+        fi
+    done ) &
+done
+
+wait
+echo DONE
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/doall tribler-6.2.0/Tribler/SwiftEngine/mfold/doall
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/doall 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/doall 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,34 @@
+#!/bin/bash
+# This script executes a chain of commands
+# on all the member servers, in parallel.
+# Commands are defined in .sh files (see
+# docmd.sh); all failed executions are
+# put to the FAILURES file
+rm -f FAILURES
+if [ ! -d logs ]; then
+    mkdir logs
+fi
+
+if [ -z "$SERVERS" ]; then
+    SERVERS="servers.txt"
+fi
+
+
+# Line format in $SERVERS: <server>:<port>
+for srvstr in `grep -v '^#' $SERVERS`; do
+    (
+    srv=${srvstr%:*}
+    port=${srvstr#*:}
+    if [[ $port && $srv == $port ]]; then
+        port=
+    fi
+    if ! ./docmd $srv $1 $port; then
+        echo $srv >> FAILURES
+        echo $srv FAILED
+        break
+    fi
+    ) &
+done
+
+wait
+echo DONE
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/docmd tribler-6.2.0/Tribler/SwiftEngine/mfold/docmd
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/docmd 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/docmd 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+HOST=$1
+CMD=$2
+PORT=$3
+ENV=env.default.sh
+
+if [ -e env.$HOST.sh ]; then
+    ENV="$ENV env.$HOST.sh"
+fi
+
+if [ -e $CMD.$HOST.sh ] ; then
+    SHSC=$CMD.$HOST.sh ;
+else
+    SHSC=$CMD.default.sh ;
+fi
+
+ENVSTR="HOST=$HOST"
+
+if [ $PORT ]; then
+    ENVSTR=$ENVSTR"; export SWFTPORT=$PORT"
+fi
+
+if [ ! -d logs ]; then mkdir logs; fi
+if [ !
-e $SHSC ]; then
+    echo $HOST $CMD EMPTY
+    exit 0
+fi
+
+if ( (cat $ENV; echo $ENVSTR; cat $SHSC) | ssh -T $HOST ) > \
+    logs/$HOST.$CMD.out 2> logs/$HOST.$CMD.err; then
+    echo $HOST $CMD OK
+    exit 0
+else
+    echo $HOST $CMD FAIL
+    cat $SHSC
+    cat logs/$HOST.$CMD.out logs/$HOST.$CMD.err
+    exit 1
+fi
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/dohrv tribler-6.2.0/Tribler/SwiftEngine/mfold/dohrv
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/dohrv 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/dohrv 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,117 @@
+#!/bin/bash
+# The script downloads logs in parallel,
+# feeds them into fifos; sort takes logs
+# from fifos, merges and gzips them;
+# the result is put into harvest/
+#
+if [ ! $SERVERS ]; then
+    export SERVERS="servers.txt"
+fi
+
+. env.default.sh
+
+if [ ! $TMPDIR ]; then
+    TMPDIR=/tmp
+fi
+
+if [ ! $MAXHARVEST ]; then
+    MAXHARVEST=100
+fi
+
+mv harvest .hrv-old
+rm -rf .hrv-old &
+mkdir harvest
+
+i=0
+j=1
+for sstr in `grep -v '#' $SERVERS`; do
+    s=${sstr%:*}
+    mkfifo harvest/$s-$j.fifo
+    # yas, yes, yes
+    (
+    if ssh $s \
+        "if [ -e ~/.dohrv_copying ]; then exit 1; \
+        else touch ~/.dohrv_copying; fi" ; then
+        scp ~/.ssh/config $s:.ssh/config > /dev/null
+        scp $SERVERS $s:swift/mfold/servers.txt > /dev/null
+        ssh $s "rm -f ~/.dohrv_copying"
+    fi
+    ) &
+    let i++
+    if [ $i == $MAXHARVEST ]; then
+        wait
+        i=0
+        mkfifo harvest/swpart$j.fifo
+        let j++
+    fi
+done
+if [[ $i>0 ]]; then
+    wait
+    mkfifo harvest/swpart$j.fifo
+fi
+
+echo 'Done making fifos and copying configs.'
+
+i=0
+j=1
+for sstr in `grep -v '#' $SERVERS`; do
+    s=${sstr%:*}
+    (
+    if ssh $s \
+        "cd swift/ && \
+        rm -rf $s-harvest && mkdir $s-harvest && \
+        ( zcat $s-lout.gz | ./mfold/logparse $s | gzip )" \
+        | gunzip > harvest/$s-$j.fifo ; then
+
+        ssh $s "cd swift/; tar cz $s-harvest" | tar xz
+        mv $s-harvest/* harvest/
+        rmdir $s-harvest
+        echo $s harvest OK
+
+    else
+        echo $s harvest FAIL
+    fi
+    ) &
+    let i++
+    if [ $i == $MAXHARVEST ]; then
+        # Ensure your version of sort is recent enough
+        # batch-size is critical for performance
+        LC_ALL=C sort -m -s -T $TMPDIR --batch-size=64 --compress-program=gzip \
+            harvest/*-$j.fifo | gzip > harvest/swpart$j.log.gz &
+        wait
+        i=0
+        let j++
+    fi
+done
+if [[ $i>0 ]]; then
+    LC_ALL=C sort -m -s -T $TMPDIR --batch-size=64 --compress-program=gzip \
+        harvest/*-$j.fifo | gzip > harvest/swpart$j.log.gz &
+    wait
+    let j++
+fi
+
+echo 'Done sorting of swarm parts.'
+
+if [[ $j>2 ]]; then
+    for (( i=1; i<j; i++ )); do
+        (gunzip < harvest/swpart$i.log.gz > harvest/swpart$i.fifo) &
+    done
+    LC_ALL=C sort -m -s -T $TMPDIR --batch-size=64 --compress-program=gzip \
+        harvest/swpart*.fifo | gzip > harvest/swarm.log.gz &
+    wait
+else
+    mv harvest/swpart1.log.gz harvest/swarm.log.gz
+fi
+
+echo 'Done sorting of whole swarm.'
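# Editorial aside (illustrative, not part of the original script): the code
# above is a two-stage external merge. Stage 1 feeds each host's parsed log
# through a fifo and "sort -m" merges up to $MAXHARVEST of them into one
# sorted swpart$j.log.gz; stage 2 merges the parts into harvest/swarm.log.gz.
# Since every input is already sorted by its leading timestamp, "sort -m"
# only interleaves lines, so the swarm log is built in a single pass, e.g.:
#
#     mkfifo a.fifo b.fifo
#     (gunzip < a.log.gz > a.fifo) &
#     (gunzip < b.log.gz > b.fifo) &
#     LC_ALL=C sort -m a.fifo b.fifo | gzip > merged.log.gz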
+ +rm harvest/*.fifo +rm harvest/swpart*.log.gz +./loggraphs +./logreport > harvest/index.html +#./logdistr + +cp report.css harvest +# scp -rq harvest mfold.libswift.org:/storage/mfold-granary/`date +%d%b_%H:%M`_`whoami` & + +echo DONE diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/dotop tribler-6.2.0/Tribler/SwiftEngine/mfold/dotop --- tribler-6.2.0/Tribler/SwiftEngine/mfold/dotop 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/dotop 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,9 @@ +#!/bin/bash + +while true; do + rm logs/*status.out + ( ./doall status > /dev/null ) & + wait + clear + cat logs/*status.out | sort +done diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/env.1mbit.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/env.1mbit.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/env.1mbit.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/env.1mbit.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,4 @@ +EMIF=eth0 +EMBW=1mbit +EMDELAY=50ms + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/env.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/env.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/env.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/env.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,19 @@ +# This script sets up shared environment variables +# at the servers and during harvesting +export SEEDER=130.161.211.198 +# SEEDERPORT must match with the port number on seeder line in +# $SERVERS +export SEEDERPORT=10004 +export HASH=66b9644bb01eaad09269354df00172c8a924773b +export BRANCH=master +export ORIGIN=git://github.com/gritzko/swift.git +# Temporary directory for sort (run by dohrv) +export TMPDIR=/home/jori/tmp +# Maximum number of peers to be parsed in parallel by dohrv +export MAXHARVEST=200 +# Maximum number of gnuplots to be run in parallel by loggraphs +export MAXGNUPLOTS=50 +# General HTB and Netem parameters. 
Overridden by env.<hostname>.sh
+EMIF=eth0
+EMBW=10Mbit
+EMDELAY=10ms
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/env.lossy.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/env.lossy.sh
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/env.lossy.sh 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/env.lossy.sh 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,4 @@
+EMIF=eth0
+EMBW=1mbit
+EMDELAY=100ms
+EMLOSS=5.0%
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/env.messy.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/env.messy.sh
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/env.messy.sh 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/env.messy.sh 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,5 @@
+EMIF=eth0
+EMBW=1mbit
+EMDELAY=100ms
+EMJTTR=20ms
+
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/hosts.txt tribler-6.2.0/Tribler/SwiftEngine/mfold/hosts.txt
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/hosts.txt 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/hosts.txt 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,27 @@
+130.161.211.200 node300
+130.161.211.201 node301
+130.161.211.202 node302
+130.161.211.203 node303
+130.161.211.204 node304
+130.161.211.205 node305
+130.161.211.206 node306
+130.161.211.208 node308
+130.161.211.209 node309
+130.161.211.210 node310
+130.161.211.212 node312
+130.161.211.213 node313
+130.161.211.214 node314
+130.161.211.215 node315
+130.161.211.217 node317
+130.161.211.219 node319
+130.161.211.220 node320
+130.161.211.222 node322
+130.161.211.223 node323
+130.161.211.224 node324
+130.161.211.225 node325
+130.161.211.226 node326
+130.161.211.227 node327
+130.161.211.228 node328
+130.161.211.229 node329
+130.161.211.230 node330
+130.161.211.231 node331
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/install.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/install.default.sh
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/install.default.sh 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/install.default.sh 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1 @@
+echo TODO
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/loggraphs tribler-6.2.0/Tribler/SwiftEngine/mfold/loggraphs
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/loggraphs 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/loggraphs 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,99 @@
+#!/bin/bash
+
+if [ -z "$SERVERS" ]; then
+    SERVERS="servers.txt"
+fi
+
+. env.default.sh
+
+if [ ! $MAXGNUPLOTS ]; then
+    MAXGNUPLOTS=200
+fi
+
+VERSION=`date`,`git log --summary | head -1`
+
+cd harvest
+
+# HEAD=`head -1 *.log | grep -v '^$' | cut -f1 | sort | head -1`
+# TAIL=`tail -n1 -q *.log | grep -v '^$' | cut -f1 | sort | tail -n1 -q`
+
+i=0
+for fromstr in `grep -v '^#' ../$SERVERS`; do
+    from=${fromstr%:*}
+    for tostr in `grep -v '^#' ../$SERVERS`; do
+        to=${tostr%:*}
+        CWNDLOG="$from-$to-cwnd.log"
+        if [ !
-e $CWNDLOG ]; then + continue + fi + GP="$from-$to.gnuplot" + + echo "set term png large size 2048,768" > $GP + PNG="$from-$to.big.png" + if [ -e $PNG ]; then rm $PNG; fi + echo "set out '$PNG'" >> $GP + + echo "set y2tics" >> $GP + echo "set y2label 'packets'" >> $GP + echo "set ylabel 'microseconds'" >> $GP + echo "set xlabel 'run time millis'" >> $GP + echo "set title '$VERSION'" >> $GP + #echo "set xrange [$HEAD:$TAIL]" >> $GP + CWNDLOG="$from-$to-cwnd.log" + echo -ne "plot '$CWNDLOG' using 1:2 with lines lt rgb '#00aa00' title 'cwnd'"\ + " axis x1y2, "\ + " '$CWNDLOG' using 1:3 with lines lt rgb '#99ff99' title 'data out'"\ + " axis x1y2 "\ + >> $GP + RTTLOG="$from-$to-rtt.log" + if [ -e $RTTLOG ]; then + echo -ne ", '$RTTLOG' using 1:2 with lines lt rgb '#2833ff' title 'rtt' "\ + "axis x1y1, "\ + "'$RTTLOG' using 1:3 with lines lt rgb '#8844ff' title 'dev' "\ + "axis x1y1"\ + >> $GP + fi + OWDLOG="$from-$to-owd.log" + if [ -e $OWDLOG ]; then + echo -ne ", '$OWDLOG' using 1:2 with lines lt rgb '#ff00ee' title 'owd' "\ + "axis x1y1, "\ + "'$OWDLOG' using 1:3 with lines lw 2 lt rgb '#0044cc' title 'min owd'"\ + "axis x1y1, "\ + "'$OWDLOG' using 1:(\$3+25000) with lines lw 2 lt rgb '#0000ff' title 'target'"\ + "axis x1y1 "\ + >> $GP + fi + RDATALOG="$from-$to-rdata.log" + if [ -e $RDATALOG ]; then + echo -ne ", '$RDATALOG' using 1:(1) with points "\ + "lt rgb '#0f0000' title 'r-losses'"\ + >> $GP + fi + TDATALOG="$from-$to-tdata.log" + if [ -e $TDATALOG ]; then + echo -ne ", '$TDATALOG' using 1:(1) with points "\ + "lt rgb '#ff0000' title 't-losses'"\ + >> $GP + fi + echo >> $GP + + echo "set term png size 512,192" >> $GP + PNG="$from-$to.thumb.png" + if [ -e $PNG ]; then rm $PNG; fi + echo "set out '$PNG'" >> $GP + echo "unset title" >> $GP + echo "unset xlabel" >> $GP + echo "unset ylabel" >> $GP + echo "replot" >> $GP + + ( cat $GP | gnuplot ) & + let i++ + if [ $i == $MAXGNUPLOTS ]; then + wait + i=0 + fi + done +done + +wait +cd .. 
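[Editorial aside: the logparse script that follows keys every merged log line
on a swift debug timestamp of the form "0_11_10_075_698" and converts it to
absolute milliseconds before plotting. A minimal C++ sketch of the same
conversion, assuming the five fields are hours_minutes_seconds_millis_micros
as logparse's regex and arithmetic suggest; the helper name is ours:]

#include <cstdint>
#include <cstdio>

// Convert "h_m_s_ms_us" to absolute milliseconds, mirroring logparse's
// $ms = (($1*60+$2)*60+$3)*1000+$4; the microsecond field is ignored.
static int64_t tintstr_to_ms(const char *ts) {
    unsigned h, m, s, ms, us;
    if (sscanf(ts, "%u_%u_%u_%u_%u", &h, &m, &s, &ms, &us) != 5)
        return -1; // malformed timestamp
    return (((int64_t)h * 60 + m) * 60 + s) * 1000 + ms;
}

int main() {
    printf("%lld\n", (long long)tintstr_to_ms("0_11_10_075_698")); // 670075
    return 0;
}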
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/logparse tribler-6.2.0/Tribler/SwiftEngine/mfold/logparse
--- tribler-6.2.0/Tribler/SwiftEngine/mfold/logparse 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/mfold/logparse 2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,185 @@
+#!/usr/bin/perl -w
+
+$SERVER=shift;
+%PORTS = ();
+%HOSTS = ();
+%CHANN = ( "#0" => "none" );
+%EVENTS = ( "#0" => {"+hs"=>0} );
+%SENT = ();
+%RCVD = ();
+%DSENT = ();
+%DRCVD = ();
+%CWNDLOG = ();
+%RTTLOG = ();
+%OWDLOG = ();
+%TDATALOG = ();
+%RDATALOG = ();
+%INDATALOG = ();
+$SENTB = 0;
+$RCVDB = 0;
+
+open(SRVP,$ENV{"HOME"}."/swift/mfold/servers.txt") or die;
+while (<SRVP>) {
+    /(\S+):(\d+)/ or next;
+    $PORTS{$1} = $2;
+}
+close SRVP;
+
+open(SRV,$ENV{"HOME"}."/.ssh/config") or die;
+while (<SRV>) {
+    if (/Host (\S+)/) {
+        $srvname=$1;
+        $port = $PORTS{$srvname};
+    }
+    $HOSTS{$1}{$port}=$srvname if /HostName (\S+)/ && $port;
+}
+close SRV;
+
+while (<>) {
+    /(\d+_\d+_\d+_\d+_\d+) (#\d+) (\S+) (.*)/ or next;
+    my $time = $1;
+    my $channel = $2;
+    my $event = $3;
+    my $rest = $4;
+    my $host = $CHANN{"$channel"};
+    $host = "unknown" if not $host;
+    $time =~ /^(\d+)_(\d+)_(\d+)_(\d+)/;
+    my $ms=$1*60; $ms=($ms+$2)*60; $ms=($ms+$3)*1000; $ms+=$4;
+    if ($event eq "sent") {
+        $rest =~ /(\d+)b ([\d\.]+):(\d+):/;
+        $ip = $2;
+        $port = $3;
+        $host = $HOSTS{$ip}{$port};
+        #$SENT{$h} = 0 if not exists $SENT{$h};
+        $SENT{$host} += $1;
+        $SENTB += $1;
+        $DSENT{$host}++;
+        $CHANN{"$channel"} = $host;
+    } elsif ($event eq "recvd") {
+        $rest =~ /(\d+)/;
+        #$RCVD{$h} = 0 if not exists $RCVD{$h};
+        $DRCVD{$host}++;
+        $RCVD{$host} += $1;
+        $RCVDB += $1;
+    } elsif ($event eq "sendctrl") {
+        if ($rest =~ /cwnd (\d+\.\d+).*data_out (\d+)/) {
+            if (not exists $CWNDLOG{$host}) {
+                open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-cwnd.log") or die;
+                $CWNDLOG{$host} = $handle;
+            }
+            print {$CWNDLOG{$host}} "$ms\t$1\t$2\n";
+        } elsif ($rest =~ /ledbat (\-?\d+)\-(\-?\d+)/) {
+            if (not exists $OWDLOG{$host}) {
+                open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-owd.log") or die;
+                $OWDLOG{$host} = $handle;
+            }
+            print {$OWDLOG{$host}} "$ms\t$1\t$2\n";
+        } elsif ($rest =~ /rtt (\d+) dev (\d+)/) {
+            if (not exists $RTTLOG{$host}) {
+                open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-rtt.log") or die;
+                $RTTLOG{$host} = $handle;
+            }
+            print {$RTTLOG{$host}} "$ms\t$1\t$2\n";
+        }
+    } elsif ($event eq "Tdata") {
+        if (not exists $TDATALOG{$host}) {
+            open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-tdata.log") or die;
+            $TDATALOG{$host} = $handle;
+        }
+        print {$TDATALOG{$host}} "$ms\n";
+    } elsif ($event eq "Rdata") {
+        if (not exists $RDATALOG{$host}) {
+            open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-rdata.log") or die;
+            $RDATALOG{$host} = $handle;
+        }
+        print {$RDATALOG{$host}} "$ms\n";
+    } elsif ($event eq "-data" && $rest =~ /\(0,(\d+)\)/) {
+        my $bin = $1;
+        if (not exists $INDATALOG{$host}) {
+            open(my $handle, '>', "$SERVER-harvest/$SERVER-$host-indata.log") or die;
+            $INDATALOG{$host} = $handle;
+        }
+        print {$INDATALOG{$host}} "$bin\t$ms\n";
+    }
+    $EVENTS{"$host"} = { "+hs"=>0 } if not exists $EVENTS{"$host"};
+
+    print "$time $SERVER $host$channel $event $rest\n";
+
+    # DO STATS
+    $EVENTS{"$host"}{"$event"} = 0 if not exists $EVENTS{"$host"}{"$event"};
+    $EVENTS{"$host"}{"$event"}++;
+
+}
+
+for $host (keys %CWNDLOG) {
+    close($CWNDLOG{$host});
+}
+for $host (keys %OWDLOG) {
+    close ($OWDLOG{$host});
+}
+for $host (keys %RTTLOG) {
+    close ($RTTLOG{$host});
+}
+for $host (keys %TDATALOG) {
+    close ($TDATALOG{$host});
+}
+for $host (keys %RDATALOG) {
+    close ($RDATALOG{$host});
+}
+for $host (keys %INDATALOG) {
+    close ($INDATALOG{$host});
+}
+
+open(LEGEND,"> $SERVER-harvest/$SERVER-legend.txt") or die;
+
+for $channel (keys %CHANN) {
+    my $host = $CHANN{"$channel"};
+    print LEGEND "$channel\t$host\n";
+    open(STATS,"> $SERVER-harvest/$SERVER-$host.stat") or die;
+    my %events = %{ $EVENTS{"$host"} };
+    for $event ( keys %events ) {
+        print STATS "$event\t".($events{"$event"})."\n";
+    }
+    close STATS;
+    open(HTML,"> $SERVER-harvest/$SERVER-$host.html") or die;
+    print HTML "<table class=channel><tr><th></th><th>sent</th><th>rcvd</th></tr>\n";
+    my $rcvd = $RCVD{$host};
+    my $sent = $SENT{$host};
+    $rcvd=0.001 if not $rcvd;
+    $sent=0.001 if not $sent;
+    printf HTML
+        "<tr class=bytes><td>bytes</td>".
+        "<td>%i/%.1f%%</td><td>%i/%.1f%%</td></tr>\n",
+        $sent, $SENTB?$sent/$SENTB*100:0, $rcvd, $RCVDB?$rcvd/$RCVDB*100:0;
+    print HTML
+        "<tr><td>dgrams</td><td>".$DSENT{$host}."</td><td>".$DRCVD{$host}."</td></tr>\n";
+    printf HTML
+        "<tr><td>data</td><td>%i/%.1f%%</td><td>%i/%.1f%%</td></tr>\n",
+        $events{"+data"}, ($events{"+data"}*1029)/$sent*100,
+        $events{"-data"}, ($events{"-data"}*1029)/$rcvd*100;
+    printf HTML
+        "<tr><td>hash</td><td>%i/%.1f%%</td><td>%i/%.1f%%</td></tr>\n",
+        $events{"+hash"}, ($events{"+hash"}*25)/$sent*100,
+        $events{"-hash"}, ($events{"-hash"}*25)/$rcvd*100;
+    printf HTML
+        "<tr><td>ack</td><td>%i/%.1f%%</td><td>%i/%.1f%%</td></tr>\n",
+        $events{"+ack"}, ($events{"+ack"}*5)/$sent*100,
+        $events{"-ack"}, ($events{"-ack"}*5)/$rcvd*100;
+    printf HTML
+        "<tr><td>hint</td><td>%i/%.1f%%</td><td>%i/%.1f%%</td></tr>\n",
+        $events{"+hint"}, ($events{"+hint"}*5)/$sent*100,
+        $events{"-hint"}, ($events{"-hint"}*5)/$rcvd*100;
+    printf HTML
+        "<tr><td>hs</td><td>%i</td><td>%i</td></tr>\n",
+        $events{"+hs"}, $events{"-hs"};
+    my $losses = $events{"+data"}>0 ?
+        ($events{"Rdata"}+$events{"Tdata"})/$events{"+data"}*100 : 0;
+    printf HTML
+        "<tr><td>losses</td><td>R:%i+T:%i=%i/%.1f%%</td></tr>\n",
+        $events{"Rdata"}, $events{"Tdata"},
+        $events{"Rdata"}+$events{"Tdata"}, $losses;
+
+    print HTML "</table>\n";
\n"; + close HTML; +} +close LEGEND; diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/logreport tribler-6.2.0/Tribler/SwiftEngine/mfold/logreport --- tribler-6.2.0/Tribler/SwiftEngine/mfold/logreport 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/logreport 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,35 @@ +#!/bin/bash + +if [ ! $SERVERS ]; then + export SERVERS="servers.txt" +fi + +cd harvest + +echo '' +echo '' +echo 'Manifold: swarm tomography' `date` `git log --summary | head -1` '' +echo '' +echo '' +done +echo '' +for fromstr in `grep -v '#' ../$SERVERS`; do + from=${fromstr%:*} + echo '' + for tostr in `grep -v '#' ../$SERVERS`; do + to=${tostr%:*} + echo '' + done + echo '' +done +echo '' diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/net.aussie.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/net.aussie.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/net.aussie.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/net.aussie.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,3 @@ +echo eth0 > .netem-on + +sudo tc qdisc add dev eth0 root netem delay 400ms diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/net.lossy.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/net.lossy.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/net.lossy.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/net.lossy.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,3 @@ +echo eth0 > .netem-on + +sudo tc qdisc add dev eth0 root netem delay 100ms loss 5.0% diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/net.messy.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/net.messy.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/net.messy.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/net.messy.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,3 @@ +echo eth0 > .netem-on + +sudo tc qdisc add dev eth0 root netem delay 100ms 20ms 25% diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/netclean.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/netclean.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/netclean.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/netclean.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,28 @@ +#!/bin/bash +# Cleans configuration made with netem script. + +if [ ! $EMIF ] ; then + exit +fi + +if [ ! $SWFTPORT ]; then + exit +fi + +TC="sudo tc " +CLASSID=$(($SWFTPORT - 9900)) + +echo cleaning filter and class id 1:$CLASSID from ifb0 +$TC filter del dev ifb0 protocol ip prio 1 handle 800::$CLASSID u32 \ + flowid 1:$CLASSID +$TC class del dev ifb0 classid 1:$CLASSID + +echo cleaning filter and class id 1:$CLASSID from lo +$TC filter del dev lo protocol ip prio 1 handle 800::$CLASSID u32 \ + flowid 1:$CLASSID +$TC class del dev lo classid 1:$CLASSID + +echo cleaning filter and class id 1:$CLASSID from $EMIF +$TC filter del dev $EMIF protocol ip prio 1 handle 800::$CLASSID u32 \ + flowid 1:$CLASSID +$TC class del dev $EMIF classid 1:$CLASSID diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/netcleanroot.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/netcleanroot.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/netcleanroot.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/netcleanroot.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,16 @@ +#!/bin/bash +# Cleans configurations made with netroot and netem scripts. + +if [ ! 
$EMIF ] ; then + exit +fi + +TC="sudo tc " + +echo cleanup +$TC qdisc del dev $EMIF root +$TC qdisc del dev $EMIF ingress +$TC qdisc del dev ifb0 root +$TC qdisc del dev lo root + +exit 0 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/netem.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/netem.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/netem.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/netem.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,103 @@ +#!/bin/bash +# Sets HTB/Netem parameters for the server interfaces. netroot script +# must be run before this. + +if [ ! $EMIF ] ; then + exit +fi + +if [ ! $SWFTPORT ]; then + echo No swift port defined! + exit 1 +fi + +if [ ! $EMLOSS ]; then + EMLOSS=0% +fi + +if [ ! $EMDELAY ]; then + EMDELAY=10ms +fi + +if [ ! $EMBW ]; then + EMBW=10mbit +fi + +if [ ! $EMJTTR ]; then + EMJTTR=0ms +fi + +# ingress params +if [ ! $EMLOSS_IN ]; then + EMLOSS_IN=$EMLOSS +fi + +if [ ! $EMDELAY_IN ]; then + EMDELAY_IN=$EMDELAY +fi + +# zero delay in lo may affect htb performance accuracy (?) +if [ $EMDELAY_IN == 0ms ]; then + EMDELAY_LO_IN=0.1ms +else + EMDELAY_LO_IN=$EMDELAY_IN +fi + +if [ ! $EMBW_IN ]; then + EMBW_IN=$EMBW +fi + +if [ ! $EMJTTR_IN ]; then + EMJTTR_IN=$EMJTTR +fi + +# egress params +if [ ! $EMLOSS_OUT ]; then + EMLOSS_OUT=$EMLOSS +fi + +if [ ! $EMDELAY_OUT ]; then + EMDELAY_OUT=$EMDELAY +fi + +if [ ! $EMBW_OUT ]; then + EMBW_OUT=$EMBW +fi + +if [ ! $EMJTTR_OUT ]; then + EMJTTR_OUT=$EMJTTR +fi + +TC="sudo tc " + +CLASSID=$(($SWFTPORT - 9900)) +HANDLEID=1$CLASSID + +# ingress config +echo adding htb class 1:$CLASSID with rate $EMBW_IN to ifb0 +$TC class add dev ifb0 parent 1: classid 1:$CLASSID htb rate $EMBW_IN || exit 2 +echo adding filter for destination port $SWFTPORT for to ifb0 +$TC filter add dev ifb0 protocol ip prio 1 handle ::$CLASSID u32 \ + match ip dport $SWFTPORT 0xffff flowid 1:$CLASSID || exit 3 +echo adding downlink netem handle $HANDLEID for $EMDELAY_IN, $EMLOSS_IN to ifb0 +$TC qdisc add dev ifb0 parent 1:$CLASSID handle $HANDLEID \ + netem delay $EMDELAY_IN $EMJTTR_IN 25% loss $EMLOSS_IN || exit 4 + +echo adding htb class 1:$CLASSID with rate $EMBW_IN to lo +$TC class add dev lo parent 1: classid 1:$CLASSID htb rate $EMBW_IN || exit 5 +echo adding filter for destination port $SWFTPORT for to lo +$TC filter add dev lo protocol ip prio 1 handle ::$CLASSID u32 \ + match ip dport $SWFTPORT 0xffff flowid 1:$CLASSID || exit 6 +echo adding downlink netem handle $HANDLEID for $EMDELAY_LO_IN, $EMLOSS_IN to lo +$TC qdisc add dev lo parent 1:$CLASSID handle $HANDLEID \ + netem delay $EMDELAY_LO_IN $EMJTTR_IN 25% loss $EMLOSS_IN || exit 7 + +#egress config +echo adding htb class 1:$CLASSID with rate $EMBW_OUT to $EMIF +$TC class add dev $EMIF parent 1: classid 1:$CLASSID htb rate $EMBW_OUT || exit 8 +echo adding filter for source port $SWFTPORT for to $EMIF +$TC filter add dev $EMIF protocol ip prio 1 handle ::$CLASSID u32 \ + match ip sport $SWFTPORT 0xffff flowid 1:$CLASSID || exit 9 +echo adding uplink netem handle $HANDLEID for $EMDELAY_OUT, $EMLOSS_OUT to $EMIF +$TC qdisc add dev $EMIF parent 1:$CLASSID handle $HANDLEID \ + netem delay $EMDELAY_OUT $EMJTTR_OUT 25% loss $EMLOSS_OUT || exit 10 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/netroot.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/netroot.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/netroot.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ 
tribler-6.2.0/Tribler/SwiftEngine/mfold/netroot.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,36 @@ +#!/bin/bash + +if [ ! $EMIF ] ; then + exit +fi + +TC="sudo tc " + +# echo cleanup +# $TC qdisc del dev $EMIF root +# $TC qdisc del dev $EMIF ingress +# $TC qdisc del dev ifb0 root + +echo ifb0 up +sudo modprobe ifb +sudo ip link set dev ifb0 up + +echo set lo mtu to 1500 +sudo ifconfig lo mtu 1500 || exit 1 + +# Should return OK, when using multiple peers in same host +echo adding ingress +$TC qdisc add dev $EMIF ingress || exit 0 + +echo redirecting to ifb +$TC filter add dev $EMIF parent ffff: protocol ip prio 1 u32 \ + match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0 || exit 3 + +echo adding ifb0 root htb +$TC qdisc add dev ifb0 handle 1: root htb || exit 4 + +echo adding $EMIF root htb +$TC qdisc add dev $EMIF handle 1: root htb || exit 5 + +echo adding lo root htb +$TC qdisc add dev lo handle 1: root htb || exit 6 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/ps.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/ps.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/ps.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/ps.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,4 @@ +if ps -ef | grep l[e]echer > /dev/null; then + echo `hostname` has a running leecher + return 1 +fi diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/report.css tribler-6.2.0/Tribler/SwiftEngine/mfold/report.css --- tribler-6.2.0/Tribler/SwiftEngine/mfold/report.css 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/report.css 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,35 @@ +table#main table { + background: #ffe; +} + +td.host { + text-align: center; + font: 24pt "Courier"; +} + +table.channel { +/* border: 1 dotted #aa5; */ + font-size: smaller; +} + +td { + border-top: 1 dotted #aa5; + border-spacing: 0; +} + +pp { + color: #a00; +} + +tr.bytes { + background: #fed; +} + +img.thumb { + width: 160pt; + border-style: none; +} + +table#main tr td { + vertical-align: top; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/run.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/run.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/run.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/run.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,13 @@ +#!/bin/bash +# This script runs a leecher at some server; +# env variables are set in env.default.sh + +export LD_LIBRARY_PATH=$HOME/lib + +ulimit -c 1024000 +cd swift || exit 1 +rm -f core +rm -f $HOST-chunk +sleep $(( $RANDOM % 5 )) +bin/swift-o2 -w -h $HASH -f $HOST-chunk -t $SEEDER:$SEEDERPORT \ + -l 0.0.0.0:$SWFTPORT -p -D 2>$HOST-lerr | gzip > $HOST-lout.gz || exit 2 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/run.node300.das2.ewi.tudelft.nl.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/run.node300.das2.ewi.tudelft.nl.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/run.node300.das2.ewi.tudelft.nl.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/run.node300.das2.ewi.tudelft.nl.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,7 @@ +#!/bin/bash + +ulimit -c 1024000 +cd swift || exit 2 +#wget -c http://video.ted.com/talks/podcast/ScottKim_2008P.mp4 || exit 1 + +./exec/seeder ScottKim_2008P.mp4 0.0.0.0:20000 >lout 2> lerr diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/run.seeder.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/run.seeder.sh --- 
tribler-6.2.0/Tribler/SwiftEngine/mfold/run.seeder.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/run.seeder.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,13 @@ +#!/bin/bash + +export LD_LIBRARY_PATH=$HOME/lib + +ulimit -c 1024000 +cd swift || exit 2 +if [ ! -e ScottKim_2008P.mp4 ]; then + wget -c http://video.ted.com/talks/podcast/ScottKim_2008P.mp4 || exit 1 +fi + +bin/swift-o2 -w -f ScottKim_2008P.mp4 -p -D \ + -l 0.0.0.0:$SEEDERPORT 2>$HOST-lerr | gzip > $HOST-lout.gz || exit 2 +exit diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/servers.txt tribler-6.2.0/Tribler/SwiftEngine/mfold/servers.txt --- tribler-6.2.0/Tribler/SwiftEngine/mfold/servers.txt 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/servers.txt 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,43 @@ +node300 +aussie +lossy +messy +node304 +node305 +node306 +node308 +node309 +node310 +node312 +node313 +node314 +node315 +node317 +node319 +node320 +node322 +node323 +node324 +node325 +node326 +node327 +node328 +node329 +node330 +node331 +vtt1 +vtt2 +vtt3 +vtt4 +vtt5 +vtt6 +vtt7 +vtt8 +vtt9 +vttA +vttB +vttC +vttD +canada +media +itanium diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/ssh-config tribler-6.2.0/Tribler/SwiftEngine/mfold/ssh-config --- tribler-6.2.0/Tribler/SwiftEngine/mfold/ssh-config 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/ssh-config 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,129 @@ +Host node300 +HostName 130.161.211.200 + +Host aussie +HostName 130.161.211.201 + +Host lossy +HostName 130.161.211.202 + +Host messy +HostName 130.161.211.203 + +Host node304 +HostName 130.161.211.204 + +Host node305 +HostName 130.161.211.205 + +Host node306 +HostName 130.161.211.206 + +Host node308 +HostName 130.161.211.208 + +Host node309 +HostName 130.161.211.209 + +Host node310 +HostName 130.161.211.210 + +Host node312 +HostName 130.161.211.212 + +Host node313 +HostName 130.161.211.213 + +Host node314 +HostName 130.161.211.214 + +Host node315 +HostName 130.161.211.215 + +Host node317 +HostName 130.161.211.217 + +Host node319 +HostName 130.161.211.219 + +Host node320 +HostName 130.161.211.220 + +Host node322 +HostName 130.161.211.222 + +Host node323 +HostName 130.161.211.223 + +Host node324 +HostName 130.161.211.224 + +Host node325 +HostName 130.161.211.225 + +Host node326 +HostName 130.161.211.226 + +Host node327 +HostName 130.161.211.227 + +Host node328 +HostName 130.161.211.228 + +Host node329 +HostName 130.161.211.229 + +Host node330 +HostName 130.161.211.230 + +Host node331 +HostName 130.161.211.231 + +Host vtt1 +HostName 130.188.225.81 + +Host vtt2 +HostName 130.188.225.82 + +Host vtt3 +HostName 130.188.225.83 + +Host vtt4 +HostName 130.188.225.84 + +Host vtt5 +HostName 130.188.225.85 + +Host vtt6 +HostName 130.188.225.86 + +Host vtt7 +HostName 130.188.225.87 + +Host vtt8 +HostName 130.188.225.88 + +Host vtt9 +HostName 130.188.225.89 + +Host vttA +HostName 130.188.225.90 + +Host vttB +HostName 130.188.225.91 + +Host vttC +HostName 130.188.225.97 + +Host vttD +HostName 130.188.225.98 + +Host canada +HostName 72.55.184.147 + +Host media +HostName 83.96.143.114 + +Host itanium +HostName 194.226.235.181 +User RUNC\gritzko diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/status.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/status.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/status.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/status.default.sh 
2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1 @@ +echo -e $HOST"\t"`tail -1 swift/$HOST-lerr` diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/tcinfo.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/tcinfo.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/tcinfo.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/tcinfo.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,21 @@ +#!/bin/bash +# A convenience script for use with net* scripts. + +echo --- qdisc info of dev $EMIF --- +sudo tc qdisc show dev $EMIF +echo --- class info of dev $EMIF --- +sudo tc class show dev $EMIF +echo --- filter info of dev $EMIF --- +sudo tc filter show dev $EMIF +echo --- qdisc info of dev ifb0 --- +sudo tc qdisc show dev ifb0 +echo --- class info of dev ifb0 --- +sudo tc class show dev ifb0 +echo --- filter info of dev ifb0 --- +sudo tc filter show dev ifb0 +echo --- qdisc info of dev lo --- +sudo tc qdisc show dev lo +echo --- class info of dev lo --- +sudo tc class show dev lo +echo --- filter info of dev lo --- +sudo tc filter show dev lo diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/test.default.sh tribler-6.2.0/Tribler/SwiftEngine/mfold/test.default.sh --- tribler-6.2.0/Tribler/SwiftEngine/mfold/test.default.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/test.default.sh 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,3 @@ +#!/bin/bash + +hostname diff -Nru tribler-6.2.0/Tribler/SwiftEngine/mfold/vtt.txt tribler-6.2.0/Tribler/SwiftEngine/mfold/vtt.txt --- tribler-6.2.0/Tribler/SwiftEngine/mfold/vtt.txt 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/mfold/vtt.txt 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,15 @@ +130.188.225.81 vtt1 +130.188.225.82 vtt2 +130.188.225.83 vtt3 +130.188.225.84 vtt4 +130.188.225.85 vtt5 +130.188.225.86 vtt6 +130.188.225.87 vtt7 +130.188.225.88 vtt8 +130.188.225.89 vtt9 +130.188.225.90 vtt10 +130.188.225.91 vtt11 +130.188.225.97 vtt12 +130.188.225.98 vtt13 +193.166.160.195 willab1 +193.166.160.196 willab2 diff -Nru tribler-6.2.0/Tribler/SwiftEngine/nat_test.cpp tribler-6.2.0/Tribler/SwiftEngine/nat_test.cpp --- tribler-6.2.0/Tribler/SwiftEngine/nat_test.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/nat_test.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,159 @@ +/* + * nat_test.cpp + * NAT type testing. + * + * Created by Gertjan Halkes. + * Copyright 2010 Delft University of Technology. All rights reserved. + * + */ + +#include "swift.h" +#ifdef _WIN32 +#include <iphlpapi.h> +#else +#include <sys/types.h> +#include <ifaddrs.h> +#include <arpa/inet.h> +#include <string.h> +#include <errno.h> +#endif + +#define REQUEST_MAGIC 0x5a9e5fa1 +#define REPLY_MAGIC 0xa655c5d5 +#define REPLY_SEC_MAGIC 0x85e4a5ca +#define MAX_TRIES 3 +namespace swift { + +static void on_may_receive(SOCKET sock); +static void on_may_send(SOCKET sock); +static tint test_start; +static int tries; +static int packets_since_last_try; + +static sckrwecb_t callbacks(0, on_may_receive, on_may_send, NULL); +/* Note that we lookup the addresses when we actually send, because Windows requires that + the winsock library is first initialized. If we use Address type variables here, the + lookup would be tried before that initialization, which fails... */ +//FIXME: Change addresses to actual addresses used in test (at least 2 should be provided!)
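The client above implements a small STUN-like exchange: on_may_send() fires a 4-byte big-endian REQUEST_MAGIC at every entry of the servers[] list declared just below, and on_may_receive() accepts either a 10-byte REPLY_MAGIC packet (magic, observed IPv4 address, observed port) or a bare 4-byte REPLY_SEC_MAGIC sent from a server's secondary address. Comparing the address and port each server reports is what exposes the NAT's mapping behaviour. A minimal sketch of one probe round over plain BSD sockets (probe() is a hypothetical helper for illustration; the real code goes through swift's Datagram class):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    #define REQUEST_MAGIC   0x5a9e5fa1
    #define REPLY_MAGIC     0xa655c5d5
    #define REPLY_SEC_MAGIC 0x85e4a5ca

    /* Send one request to srv and decode one reply; returns 0 on success. */
    static int probe(int sock, const struct sockaddr_in *srv)
    {
        uint32_t req = htonl(REQUEST_MAGIC);
        unsigned char buf[16];
        uint32_t magic;
        ssize_t n;

        if (sendto(sock, &req, 4, 0, (const struct sockaddr *)srv, sizeof(*srv)) != 4)
            return -1;
        if ((n = recv(sock, buf, sizeof(buf), 0)) < 4)
            return -1;
        memcpy(&magic, buf, 4);
        switch (ntohl(magic)) {
        case REPLY_MAGIC: {                  /* 10 bytes: magic + mapped addr + port */
            struct in_addr a;
            uint16_t port;
            if (n != 10)
                return -1;
            memcpy(&a.s_addr, buf + 4, 4);   /* mapped address, network order */
            memcpy(&port, buf + 8, 2);       /* mapped port, network order */
            printf("server saw us as %s:%u\n", inet_ntoa(a), ntohs(port));
            return 0;
        }
        case REPLY_SEC_MAGIC:                /* bare reply from the secondary address */
            return n == 4 ? 0 : -1;
        default:
            return -1;
        }
    }

Getting two different mapped ports back from two servers indicates a symmetric NAT; identical mappings, plus a REPLY_SEC_MAGIC making it through from an address the client never contacted, suggest endpoint-independent (full-cone) behaviour.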
+static const char *servers[] = { "dutigp.st.ewi.tudelft.nl:18375" , + "127.0.0.3:18375" }; + +static void on_may_receive(SOCKET sock) { + Datagram data(sock); + + data.Recv(); + + uint32_t magic = data.Pull32(); + if ((magic != REPLY_MAGIC && magic != REPLY_SEC_MAGIC) || + (magic == REPLY_MAGIC && data.size() != 6) || (magic == REPLY_SEC_MAGIC && data.size() != 0)) + { + dprintf("%s #0 NATTEST weird packet %s \n", tintstr(), data.address().str()); + return; + } + + if (magic == REPLY_MAGIC) { + uint32_t ip = data.Pull32(); + uint16_t port = data.Pull16(); + Address reported(ip, port); + dprintf("%s #0 NATTEST incoming %s %s\n", tintstr(), data.address().str(), reported.str()); + } else { + dprintf("%s #0 NATTEST incoming secondary %s\n", tintstr(), data.address().str()); + } + packets_since_last_try++; +} + +static void on_may_send(SOCKET sock) { + callbacks.may_write = NULL; + Datagram::Listen3rdPartySocket(callbacks); + + for (size_t i = 0; i < (sizeof(servers)/sizeof(servers[0])); i++) { + Datagram request(sock, Address(servers[i])); + + request.Push32(REQUEST_MAGIC); + request.Send(); + } + test_start = NOW; + + struct sockaddr_in name; + socklen_t namelen = sizeof(name); + if (getsockname(sock, (struct sockaddr *) &name, &namelen) < 0) { + dprintf("%s #0 NATTEST could not get local address\n", tintstr()); + } else { + Address local(ntohl(name.sin_addr.s_addr), ntohs(name.sin_port)); + dprintf("%s #0 NATTEST local %s\n", tintstr(), local.str()); + } +} + +static void printAddresses(void) { +#ifdef _WIN32 + IP_ADAPTER_INFO *adapterInfo = NULL; + IP_ADAPTER_INFO *adapter = NULL; + DWORD retval = 0; + UINT i; + ULONG size = 0; + + if ((retval = GetAdaptersInfo(adapterInfo, &size)) != ERROR_BUFFER_OVERFLOW) { + dprintf("ERROR: %d\n", (int) retval); + return; + } + + adapterInfo = (IP_ADAPTER_INFO *) malloc(size); + if (adapterInfo == NULL) { + dprintf("ERROR: out of memory\n"); + return; + } + + if ((retval = GetAdaptersInfo(adapterInfo, &size)) == NO_ERROR) { + adapter = adapterInfo; + while (adapter) { + IP_ADDR_STRING *address; + for (address = &adapter->IpAddressList; address != NULL; address = address->Next) { + if (address->IpAddress.String[0] != 0) + dprintf("ADDRESS: %s\n", address->IpAddress.String); + } + adapter = adapter->Next; + } + } else { + dprintf("ERROR: %d\n", (int) retval); + } + free(adapterInfo); +#else + struct ifaddrs *addrs, *ptr; + if (getifaddrs(&addrs) < 0) { + dprintf("ERROR: %s\n", strerror(errno)); + return; + } + + for (ptr = addrs; ptr != NULL; ptr = ptr->ifa_next) { + if (ptr->ifa_addr->sa_family == AF_INET) { + dprintf("ADDRESS: %s\n", inet_ntoa(((struct sockaddr_in *) ptr->ifa_addr)->sin_addr)); + } + } + freeifaddrs(addrs); +#endif +} + + +void nat_test_update(void) { + static bool initialized; + if (!initialized) { + initialized = true; + printAddresses(); + } + + if (tries < MAX_TRIES && NOW - test_start > 30 * TINT_SEC) { + if (tries == 0) { + Address any; + SOCKET sock = Datagram::Bind(any, callbacks); + callbacks.sock = sock; + } else if (packets_since_last_try == 0) { + // Keep on trying if we didn't receive _any_ packet in response to our last request + tries--; + } + tries++; + callbacks.may_write = on_may_send; + Datagram::Listen3rdPartySocket(callbacks); + } +} + +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/nat_test_server.c tribler-6.2.0/Tribler/SwiftEngine/nat_test_server.c --- tribler-6.2.0/Tribler/SwiftEngine/nat_test_server.c 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/nat_test_server.c 2013-08-07 
12:50:11.000000000 +0000 @@ -0,0 +1,156 @@ +/* + * nat_test_server.c + * NAT type testing (server). + * + * Created by Gertjan Halkes. + * Copyright 2010 Delft University of Technology. All rights reserved. + * + */ + +//FIXME: add timestamp to log output + +#include <stdio.h> +#include <stdlib.h> +#include <stdarg.h> +#include <stdint.h> +#include <errno.h> +#include <unistd.h> +#include <sys/time.h> +#include <sys/socket.h> +#include <netinet/in.h> +#include <arpa/inet.h> + +#define REQUEST_MAGIC 0x5a9e5fa1 +#define REPLY_MAGIC 0xa655c5d5 +#define REPLY_SEC_MAGIC 0x85e4a5ca + +static int has_secondary; + +/** Alert the user of a fatal error and quit. + @param fmt The format string for the message. See fprintf(3) for details. + @param ... The arguments for printing. +*/ +void fatal(const char *fmt, ...) { + va_list args; + + va_start(args, fmt); + vfprintf(stderr, fmt, args); + va_end(args); + exit(EXIT_FAILURE); +} + +const char *getTimestamp(void) { + static char timeBuffer[1024]; + struct timeval now; + double nowF; + + gettimeofday(&now, NULL); + nowF = (double) now.tv_sec + (double) now.tv_usec / 1000000; + snprintf(timeBuffer, 1024, "%.4f", nowF); + return timeBuffer; +} + +int main(int argc, char *argv[]) { + struct sockaddr_in local, remote, secondary; + uint32_t packet[3]; + int c, sock, sock2, sock3, sock4; + ssize_t result; + + local.sin_addr.s_addr = INADDR_ANY; + + while ((c = getopt(argc, argv, "s:")) > 0) { + switch (c) { + case 's': + has_secondary = 1; + secondary.sin_addr.s_addr = inet_addr(optarg); + break; + default: + fatal("Unknown option %c\n", c); + break; + } + } + + if (argc - optind != 3) + fatal("Usage: nat_test_server [-s <secondary address>] <address> <primary port> <secondary port>\n"); + + local.sin_family = AF_INET; + local.sin_addr.s_addr = inet_addr(argv[optind++]); + local.sin_port = htons(atoi(argv[optind++])); + + if ((sock = socket(PF_INET, SOCK_DGRAM, 0)) < 0) + fatal("Error opening primary socket: %m\n"); + if (bind(sock, (struct sockaddr *) &local, sizeof(local)) < 0) + fatal("Error binding primary socket: %m\n"); + + if (has_secondary) { + secondary.sin_family = AF_INET; + secondary.sin_port = local.sin_port; + + if ((sock3 = socket(PF_INET, SOCK_DGRAM, 0)) < 0) + fatal("Error opening primary socket on secondary address: %m\n"); + if (bind(sock3, (struct sockaddr *) &secondary, sizeof(secondary)) < 0) + fatal("Error binding primary socket on secondary address: %m\n"); + } + + local.sin_port = htons(atoi(argv[optind++])); + + if ((sock2 = socket(PF_INET, SOCK_DGRAM, 0)) < 0) + fatal("Error opening secondary socket: %m\n"); + if (bind(sock2, (struct sockaddr *) &local, sizeof(local)) < 0) + fatal("Error binding secondary socket: %m\n"); + + if (has_secondary) { + secondary.sin_port = local.sin_port; + + if ((sock4 = socket(PF_INET, SOCK_DGRAM, 0)) < 0) + fatal("Error opening secondary socket on secondary address: %m\n"); + if (bind(sock4, (struct sockaddr *) &secondary, sizeof(secondary)) < 0) + fatal("Error binding secondary socket on secondary address: %m\n"); + } + + while (1) { + socklen_t socklen = sizeof(remote); + if ((result = recvfrom(sock, &packet, sizeof(packet), 0, (struct sockaddr *) &remote, &socklen)) < 0) { + if (errno == EAGAIN) + continue; + fatal("%s: Error receiving packet: %m\n", getTimestamp()); + } else if (result != 4 || ntohl(packet[0]) != REQUEST_MAGIC) { + fprintf(stderr, "Strange packet received from %s\n", inet_ntoa(remote.sin_addr)); + } else { + fprintf(stderr, "%s: Received packet from %s:%d\n", getTimestamp(), inet_ntoa(remote.sin_addr), ntohs(remote.sin_port)); + packet[0] = htonl(REPLY_MAGIC); + packet[1] = remote.sin_addr.s_addr; + *(uint16_t *)(packet + 2) = remote.sin_port; + retry: + if 
(sendto(sock, packet, 10, 0, (const struct sockaddr *) &remote, socklen) < 10) { + if (errno == EAGAIN) + goto retry; + fprintf(stderr, "%s: Error sending packet on primary socket: %m\n", getTimestamp()); + } + retry2: + if (sendto(sock2, packet, 10, 0, (const struct sockaddr *) &remote, socklen) < 10) { + if (errno == EAGAIN) + goto retry2; + fprintf(stderr, "%s: Error sending packet on secondary socket: %m\n", getTimestamp()); + } + + if (has_secondary) { + packet[0] = htonl(REPLY_SEC_MAGIC); + retry3: + if (sendto(sock3, packet, 4, 0, (const struct sockaddr *) &remote, socklen) < 4) { + if (errno == EAGAIN) + goto retry3; + fprintf(stderr, "%s: Error sending packet on primary socket on secondary address: %m\n", getTimestamp()); + } + retry4: + if (sendto(sock4, packet, 4, 0, (const struct sockaddr *) &remote, socklen) < 4) { + if (errno == EAGAIN) + goto retry4; + fprintf(stderr, "%s: Error sending packet on secondary socket on secondary address: %m\n", getTimestamp()); + } + } + + } + } + return 0; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/operational.h tribler-6.2.0/Tribler/SwiftEngine/operational.h --- tribler-6.2.0/Tribler/SwiftEngine/operational.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/operational.h 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,26 @@ +/* + * operational.h + * + * Created on: Jun 22, 2012 + * Author: arno + */ + +#ifndef OPERATIONAL_H_ +#define OPERATIONAL_H_ + +namespace swift +{ + +class Operational +{ + public: + Operational(bool working=true) { working_ = working; } + bool IsOperational() { return working_; } + void SetBroken() { working_ = false; } + protected: + bool working_; +}; + + +}; +#endif /* OPERATIONAL_H_ */ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/send_control.cpp tribler-6.2.0/Tribler/SwiftEngine/send_control.cpp --- tribler-6.2.0/Tribler/SwiftEngine/send_control.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/send_control.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,231 @@ +/* + * send_control.cpp + * congestion control logic for the swift protocol + * + * Created by Victor Grishchenko on 12/10/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ * + */ + +#include "swift.h" +#include + +using namespace swift; +using namespace std; + +tint Channel::MIN_DEV = 50*TINT_MSEC; +tint Channel::MAX_SEND_INTERVAL = TINT_SEC*58; +tint Channel::LEDBAT_TARGET = TINT_MSEC*25; +float Channel::LEDBAT_GAIN = 1.0/LEDBAT_TARGET; +tint Channel::LEDBAT_DELAY_BIN = TINT_SEC*30; +tint Channel::MAX_POSSIBLE_RTT = TINT_SEC*10; +const char* Channel::SEND_CONTROL_MODES[] = {"keepalive", "pingpong", + "slowstart", "standard_aimd", "ledbat", "closing"}; + + +tint Channel::NextSendTime () { + TimeoutDataOut(); // precaution to know free cwnd + switch (send_control_) { + case KEEP_ALIVE_CONTROL: return KeepAliveNextSendTime(); + case PING_PONG_CONTROL: return PingPongNextSendTime(); + case SLOW_START_CONTROL: return SlowStartNextSendTime(); + case AIMD_CONTROL: return AimdNextSendTime(); + case LEDBAT_CONTROL: return LedbatNextSendTime(); + case CLOSE_CONTROL: return TINT_NEVER; + default: fprintf(stderr,"send_control.cpp: unknown control %d\n", send_control_); return TINT_NEVER; + } +} + +tint Channel::SwitchSendControl (send_control_t control_mode) { + dprintf("%s #%u sendctrl switch %s->%s\n",tintstr(),id(), + SEND_CONTROL_MODES[send_control_],SEND_CONTROL_MODES[control_mode]); + switch (control_mode) { + case KEEP_ALIVE_CONTROL: + send_interval_ = rtt_avg_; //max(TINT_SEC/10,rtt_avg_); + dev_avg_ = max(TINT_SEC,rtt_avg_); + data_out_cap_ = bin_t::ALL; + cwnd_ = 1; + break; + case PING_PONG_CONTROL: + dev_avg_ = max(TINT_SEC,rtt_avg_); + data_out_cap_ = bin_t::ALL; + cwnd_ = 1; + break; + case SLOW_START_CONTROL: + cwnd_ = 1; + break; + case AIMD_CONTROL: + break; + case LEDBAT_CONTROL: + break; + case CLOSE_CONTROL: + break; + default: + assert(false); + } + send_control_ = control_mode; + return NextSendTime(); +} + +tint Channel::KeepAliveNextSendTime () { + if (sent_since_recv_>=3 && last_recv_time_ 1) + { + send_interval_ <<= 1; + } + if (send_interval_>MAX_SEND_INTERVAL) + send_interval_ = MAX_SEND_INTERVAL; + return last_send_time_ + send_interval_; +} + +tint Channel::PingPongNextSendTime () { // FIXME INFINITE LOOP + if (dgrams_sent_>=10) + return SwitchSendControl(KEEP_ALIVE_CONTROL); + if (ack_rcvd_recent_) + return SwitchSendControl(SLOW_START_CONTROL); + if (data_in_.time!=TINT_NEVER) + return NOW; + if (last_recv_time_>last_send_time_) + return NOW; + if (!last_send_time_) + return NOW; + return last_send_time_ + ack_timeout(); // timeout +} + +tint Channel::CwndRateNextSendTime () { + if (data_in_.time!=TINT_NEVER) + return NOW; // TODO: delayed ACKs + //if (last_recv_time_max(rtt_avg_,TINT_SEC)*4) + return SwitchSendControl(KEEP_ALIVE_CONTROL); + if (data_out_.size()1) + cwnd_ += ack_rcvd_recent_/cwnd_; + else + cwnd_ *= 2; + } + ack_rcvd_recent_=0; + return CwndRateNextSendTime(); +} + +tint Channel::LedbatNextSendTime () { + float oldcwnd = cwnd_; + + tint owd_cur(TINT_NEVER), owd_min(TINT_NEVER); + for(int i=0; i<4; i++) { + if (owd_min>owd_min_bins_[i]) + owd_min = owd_min_bins_[i]; + if (owd_cur>owd_current_[i]) + owd_cur = owd_current_[i]; + } + if (ack_not_rcvd_recent_) + BackOffOnLosses(0.8); + ack_rcvd_recent_ = 0; + tint queueing_delay = owd_cur - owd_min; + tint off_target = LEDBAT_TARGET - queueing_delay; + cwnd_ += LEDBAT_GAIN * off_target / cwnd_; + if (cwnd_<1) + cwnd_ = 1; + if (owd_cur==TINT_NEVER || owd_min==TINT_NEVER) + cwnd_ = 1; + + //Arno, 2012-02-02: Somehow LEDBAT gets stuck at cwnd_ == 1 sometimes + // This hack appears to work to get it back on the right track quickly. 
+ if (oldcwnd == 1 && cwnd_ == 1) + cwnd_count1_++; + else + cwnd_count1_ = 0; + if (cwnd_count1_ > 10) + { + dprintf("%s #%u sendctrl ledbat stuck, reset\n",tintstr(),id() ); + cwnd_count1_ = 0; + for(int i=0; i<4; i++) { + owd_min_bins_[i] = TINT_NEVER; + owd_current_[i] = TINT_NEVER; + } + } + + dprintf("%s #%u sendctrl ledbat %lli-%lli => %3.2f\n", + tintstr(),id_,owd_cur,owd_min,cwnd_); + return CwndRateNextSendTime(); +} + + + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/sendrecv.cpp tribler-6.2.0/Tribler/SwiftEngine/sendrecv.cpp --- tribler-6.2.0/Tribler/SwiftEngine/sendrecv.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/sendrecv.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,1207 @@ +/* + * sendrecv.cpp + * most of the swift's state machine + * + * Created by Victor Grishchenko on 3/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +#include "bin_utils.h" +#include "swift.h" +#include // kill it +#include +#include +#include +#include "compat.h" + +using namespace swift; +using namespace std; + +struct event_base *Channel::evbase; +struct event Channel::evrecv; + +#define DEBUGTRAFFIC 0 + +/** Arno: Victor's design allows a sender to choose some data to push to + * a receiver, if that receiver is not HINTing at data. Should be disabled + * when the receiver has a download rate limit. + */ +#define ENABLE_SENDERSIZE_PUSH 0 + + +/** Arno, 2011-11-24: When rate limit is on and the download is in progress + * we send HINTs for 2 chunks at the moment. This constant can be used to + * get greater granularity. Set to 0 for original behaviour. + */ +#define HINT_GRANULARITY 16 // chunks + +/** Arno, 2012-03-16: Swift can now tunnel data from CMDGW over UDP to + * CMDGW at another swift instance. This is the default channel ID on UDP + * for that traffic (cf. overlay swarm). 
+ */ +#define CMDGW_TUNNEL_DEFAULT_CHANNEL_ID 0xffffffff + +/* + TODO 25 Oct 18:55 + - range: ALL + - randomized testing of advanced ops (new testcase) + */ + +void Channel::AddPeakHashes (struct evbuffer *evb) { + for(int i=0; i<hashtree()->peak_count(); i++) { + bin_t peak = hashtree()->peak(i); + evbuffer_add_8(evb, SWIFT_HASH); + evbuffer_add_32be(evb, bin_toUInt32(peak)); + evbuffer_add_hash(evb, hashtree()->peak_hash(i)); + char bin_name_buf[32]; + dprintf("%s #%u +phash %s\n",tintstr(),id_,peak.str(bin_name_buf)); + } +} + + +void Channel::AddUncleHashes (struct evbuffer *evb, bin_t pos) { + + char bin_name_buf2[32]; + dprintf("%s #%u +uncle hash for %s\n",tintstr(),id_,pos.str(bin_name_buf2)); + + bin_t peak = hashtree()->peak_for(pos); + while (pos!=peak && ((NOW&3)==3 || !pos.parent().contains(data_out_cap_)) && + ack_in_.is_empty(pos.parent()) ) { + bin_t uncle = pos.sibling(); + evbuffer_add_8(evb, SWIFT_HASH); + evbuffer_add_32be(evb, bin_toUInt32(uncle)); + evbuffer_add_hash(evb, hashtree()->hash(uncle) ); + char bin_name_buf[32]; + dprintf("%s #%u +hash %s\n",tintstr(),id_,uncle.str(bin_name_buf)); + pos = pos.parent(); + } +} + + +bin_t Channel::ImposeHint () { + uint64_t twist = peer_channel_id_; // got no hints, send something randomly + + twist &= hashtree()->peak(0).toUInt(); // FIXME may make it semi-seq here + + bin_t my_pick = binmap_t::find_complement(ack_in_, *(hashtree()->ack_out()), twist); + + my_pick.to_twisted(twist); + while (my_pick.base_length()>max(1,(int)cwnd_)) + my_pick = my_pick.left(); + + return my_pick.twisted(twist); +} + + +bin_t Channel::DequeueHint (bool *retransmitptr) { + bin_t send = bin_t::NONE; + + // Arno, 2012-01-23: Extra protection against channel loss, don't send DATA + if (last_recv_time_ < NOW-(3*TINT_SEC)) + { + dprintf("%s #%u dequeued bad time %llu\n",tintstr(),id_, last_recv_time_ ); + return bin_t::NONE; + } + + // Arno, 2012-07-27: Reenable Victor's retransmit, check for ACKs + *retransmitptr = false; + while (!data_out_tmo_.empty()) { + tintbin tb = data_out_tmo_.front(); + data_out_tmo_.pop_front(); + if (ack_in_.is_filled(tb.bin)) { + // chunk was acknowledged in meantime + continue; + } + else { + send = tb.bin; + *retransmitptr = true; + break; + } + } + + if (ENABLE_SENDERSIZE_PUSH && send.is_none() && hint_in_.empty() && last_recv_time_>NOW-rtt_avg_-TINT_SEC) { + bin_t my_pick = ImposeHint(); // FIXME move to the loop + if (!my_pick.is_none()) { + hint_in_.push_back(my_pick); + char bin_name_buf[32]; + dprintf("%s #%u *hint %s\n",tintstr(),id_,my_pick.str(bin_name_buf)); + } + } + + while (!hint_in_.empty() && send.is_none()) { + bin_t hint = hint_in_.front().bin; + tint time = hint_in_.front().time; + hint_in_.pop_front(); + while (!hint.is_base()) { // FIXME optimize; possible attack + hint_in_.push_front(tintbin(time,hint.right())); + hint = hint.left(); + } + //if (time < NOW-TINT_SEC*3/2 ) + // continue; bad idea + if (!ack_in_.is_filled(hint)) + send = hint; + } + uint64_t mass = 0; + // Arno, 2012-03-09: Is mucho expensive on busy server. 
+ //for(int i=0; iroot_hash()); + dprintf("%s #%u +hash ALL %s\n", + tintstr(),id_,hashtree()->root_hash().hex().c_str()); + } + evbuffer_add_8(evb, SWIFT_HANDSHAKE); + int encoded = -1; + if (send_control_==CLOSE_CONTROL) { + encoded = 0; + } + else + encoded = EncodeID(id_); + evbuffer_add_32be(evb, encoded); + dprintf("%s #%u +hs %x\n",tintstr(),id_,encoded); + have_out_.clear(); +} + + +void Channel::Send () { + + dprintf("%s #%u Send called \n",tintstr(),id_); + + struct evbuffer *evb = evbuffer_new(); + evbuffer_add_32be(evb, peer_channel_id_); + bin_t data = bin_t::NONE; + int evbnonadplen = 0; + if ( is_established() ) { + if (send_control_!=CLOSE_CONTROL) { + // FIXME: seeder check + AddHave(evb); + AddAck(evb); + if (!hashtree()->is_complete()) { + AddHint(evb); + /* Gertjan fix: 7aeea65f3efbb9013f601b22a57ee4a423f1a94d + "Only call Reschedule for 'reverse PEX' if the channel is in keep-alive mode" + */ + AddPexReq(evb); + } + AddPex(evb); + TimeoutDataOut(); + data = AddData(evb); + } else { + // Arno: send explicit close + AddHandshake(evb); + } + } else { + AddHandshake(evb); + AddHave(evb); // Arno, 2011-10-28: from AddHandShake. Why double? + AddHave(evb); + AddAck(evb); + } + + lastsendwaskeepalive_ = (evbuffer_get_length(evb) == 4); + + if (evbuffer_get_length(evb)==4) {// only the channel id; bare keep-alive + data = bin_t::ALL; + } + dprintf("%s #%u sent %ib %s:%x\n", + tintstr(),id_,(int)evbuffer_get_length(evb),peer().str(), + peer_channel_id_); + int r = SendTo(socket_,peer(),evb); + if (r==-1) + print_error("swift can't send datagram"); + else + raw_bytes_up_ += r; + last_send_time_ = NOW; + sent_since_recv_++; + dgrams_sent_++; + evbuffer_free(evb); + Reschedule(); +} + +void Channel::AddHint (struct evbuffer *evb) { + + // RATELIMIT + // Policy is to not send hints when we are above speed limit + if (transfer().GetCurrentSpeed(DDIR_DOWNLOAD) > transfer().GetMaxSpeed(DDIR_DOWNLOAD)) { + if (DEBUGTRAFFIC) + fprintf(stderr,"hint: forbidden#"); + return; + } + + + // 1. Calc max of what we are allowed to request, uncongested bandwidth wise + tint plan_for = max(TINT_SEC,rtt_avg_*4); + + tint timed_out = NOW - plan_for*2; + while ( !hint_out_.empty() && hint_out_.front().time < timed_out ) { + hint_out_size_ -= hint_out_.front().bin.base_length(); + hint_out_.pop_front(); + } + + int first_plan_pck = max ( (tint)1, plan_for / dip_avg_ ); + + // Riccardo, 2012-04-04: Actually allowed is max minus what we already asked for + int queue_allowed_hints = max(0,first_plan_pck-(int)hint_out_size_); + + + // RATELIMIT + // 2. Calc max of what is allowed by the rate limiter + int rate_allowed_hints = LONG_MAX; + if (transfer().GetMaxSpeed(DDIR_DOWNLOAD) < DBL_MAX) + { + uint64_t rough_global_hint_out_size = 0; // rough estimate, as hint_out_ clean up is not done for all channels + channels_t::iterator iter; + for (iter=transfer().mychannels_.begin(); iter!=transfer().mychannels_.end(); iter++) + { + Channel *c = *iter; + if (c != NULL) + rough_global_hint_out_size += c->hint_out_size_; + } + + // Policy: this channel is allowed to hint at the limit - global_hinted_at + // Handle MaxSpeed = unlimited + double rate_hints_limit_float = transfer().GetMaxSpeed(DDIR_DOWNLOAD)/((double)hashtree()->chunk_size()); + + int rate_hints_limit = (int)min((double)LONG_MAX,rate_hints_limit_float); + + // Actually allowed is max minus what we already asked for, globally (=all channels) + rate_allowed_hints = max(0,rate_hints_limit-(int)rough_global_hint_out_size); + } + + // 3. 
Take the smallest allowance from rate and queue limit + uint64_t plan_pck = (uint64_t)min(rate_allowed_hints,queue_allowed_hints); + + // 4. Ask allowance in blocks of chunks to get pipelining going from serving peer. + if (hint_out_size_ == 0 || plan_pck > HINT_GRANULARITY) + { + bin_t hint = transfer().picker().Pick(ack_in_,plan_pck,NOW+plan_for*2); + if (!hint.is_none()) { + if (DEBUGTRAFFIC) + { + char binstr[32]; + fprintf(stderr,"hint c%d: ask %s\n", id(), hint.str(binstr) ); + } + evbuffer_add_8(evb, SWIFT_HINT); + evbuffer_add_32be(evb, bin_toUInt32(hint)); + char bin_name_buf[32]; + dprintf("%s #%u +hint %s [%lli]\n",tintstr(),id_,hint.str(bin_name_buf),hint_out_size_); + dprintf("%s #%u +hint base %s width %d\n",tintstr(),id_,hint.base_left().str(bin_name_buf), hint.base_length() ); + hint_out_.push_back(hint); + hint_out_size_ += hint.base_length(); + //fprintf(stderr,"send c%d: HINTLEN %i\n", id(), hint.base_length()); + //fprintf(stderr,"HL %i ", hint.base_length()); + } + else + dprintf("%s #%u Xhint\n",tintstr(),id_); + + } +} + + +bin_t Channel::AddData (struct evbuffer *evb) { + // RATELIMIT + if (transfer().GetCurrentSpeed(DDIR_UPLOAD) > transfer().GetMaxSpeed(DDIR_UPLOAD)) { + transfer().OnSendNoData(); + return bin_t::NONE; + } + + if (!hashtree()->size()) // know nothing + return bin_t::NONE; + + bin_t tosend = bin_t::NONE; + bool isretransmit = false; + tint luft = send_interval_>>4; // may wake up a bit earlier + if (data_out_.size()NOW-TINT_SEC || data_out_.empty())) + return bin_t::NONE; // once in a while, empty data is sent just to check rtt FIXED + + if (ack_in_.is_empty() && hashtree()->size()) + AddPeakHashes(evb); + + //NETWVSHASH + if (hashtree()->get_check_netwvshash()) + AddUncleHashes(evb,tosend); + + if (!ack_in_.is_empty()) // TODO: cwnd_>1 + data_out_cap_ = tosend; + + // Arno, 2011-11-03: May happen when first data packet is sent to empty + // leech, then peak + uncle hashes may be so big that they don't fit in eth + // frame with DATA. Send 2 datagrams then, one with peaks so they have + // a better chance of arriving. Optimistic violation of atomic datagram + // principle. + if (hashtree()->chunk_size() == SWIFT_DEFAULT_CHUNK_SIZE && evbuffer_get_length(evb) > SWIFT_MAX_NONDATA_DGRAM_SIZE) { + dprintf("%s #%u fsent %ib %s:%x\n", + tintstr(),id_,(int)evbuffer_get_length(evb),peer().str(), + peer_channel_id_); + int ret = Channel::SendTo(socket_,peer(),evb); // kind of fragmentation + if (ret > 0) + raw_bytes_up_ += ret; + evbuffer_add_32be(evb, peer_channel_id_); + } + + if (hashtree()->chunk_size() != SWIFT_DEFAULT_CHUNK_SIZE && isretransmit) { + /* FRAGRAND + * Arno, 2012-01-17: We observe strange behaviour when using + * fragmented UDP packets. When ULANC sends a specific datagram ("995"), + * the 2nd IP packet carrying it gets lost structurally. When + * downloading from the same asset hosted on a Linux 32-bit machine + * using a Win7 32-bit client (behind a NAT), one specific full + * datagram never gets delivered (6970 one before do). A workaround + * is to add some random data to the datagram. Hence we introduce + * the SWIFT_RANDOMIZE message, that is added to the datagram carrying + * the DATA on a retransmit. 
+ */ + char binstr[32]; + fprintf(stderr,"AddData: retransmit of randomized chunk %s\n",tosend.str(binstr) ); + evbuffer_add_8(evb, SWIFT_RANDOMIZE); + evbuffer_add_32be(evb, (int)rand() ); + } + + evbuffer_add_8(evb, SWIFT_DATA); + evbuffer_add_32be(evb, bin_toUInt32(tosend)); + + struct evbuffer_iovec vec; + if (evbuffer_reserve_space(evb, hashtree()->chunk_size(), &vec, 1) < 0) { + print_error("error on evbuffer_reserve_space"); + return bin_t::NONE; + } + size_t r = transfer().GetStorage()->Read((char *)vec.iov_base, + hashtree()->chunk_size(),tosend.base_offset()*hashtree()->chunk_size()); + // TODO: corrupted data, retries, caching + if (r<0) { + print_error("error on reading"); + vec.iov_len = 0; + evbuffer_commit_space(evb, &vec, 1); + return bin_t::NONE; + } + // assert(dgram.space()>=r+4+1); + vec.iov_len = r; + if (evbuffer_commit_space(evb, &vec, 1) < 0) { + print_error("error on evbuffer_commit_space"); + return bin_t::NONE; + } + + last_data_out_time_ = NOW; + data_out_.push_back(tosend); + bytes_up_ += r; + global_bytes_up += r; + + char bin_name_buf[32]; + dprintf("%s #%u +data %s\n",tintstr(),id_,tosend.str(bin_name_buf)); + + // RATELIMIT + // ARNOSMPTODO: count overhead bytes too? Move to Send() then. + transfer_->OnSendData(hashtree()->chunk_size()); + + return tosend; +} + + +void Channel::AddAck (struct evbuffer *evb) { + if (data_in_==tintbin()) + //if (data_in_.bin==bin64_t::NONE) + return; + // sometimes, we send a HAVE (e.g. in case the peer did repetitive send) + evbuffer_add_8(evb, data_in_.time==TINT_NEVER?SWIFT_HAVE:SWIFT_ACK); + evbuffer_add_32be(evb, bin_toUInt32(data_in_.bin)); + if (data_in_.time!=TINT_NEVER) + evbuffer_add_64be(evb, data_in_.time); + + + if (DEBUGTRAFFIC) + fprintf(stderr,"send c%d: ACK %i\n", id(), bin_toUInt32(data_in_.bin)); + + have_out_.set(data_in_.bin); + char bin_name_buf[32]; + dprintf("%s #%u +ack %s %s\n", + tintstr(),id_,data_in_.bin.str(bin_name_buf),tintstr(data_in_.time)); + if (data_in_.bin.layer()>2) + data_in_dbl_ = data_in_.bin; + + //fprintf(stderr,"data_in_ c%d\n", id() ); + data_in_ = tintbin(); + //data_in_ = tintbin(NOW,bin64_t::NONE); +} + + +void Channel::AddHave (struct evbuffer *evb) { + if (!data_in_dbl_.is_none()) { // TODO: do redundancy better + evbuffer_add_8(evb, SWIFT_HAVE); + evbuffer_add_32be(evb, bin_toUInt32(data_in_dbl_)); + data_in_dbl_=bin_t::NONE; + } + if (DEBUGTRAFFIC) + fprintf(stderr,"send c%d: HAVE ",id() ); + + // ZEROSTATE + if (transfer().IsZeroState()) + { + if (is_established()) + return; + + // Say we have peaks + for(int i=0; i<hashtree()->peak_count(); i++) { + bin_t peak = hashtree()->peak(i); + evbuffer_add_8(evb, SWIFT_HAVE); + evbuffer_add_32be(evb, bin_toUInt32(peak)); + char bin_name_buf[32]; + dprintf("%s #%u +have %s\n",tintstr(),id_,peak.str(bin_name_buf)); + } + return; + } + + for(int count=0; count<4; count++) { + bin_t ack = binmap_t::find_complement(have_out_, *(hashtree()->ack_out()), 0); // FIXME: do rotating queue + if (ack.is_none()) + break; + ack = hashtree()->ack_out()->cover(ack); + have_out_.set(ack); + evbuffer_add_8(evb, SWIFT_HAVE); + evbuffer_add_32be(evb, bin_toUInt32(ack)); + + if (DEBUGTRAFFIC) + fprintf(stderr," %i", bin_toUInt32(ack)); + + char bin_name_buf[32]; + dprintf("%s #%u +have %s\n",tintstr(),id_,ack.str(bin_name_buf)); + } + if (DEBUGTRAFFIC) + fprintf(stderr,"\n"); + +} + + +void Channel::Recv (struct evbuffer *evb) { + dprintf("%s #%u recvd %ib\n",tintstr(),id_,(int)evbuffer_get_length(evb)+4); + dgrams_rcvd_++; + + if (!transfer().IsOperational()) { 
dprintf("%s #%u recvd on broken transfer %d \n",tintstr(),id_, transfer().fd() ); + CloseOnError(); + return; + } + + lastrecvwaskeepalive_ = (evbuffer_get_length(evb) == 0); + if (lastrecvwaskeepalive_) + // Update speed measurements such that they decrease when DL stops + transfer().OnRecvData(0); + + if (last_send_time_ && rtt_avg_==TINT_SEC && dev_avg_==0) { + rtt_avg_ = NOW - last_send_time_; + dev_avg_ = rtt_avg_; + dip_avg_ = rtt_avg_; + dprintf("%s #%u sendctrl rtt init %lli\n",tintstr(),id_,rtt_avg_); + } + + bin_t data = evbuffer_get_length(evb) ? bin_t::NONE : bin_t::ALL; + + if (DEBUGTRAFFIC) + fprintf(stderr,"recv c%d: size %d ", id(), evbuffer_get_length(evb)); + + while (evbuffer_get_length(evb)) { + uint8_t type = evbuffer_remove_8(evb); + + if (DEBUGTRAFFIC) + fprintf(stderr," %d", type); + + switch (type) { + case SWIFT_HANDSHAKE: + OnHandshake(evb); + break; + case SWIFT_DATA: + if (!transfer().IsZeroState()) + data=OnData(evb); + else + OnDataZeroState(evb); + break; + case SWIFT_HAVE: + if (!transfer().IsZeroState()) + OnHave(evb); + else + OnHaveZeroState(evb); + break; + case SWIFT_ACK: + OnAck(evb); + break; + case SWIFT_HASH: + if (!transfer().IsZeroState()) + OnHash(evb); + else + OnHashZeroState(evb); + break; + case SWIFT_HINT: + OnHint(evb); + break; + case SWIFT_PEX_ADD: + if (!transfer().IsZeroState()) + OnPexAdd(evb); + else + OnPexAddZeroState(evb); + break; + case SWIFT_PEX_REQ: + if (!transfer().IsZeroState()) + OnPexReq(); + else + OnPexReqZeroState(evb); + break; + case SWIFT_RANDOMIZE: + OnRandomize(evb); + break; //FRAGRAND + default: + dprintf("%s #%u ?msg id unknown %i\n",tintstr(),id_,(int)type); + return; + } + } + if (DEBUGTRAFFIC) + { + fprintf(stderr,"\n"); + } + + last_recv_time_ = NOW; + sent_since_recv_ = 0; + + + // Arno: see if transfer still in working order + transfer().UpdateOperational(); + if (!transfer().IsOperational()) { + dprintf("%s #%u recvd broke transfer %d \n",tintstr(),id_, transfer().fd() ); + CloseOnError(); + return; + } + + Reschedule(); +} + + +void Channel::CloseOnError() +{ + Close(); + // set established->false after Close, so Close does send explicit close. + // RecvDatagram will schedule this for delete. + peer_channel_id_ = 0; + return; +} + + +/* + * Arno: FAXME: HASH+DATA should be handled as a transaction: only when the + * hashes check out should they be stored in the hashtree, otherwise revert. + */ +void Channel::OnHash (struct evbuffer *evb) { + bin_t pos = bin_fromUInt32(evbuffer_remove_32be(evb)); + Sha1Hash hash = evbuffer_remove_hash(evb); + hashtree()->OfferHash(pos,hash); + char bin_name_buf[32]; + dprintf("%s #%u -hash %s\n",tintstr(),id_,pos.str(bin_name_buf)); + + //fprintf(stderr,"HASH %lli hex %s\n",pos.toUInt(), hash.hex().c_str() ); +} + + +void Channel::CleanHintOut (bin_t pos) { + int hi = 0; + while (hi hashtree()->chunk_size()) { + dprintf("%s #%u !data chunk size mismatch %s: exp %lu got " PRISIZET "\n",tintstr(),id_,pos.str(bin_name_buf), hashtree()->chunk_size(), evbuffer_get_length(evb)); + fprintf(stderr,"WARNING: chunk size mismatch: exp %lu got " PRISIZET "\n",hashtree()->chunk_size(), evbuffer_get_length(evb)); + } + + int length = (evbuffer_get_length(evb) < hashtree()->chunk_size()) ? 
evbuffer_get_length(evb) : hashtree()->chunk_size(); + if (!hashtree()->ack_out()->is_empty(pos)) { + // Arno, 2012-01-24: print message for duplicate + dprintf("%s #%u Ddata %s\n",tintstr(),id_,pos.str(bin_name_buf)); + evbuffer_drain(evb, length); + data_in_ = tintbin(TINT_NEVER,transfer().ack_out()->cover(pos)); + + // Arno, 2012-01-24: Make sure data interarrival periods don't get + // screwed up because of these (ignored) duplicates. + UpdateDIP(pos); + return bin_t::NONE; + } + uint8_t *data = evbuffer_pullup(evb, length); + data_in_ = tintbin(NOW,bin_t::NONE); + if (!hashtree()->OfferData(pos, (char*)data, length)) { + evbuffer_drain(evb, length); + char bin_name_buf[32]; + dprintf("%s #%u !data %s\n",tintstr(),id_,pos.str(bin_name_buf)); + return bin_t::NONE; + } + evbuffer_drain(evb, length); + dprintf("%s #%u -data %s\n",tintstr(),id_,pos.str(bin_name_buf)); + + if (DEBUGTRAFFIC) + fprintf(stderr,"$ "); + + bin_t cover = transfer().ack_out()->cover(pos); + for(int i=0; i=transfer().cb_agg[i]) + transfer().callbacks[i](transfer().fd(),cover); // FIXME + if (cover.layer() >= 5) // Arno: tested with 32K, presently = 2 ** 5 * chunk_size CHUNKSIZE + transfer().OnRecvData( pow((double)2,(double)5)*((double)hashtree()->chunk_size()) ); + data_in_.bin = pos; + + UpdateDIP(pos); + CleanHintOut(pos); + bytes_down_ += length; + global_bytes_down += length; + return pos; +} + + +void Channel::UpdateDIP(bin_t pos) +{ + if (!pos.is_none()) { + if (last_data_in_time_) { + tint dip = NOW - last_data_in_time_; + dip_avg_ = ( dip_avg_*3 + dip ) >> 2; + } + last_data_in_time_ = NOW; + } +} + + +void Channel::OnAck (struct evbuffer *evb) { + bin_t ackd_pos = bin_fromUInt32(evbuffer_remove_32be(evb)); + tint peer_time = evbuffer_remove_64be(evb); // FIXME 32 + // FIXME FIXME: wrap around here + if (ackd_pos.is_none()) + return; // likely, broken chunk/ insufficient hashes + if (hashtree()->size() && ackd_pos.base_offset()>=hashtree()->size_in_chunks()) { + char bin_name_buf[32]; + eprintf("invalid ack: %s\n",ackd_pos.str(bin_name_buf)); + return; + } + ack_in_.set(ackd_pos); + + //fprintf(stderr,"OnAck: got bin %s is_complete %d\n", ackd_pos.str(), (int)ack_in_.is_complete_arno( hashtree()->ack_out()->get_height() )); + + int di = 0, ri = 0; + // find an entry for the send (data out) event + while ( di> 3; + dev_avg_ = ( dev_avg_*3 + tintabs(rtt-rtt_avg_) ) >> 2; + assert(data_out_[di].time!=TINT_NEVER); + // one-way delay calculations + tint owd = peer_time - data_out_[di].time; + owd_cur_bin_ = 0;//(owd_cur_bin_+1) & 3; + owd_current_[owd_cur_bin_] = owd; + if ( owd_min_bin_start_+TINT_SEC*30 < NOW ) { + owd_min_bin_start_ = NOW; + owd_min_bin_ = (owd_min_bin_+1) & 3; + owd_min_bins_[owd_min_bin_] = TINT_NEVER; + } + if (owd_min_bins_[owd_min_bin_]>owd) + owd_min_bins_[owd_min_bin_] = owd; + dprintf("%s #%u sendctrl rtt %lli dev %lli based on %s\n", + tintstr(),id_,rtt_avg_,dev_avg_,data_out_[di].bin.str(bin_name_buf)); + ack_rcvd_recent_++; + // early loss detection by packet reordering + for (int re=0; resize() > 0) + { + transfer().availability().setSize(hashtree()->size_in_chunks()); + } + // Ric: update the availability if needed + transfer().availability().set(id_, ack_in_, ackd_pos); + } + + ack_in_.set(ackd_pos); + char bin_name_buf[32]; + dprintf("%s #%u -have %s\n",tintstr(),id_,ackd_pos.str(bin_name_buf)); + + //fprintf(stderr,"OnHave: got bin %s is_complete %d\n", ackd_pos.str(), IsComplete() ); + +} + + +void Channel::OnHint (struct evbuffer *evb) { + bin_t hint = 
bin_fromUInt32(evbuffer_remove_32be(evb)); + // FIXME: wake up here + hint_in_.push_back(hint); + char bin_name_buf[32]; + dprintf("%s #%u -hint %s\n",tintstr(),id_,hint.str(bin_name_buf)); +} + + +void Channel::OnHandshake (struct evbuffer *evb) { + + uint32_t pcid = evbuffer_remove_32be(evb); + dprintf("%s #%u -hs %x\n",tintstr(),id_,pcid); + + if (is_established() && pcid == 0) { + // Arno: received explicit close + peer_channel_id_ = 0; // == established -> false + Close(); + return; + } + + peer_channel_id_ = pcid; + // self-connection check + + if (!SELF_CONN_OK) { + uint32_t try_id = DecodeID(peer_channel_id_); + // Arno, 2012-05-29: Fixed duplicate test + if (channel(try_id) && channel(try_id)->peer_channel_id_) { + peer_channel_id_ = 0; + Close(); + return; // this is a self-connection + } + } + + // FUTURE: channel forking + if (is_established()) + dprintf("%s #%u established %s\n", tintstr(), id_, peer().str()); +} + + +void Channel::OnPexAdd (struct evbuffer *evb) { + uint32_t ipv4 = evbuffer_remove_32be(evb); + uint16_t port = evbuffer_remove_16be(evb); + Address addr(ipv4,port); + dprintf("%s #%u -pex %s\n",tintstr(),id_,addr.str()); + if (transfer().OnPexAddIn(addr)) + useless_pex_count_ = 0; + else + { + dprintf("%s #%u already channel to %s\n", tintstr(),id_,addr.str()); + useless_pex_count_++; + } + pex_request_outstanding_ = false; +} + + +//FRAGRAND +void Channel::OnRandomize (struct evbuffer *evb) { + dprintf("%s #%u -rand\n",tintstr(),id_ ); + // Payload is 4 random bytes + uint32_t r = evbuffer_remove_32be(evb); +} + + +void Channel::AddPex (struct evbuffer *evb) { + // Gertjan fix: Reverse PEX + // PEX messages sent to facilitate NAT/FW puncturing get priority + if (!reverse_pex_out_.empty()) { + do { + tintbin pex_peer = reverse_pex_out_.front(); + reverse_pex_out_.pop_front(); + if (channels[(int) pex_peer.bin.toUInt()] == NULL) + continue; + Address a = channels[(int) pex_peer.bin.toUInt()]->peer(); + // Arno, 2012-02-28: Don't send private addresses to non-private peers. + if (!a.is_private() || (a.is_private() && peer().is_private())) + { + evbuffer_add_8(evb, SWIFT_PEX_ADD); + evbuffer_add_32be(evb, a.ipv4()); + evbuffer_add_16be(evb, a.port()); + dprintf("%s #%u +pex (reverse) %s\n",tintstr(),id_,a.str()); + } + } while (!reverse_pex_out_.empty() && (SWIFT_MAX_NONDATA_DGRAM_SIZE-evbuffer_get_length(evb)) >= 7); + + // Arno: 2012-02-23: Don't think this is right. Bit of DoS thing, + // that you only get back the addr of people that got your addr. + // Disable for now. + //return; + } + + if (!pex_requested_) + return; + + // Arno, 2012-02-28: Don't send private addresses to non-private peers. + int chid = 0, tries=0; + Address a; + while (true) + { + // Arno, 2011-10-03: Choosing Gertjan's RandomChannel over RevealChannel here. + chid = transfer().RandomChannel(id_); + if (chid==-1 || chid==id_ || tries > 5) { + pex_requested_ = false; + return; + } + a = channels[chid]->peer(); + if (!a.is_private() || (a.is_private() && peer().is_private())) + break; + tries++; + } + + evbuffer_add_8(evb, SWIFT_PEX_ADD); + evbuffer_add_32be(evb, a.ipv4()); + evbuffer_add_16be(evb, a.port()); + dprintf("%s #%u +pex %s\n",tintstr(),id_,a.str()); + + pex_requested_ = false; + /* Ensure that we don't add the same id to the reverse_pex_out_ queue + more than once. 
*/ + for (tbqueue::iterator i = channels[chid]->reverse_pex_out_.begin(); + i != channels[chid]->reverse_pex_out_.end(); i++) + if ((int) (i->bin.toUInt()) == id_) + return; + + dprintf("%s #%u adding pex for channel %u at time %s\n", tintstr(), chid, + id_, tintstr(NOW + 2 * TINT_SEC)); + // Arno, 2011-10-03: should really be a queue of (tint,channel id(= uint32_t)) pairs. + channels[chid]->reverse_pex_out_.push_back(tintbin(NOW + 2 * TINT_SEC, bin_t(id_))); + if (channels[chid]->send_control_ == KEEP_ALIVE_CONTROL && + channels[chid]->next_send_time_ > NOW + 2 * TINT_SEC) + channels[chid]->Reschedule(); +} + +void Channel::OnPexReq(void) { + dprintf("%s #%u -pex req\n", tintstr(), id_); + if (NOW > MIN_PEX_REQUEST_INTERVAL + last_pex_request_time_) + pex_requested_ = true; +} + +void Channel::AddPexReq(struct evbuffer *evb) { + // Rate limit the number of PEX requests + if (NOW < next_pex_request_time_) + return; + + // If no answer has been received from a previous request, count it as useless + if (pex_request_outstanding_) + useless_pex_count_++; + + pex_request_outstanding_ = false; + + // Initiate at most SWIFT_MAX_CONNECTIONS connections + if (transfer().hs_in_.size() >= SWIFT_MAX_CONNECTIONS || + // Check whether this channel has been providing useful peer information + useless_pex_count_ > 2) + { + // Arno, 2012-02-23: Fix: Code doesn't recover from useless_pex_count_ > 2, + // let's just try again in 30s + useless_pex_count_ = 0; + next_pex_request_time_ = NOW + 30 * TINT_SEC; + + return; + } + + dprintf("%s #%u +pex req\n", tintstr(), id_); + evbuffer_add_8(evb, SWIFT_PEX_REQ); + /* Add a little more than the minimum interval, such that the other party is + less likely to drop it due to too high rate */ + next_pex_request_time_ = NOW + MIN_PEX_REQUEST_INTERVAL * 1.1; + pex_request_outstanding_ = true; +} + + + +/* + * Channel class methods + */ + +void Channel::LibeventReceiveCallback(evutil_socket_t fd, short event, void *arg) { + // Called by libevent when a datagram is received on the socket + Time(); + RecvDatagram(fd); + event_add(&evrecv, NULL); +} + +void Channel::RecvDatagram (evutil_socket_t socket) { + struct evbuffer *evb = evbuffer_new(); + Address addr; + + RecvFrom(socket, addr, evb); + size_t evboriglen = evbuffer_get_length(evb); + +//#define return_log(...) { fprintf(stderr,__VA_ARGS__); evbuffer_free(evb); return; } +#define return_log(...) 
{ dprintf(__VA_ARGS__); evbuffer_free(evb); return; } + if (evbuffer_get_length(evb)<4) + return_log("socket layer weird: datagram < 4 bytes from %s (prob ICMP unreach)\n",addr.str()); + uint32_t mych = evbuffer_remove_32be(evb); + Sha1Hash hash; + Channel* channel = NULL; + if (mych==0) { // peer initiates handshake + if (evbuffer_get_length(evb)<1+4+1+4+Sha1Hash::SIZE) + return_log ("%s #0 incorrect size %i initial handshake packet %s\n", + tintstr(),(int)evbuffer_get_length(evb),addr.str()); + uint8_t hashid = evbuffer_remove_8(evb); + if (hashid!=SWIFT_HASH) + return_log ("%s #0 no hash in the initial handshake %s\n", + tintstr(),addr.str()); + bin_t pos = bin_fromUInt32(evbuffer_remove_32be(evb)); + if (!pos.is_all()) + return_log ("%s #0 that is not the root hash %s\n",tintstr(),addr.str()); + hash = evbuffer_remove_hash(evb); + FileTransfer* ft = FileTransfer::Find(hash); + if (!ft) + { + ZeroState *zs = ZeroState::GetInstance(); + ft = zs->Find(hash); + if (!ft) + return_log ("%s #0 hash %s unknown, requested by %s\n",tintstr(),hash.hex().c_str(),addr.str()); + } + else if (ft->IsZeroState() && !ft->hashtree()->is_complete()) + { + return_log ("%s #0 zero hash %s broken, requested by %s\n",tintstr(),hash.hex().c_str(),addr.str()); + } + if (!ft->IsOperational()) + { + return_log ("%s #0 hash %s broken, requested by %s\n",tintstr(),hash.hex().c_str(),addr.str()); + } + + dprintf("%s #0 -hash ALL %s\n",tintstr(),hash.hex().c_str()); + + // Arno, 2012-02-27: Check for duplicate channel + Channel* existchannel = ft->FindChannel(addr,NULL); + if (existchannel) + { + // Arno: 2011-10-13: Ignore if established, otherwise consider + // it a concurrent connection attempt. + if (existchannel->is_established()) { + // ARNOTODO: Read complete handshake here so we know whether + // attempt is to new channel or to existing. Currently read + // in OnHandshake() + // + return_log("%s #0 have a channel already to %s\n",tintstr(),addr.str()); + } else { + channel = existchannel; + //fprintf(stderr,"Channel::RecvDatagram: HANDSHAKE: reuse channel %s\n", channel->peer_.str() ); + } + } + if (channel == NULL) { + //fprintf(stderr,"Channel::RecvDatagram: HANDSHAKE: create new channel %s\n", addr.str() ); + channel = new Channel(ft, socket, addr); + } + //fprintf(stderr,"CHANNEL INCOMING DEF hass %s is id %d\n",hash.hex().c_str(),channel->id()); + + } else if (mych==CMDGW_TUNNEL_DEFAULT_CHANNEL_ID) { + // SOCKTUNNEL + CmdGwTunnelUDPDataCameIn(addr,CMDGW_TUNNEL_DEFAULT_CHANNEL_ID,evb); + evbuffer_free(evb); + return; + } else { // peer responds to my handshake (and other messages) + mych = DecodeID(mych); + if (mych>=channels.size()) + return_log("%s invalid channel #%u, %s\n",tintstr(),mych,addr.str()); + channel = channels[mych]; + if (!channel) + return_log ("%s #%u is already closed\n",tintstr(),mych); + if (channel->IsDiffSenderOrDuplicate(addr,mych)) { + channel->Schedule4Close(); + return_log ("%s #%u is duplicate\n",tintstr(),mych); + } + channel->own_id_mentioned_ = true; + } + channel->raw_bytes_down_ += evboriglen; + //dprintf("recvd %i bytes for %i\n",data.size(),channel->id); + bool wasestablished = channel->is_established(); + + //dprintf("%s #%u peer %s recv_peer %s addr %s\n", tintstr(),mych, channel->peer().str(), channel->recv_peer().str(), addr.str() ); + + channel->Recv(evb); + + evbuffer_free(evb); + //SAFECLOSE + if (wasestablished && !channel->is_established()) { + // Arno, 2012-01-26: Received an explict close, clean up channel, safely. 
+ channel->Schedule4Close(); + } +} + + + +/* + * Channel instance methods + */ + +void Channel::CloseChannelByAddress(const Address &addr) +{ + // fprintf(stderr,"CloseChannelByAddress: address is %s\n", addr.str() ); + channels_t::iterator iter; + for (iter = channels.begin(); iter != channels.end(); iter++) + { + Channel *c = *iter; + if (c != NULL && c->peer_ == addr) + { + // ARNOSMPTODO: will do another send attempt before not being + // Rescheduled. + c->peer_channel_id_ = 0; // established->false, do no more sending + c->Schedule4Close(); + break; + } + } +} + + +void Channel::Close () { + + this->SwitchSendControl(CLOSE_CONTROL); + + if (is_established()) + this->Send(); // Arno: send explicit close + + if (!transfer().IsZeroState() && ENABLE_VOD_PIECEPICKER) { + // Ric: remove its binmap from the availability + transfer().availability().remove(id_, ack_in_); + } + + // SAFECLOSE + // Arno: ensure LibeventSendCallback is no longer called with ptr to this Channel + ClearEvents(); +} + + +void Channel::Reschedule () { + + // Arno: CAREFUL: direct send depends on diff between next_send_time_ and + // NOW to be 0, so any calls to Time in between may put things off. Sigh. + Time(); + next_send_time_ = NextSendTime(); + if (next_send_time_!=TINT_NEVER) { + + assert(next_send_timeSchedule4Close(); + } +} + + +/* + * Channel class methods + */ +void Channel::LibeventSendCallback(int fd, short event, void *arg) { + + // Called by libevent when it is the requested send time. + Time(); + Channel * sender = (Channel*) arg; + if (NOW<sender->next_send_time_-TINT_MSEC) + dprintf("%s #%u suspicious send %s<%s\n",tintstr(), + sender->id(),tintstr(NOW),tintstr(sender->next_send_time_)); + if (sender->next_send_time_ != TINT_NEVER) + sender->Send(); +} + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/serialize.h tribler-6.2.0/Tribler/SwiftEngine/serialize.h --- tribler-6.2.0/Tribler/SwiftEngine/serialize.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/serialize.h 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,23 @@ +/* + * serialize.h + * + * Created by Arno Bakker + * Copyright 2010-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#ifndef SWIFT_SERIALIZE_H_ +#define SWIFT_SERIALIZE_H_ + +#include <stdio.h> + +#define fprintf_retiffail(...) { if (fprintf(__VA_ARGS__) < 0) { return -1; }} +#define fscanf_retiffail(...) { if (fscanf(__VA_ARGS__) == EOF) { return -1; }} + +class Serializable { + public: + virtual int serialize(FILE *fp) = 0; + virtual int deserialize(FILE *fp) = 0; +}; + +#endif /* SWIFT_SERIALIZE_H_ */ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/sha1.cpp tribler-6.2.0/Tribler/SwiftEngine/sha1.cpp --- tribler-6.2.0/Tribler/SwiftEngine/sha1.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/sha1.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,289 @@ +// licensed under the GPL v2 as part of the git project http://git-scm.com/ +/* + * SHA1 routine optimized to do word accesses rather than byte accesses, + * and to avoid unnecessary copies into the context array. + * + * This was initially based on the Mozilla SHA1 implementation, although + * none of the original Mozilla code remains. + */ + +/* this is only to get definitions for memcpy(), ntohl() and htonl() */ +//#include "../git-compat-util.h" +#ifdef _WIN32 +#include <winsock2.h> +#else +#include <arpa/inet.h> +#endif +#include <string.h> + +#include "sha1.h" + +#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) + +/* + * Force usage of rol or ror by selecting the one with the smaller constant. 
+ * It _can_ generate slightly smaller code (a constant of 1 is special), but + * perhaps more importantly it's possibly faster on any uarch that does a + * rotate with a loop. + */ + +#define SHA_ASM(op, x, n) ({ unsigned int __res; __asm__(op " %1,%0":"=r" (__res):"i" (n), "0" (x)); __res; }) +#define SHA_ROL(x,n) SHA_ASM("rol", x, n) +#define SHA_ROR(x,n) SHA_ASM("ror", x, n) + +#else + +#define SHA_ROT(X,l,r) (((X) << (l)) | ((X) >> (r))) +#define SHA_ROL(X,n) SHA_ROT(X,n,32-(n)) +#define SHA_ROR(X,n) SHA_ROT(X,32-(n),n) + +#endif + +/* + * If you have 32 registers or more, the compiler can (and should) + * try to change the array[] accesses into registers. However, on + * machines with less than ~25 registers, that won't really work, + * and at least gcc will make an unholy mess of it. + * + * So to avoid that mess which just slows things down, we force + * the stores to memory to actually happen (we might be better off + * with a 'W(t)=(val);asm("":"+m" (W(t))' there instead, as + * suggested by Artur Skawina - that will also make gcc unable to + * try to do the silly "optimize away loads" part because it won't + * see what the value will be). + * + * Ben Herrenschmidt reports that on PPC, the C version comes close + * to the optimized asm with this (ie on PPC you don't want that + * 'volatile', since there are lots of registers). + * + * On ARM we get the best code generation by forcing a full memory barrier + * between each SHA_ROUND, otherwise gcc happily get wild with spilling and + * the stack frame size simply explode and performance goes down the drain. + */ + +#if defined(__i386__) || defined(__x86_64__) + #define setW(x, val) (*(volatile unsigned int *)&W(x) = (val)) +#elif defined(__GNUC__) && defined(__arm__) + #define setW(x, val) do { W(x) = (val); __asm__("":::"memory"); } while (0) +#else + #define setW(x, val) (W(x) = (val)) +#endif + +/* + * Performance might be improved if the CPU architecture is OK with + * unaligned 32-bit loads and a fast ntohl() is available. + * Otherwise fall back to byte loads and shifts which is portable, + * and is faster on architectures with memory alignment issues. + */ + +#if defined(__i386__) || defined(__x86_64__) || \ + defined(__ppc__) || defined(__ppc64__) || \ + defined(__powerpc__) || defined(__powerpc64__) || \ + defined(__s390__) || defined(__s390x__) + +#define get_be32(p) ntohl(*(unsigned int *)(p)) +#define put_be32(p, v) do { *(unsigned int *)(p) = htonl(v); } while (0) + +#else + +#define get_be32(p) ( \ + (*((unsigned char *)(p) + 0) << 24) | \ + (*((unsigned char *)(p) + 1) << 16) | \ + (*((unsigned char *)(p) + 2) << 8) | \ + (*((unsigned char *)(p) + 3) << 0) ) +#define put_be32(p, v) do { \ + unsigned int __v = (v); \ + *((unsigned char *)(p) + 0) = __v >> 24; \ + *((unsigned char *)(p) + 1) = __v >> 16; \ + *((unsigned char *)(p) + 2) = __v >> 8; \ + *((unsigned char *)(p) + 3) = __v >> 0; } while (0) + +#endif + +/* This "rolls" over the 512-bit array */ +#define W(x) (array[(x)&15]) + +/* + * Where do we get the source from? The first 16 iterations get it from + * the input data, the next mix it from the 512-bit array. 
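+ * (Equivalently, W(t) = ROL(W(t-3) ^ W(t-8) ^ W(t-14) ^ W(t-16), 1), the
+ * standard SHA-1 message schedule, kept in a 16-word circular buffer.)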
+ */ +#define SHA_SRC(t) get_be32(data + t) +#define SHA_MIX(t) SHA_ROL(W(t+13) ^ W(t+8) ^ W(t+2) ^ W(t), 1) + +#define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \ + unsigned int TEMP = input(t); setW(t, TEMP); \ + E += TEMP + SHA_ROL(A,5) + (fn) + (constant); \ + B = SHA_ROR(B, 2); } while (0) + +#define T_0_15(t, A, B, C, D, E) SHA_ROUND(t, SHA_SRC, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E ) +#define T_16_19(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E ) +#define T_20_39(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0x6ed9eba1, A, B, C, D, E ) +#define T_40_59(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, ((B&C)+(D&(B^C))) , 0x8f1bbcdc, A, B, C, D, E ) +#define T_60_79(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0xca62c1d6, A, B, C, D, E ) + +static void blk_SHA1_Block(blk_SHA_CTX *ctx, const unsigned int *data) +{ + unsigned int A,B,C,D,E; + unsigned int array[16]; + + A = ctx->H[0]; + B = ctx->H[1]; + C = ctx->H[2]; + D = ctx->H[3]; + E = ctx->H[4]; + + /* Round 1 - iterations 0-16 take their input from 'data' */ + T_0_15( 0, A, B, C, D, E); + T_0_15( 1, E, A, B, C, D); + T_0_15( 2, D, E, A, B, C); + T_0_15( 3, C, D, E, A, B); + T_0_15( 4, B, C, D, E, A); + T_0_15( 5, A, B, C, D, E); + T_0_15( 6, E, A, B, C, D); + T_0_15( 7, D, E, A, B, C); + T_0_15( 8, C, D, E, A, B); + T_0_15( 9, B, C, D, E, A); + T_0_15(10, A, B, C, D, E); + T_0_15(11, E, A, B, C, D); + T_0_15(12, D, E, A, B, C); + T_0_15(13, C, D, E, A, B); + T_0_15(14, B, C, D, E, A); + T_0_15(15, A, B, C, D, E); + + /* Round 1 - tail. Input from 512-bit mixing array */ + T_16_19(16, E, A, B, C, D); + T_16_19(17, D, E, A, B, C); + T_16_19(18, C, D, E, A, B); + T_16_19(19, B, C, D, E, A); + + /* Round 2 */ + T_20_39(20, A, B, C, D, E); + T_20_39(21, E, A, B, C, D); + T_20_39(22, D, E, A, B, C); + T_20_39(23, C, D, E, A, B); + T_20_39(24, B, C, D, E, A); + T_20_39(25, A, B, C, D, E); + T_20_39(26, E, A, B, C, D); + T_20_39(27, D, E, A, B, C); + T_20_39(28, C, D, E, A, B); + T_20_39(29, B, C, D, E, A); + T_20_39(30, A, B, C, D, E); + T_20_39(31, E, A, B, C, D); + T_20_39(32, D, E, A, B, C); + T_20_39(33, C, D, E, A, B); + T_20_39(34, B, C, D, E, A); + T_20_39(35, A, B, C, D, E); + T_20_39(36, E, A, B, C, D); + T_20_39(37, D, E, A, B, C); + T_20_39(38, C, D, E, A, B); + T_20_39(39, B, C, D, E, A); + + /* Round 3 */ + T_40_59(40, A, B, C, D, E); + T_40_59(41, E, A, B, C, D); + T_40_59(42, D, E, A, B, C); + T_40_59(43, C, D, E, A, B); + T_40_59(44, B, C, D, E, A); + T_40_59(45, A, B, C, D, E); + T_40_59(46, E, A, B, C, D); + T_40_59(47, D, E, A, B, C); + T_40_59(48, C, D, E, A, B); + T_40_59(49, B, C, D, E, A); + T_40_59(50, A, B, C, D, E); + T_40_59(51, E, A, B, C, D); + T_40_59(52, D, E, A, B, C); + T_40_59(53, C, D, E, A, B); + T_40_59(54, B, C, D, E, A); + T_40_59(55, A, B, C, D, E); + T_40_59(56, E, A, B, C, D); + T_40_59(57, D, E, A, B, C); + T_40_59(58, C, D, E, A, B); + T_40_59(59, B, C, D, E, A); + + /* Round 4 */ + T_60_79(60, A, B, C, D, E); + T_60_79(61, E, A, B, C, D); + T_60_79(62, D, E, A, B, C); + T_60_79(63, C, D, E, A, B); + T_60_79(64, B, C, D, E, A); + T_60_79(65, A, B, C, D, E); + T_60_79(66, E, A, B, C, D); + T_60_79(67, D, E, A, B, C); + T_60_79(68, C, D, E, A, B); + T_60_79(69, B, C, D, E, A); + T_60_79(70, A, B, C, D, E); + T_60_79(71, E, A, B, C, D); + T_60_79(72, D, E, A, B, C); + T_60_79(73, C, D, E, A, B); + T_60_79(74, B, C, D, E, A); + T_60_79(75, A, B, C, D, E); + T_60_79(76, E, A, B, C, D); + T_60_79(77, D, E, A, B, C); + T_60_79(78, C, D, E, A, B); + 
T_60_79(79, B, C, D, E, A); + + ctx->H[0] += A; + ctx->H[1] += B; + ctx->H[2] += C; + ctx->H[3] += D; + ctx->H[4] += E; +} + +void blk_SHA1_Init(blk_SHA_CTX *ctx) +{ + ctx->size = 0; + + /* Initialize H with the magic constants (see FIPS180 for constants) */ + ctx->H[0] = 0x67452301; + ctx->H[1] = 0xefcdab89; + ctx->H[2] = 0x98badcfe; + ctx->H[3] = 0x10325476; + ctx->H[4] = 0xc3d2e1f0; +} + +void blk_SHA1_Update(blk_SHA_CTX *ctx, const void *data, unsigned long len) +{ + int lenW = ctx->size & 63; + + ctx->size += len; + + /* Read the data into W and process blocks as they get full */ + if (lenW) { + int left = 64 - lenW; + if (len < left) + left = len; + memcpy(lenW + (char *)ctx->W, data, left); + lenW = (lenW + left) & 63; + len -= left; + data = ((const char *)data + left); + if (lenW) + return; + blk_SHA1_Block(ctx, ctx->W); + } + while (len >= 64) { + blk_SHA1_Block(ctx, (const unsigned int*)data); + data = ((const char *)data + 64); + len -= 64; + } + if (len) + memcpy(ctx->W, data, len); +} + +void blk_SHA1_Final(unsigned char hashout[20], blk_SHA_CTX *ctx) +{ + static const unsigned char pad[64] = { 0x80 }; + unsigned int padlen[2]; + int i; + + /* Pad with a binary 1 (ie 0x80), then zeroes, then length */ + padlen[0] = htonl(ctx->size >> 29); + padlen[1] = htonl(ctx->size << 3); + + i = ctx->size & 63; + blk_SHA1_Update(ctx, pad, 1+ (63 & (55 - i))); + blk_SHA1_Update(ctx, padlen, 8); + + /* Output hash */ + for (i = 0; i < 5; i++) + put_be32(hashout + i*4, ctx->H[i]); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/sha1.h tribler-6.2.0/Tribler/SwiftEngine/sha1.h --- tribler-6.2.0/Tribler/SwiftEngine/sha1.h 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/sha1.h 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,23 @@ +// licensed under the GPL v2 as part of the git project http://git-scm.com/ +/* + * SHA1 routine optimized to do word accesses rather than byte accesses, + * and to avoid unnecessary copies into the context array. + * + * This was initially based on the Mozilla SHA1 implementation, although + * none of the original Mozilla code remains. + */ +#ifndef GIT_SHA1 +#define GIT_SHA1 + +typedef struct { + unsigned long long size; + unsigned int H[5]; + unsigned int W[16]; +} blk_SHA_CTX; + +void blk_SHA1_Init(blk_SHA_CTX *ctx); +void blk_SHA1_Update(blk_SHA_CTX *ctx, const void *dataIn, unsigned long len); +void blk_SHA1_Final(unsigned char hashout[20], blk_SHA_CTX *ctx); + +#endif + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/statsgw.cpp tribler-6.2.0/Tribler/SwiftEngine/statsgw.cpp --- tribler-6.2.0/Tribler/SwiftEngine/statsgw.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/statsgw.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,284 @@ +/* + * statsgw.cpp + * HTTP server for showing some DL stats via SwarmPlayer 3000's webUI, + * libevent based + * + * Created by Victor Grishchenko, Arno Bakker + * Copyright 2010-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ + +#include "swift.h" +#include + +using namespace swift; + +int statsgw_reqs_count = 0; + + +uint64_t statsgw_last_down; +uint64_t statsgw_last_up; +tint statsgw_last_time = 0; +bool statsgw_quit_process=false; +struct evhttp *statsgw_event; +struct evhttp_bound_socket *statsgw_handle; + + +const char *top_page = " \ + \ + \ + \ + \ + \ + Swift Web Interface \ + \ + \ +
[editor: page-header markup stripped in extraction; surviving heading:] \
Swift swarms: \
"; + +const char *swarm_page_templ = " \ +

[editor: markup stripped in extraction; surviving template fields:] \
Root hash: %s \
  • Progress: %d%c \
  • Download speed: %d KB/s \
  • Upload speed: %d KB/s \
"; + + +const char *bottom_page = " \ + \ +
[editor: markup stripped in extraction] \
\ + \ +"; + + +const char *exit_page = " \ + \ + \ + \ + \ + Swift Web Interface \ + \ + \ +
[editor: page markup stripped in extraction; surviving message:] \
Swift is no longer running. \
\ + \ +"; + + +static void StatsGwNewRequestCallback (struct evhttp_request *evreq, void *arg); + + +void StatsExitCallback(struct evhttp_request *evreq) +{ + char contlenstr[1024]; + sprintf(contlenstr,"%i",strlen(exit_page)); + struct evkeyvalq *headers = evhttp_request_get_output_headers(evreq); + evhttp_add_header(headers, "Connection", "close" ); + evhttp_add_header(headers, "Content-Type", "text/html" ); + evhttp_add_header(headers, "Content-Length", contlenstr ); + evhttp_add_header(headers, "Accept-Ranges", "none" ); + + // Construct evbuffer and send via chunked encoding + struct evbuffer *evb = evbuffer_new(); + int ret = evbuffer_add(evb,exit_page,strlen(exit_page)); + if (ret < 0) { + print_error("statsgw: ExitCallback: error evbuffer_add"); + return; + } + + evhttp_send_reply(evreq, 200, "OK", evb); + evbuffer_free(evb); +} + + +bool StatsQuit() +{ + return statsgw_quit_process; +} + + +void StatsOverviewCallback(struct evhttp_request *evreq) +{ + tint nu = NOW; + uint64_t down = Channel::global_raw_bytes_down; + uint64_t up = Channel::global_raw_bytes_up; + + int dspeed = 0, uspeed = 0; + tint tdiff = (nu - statsgw_last_time)/1000000; + if (tdiff > 0) { + dspeed = (int)(((down-statsgw_last_down)/1024) / tdiff); + uspeed = (int)(((up-statsgw_last_up)/1024) / tdiff); + } + //statsgw_last_down = down; + //statsgw_last_up = up; + + + char bodystr[102400]; + strcpy(bodystr,""); + strcat(bodystr,top_page); + + for (int i=0; ifd(); + uint64_t total = (int)swift::Size(fd); + uint64_t down = (int)swift::Complete(fd); + int perc = (int)((down * 100) / total); + + char roothashhexstr[256]; + sprintf(roothashhexstr,"%s", RootMerkleHash(fd).hex().c_str() ); + + char templ[1024]; + sprintf(templ,swarm_page_templ,roothashhexstr, perc, '%', dspeed, uspeed ); + strcat(bodystr,templ); + } + } + + strcat(bodystr,bottom_page); + + char contlenstr[1024]; + sprintf(contlenstr,"%i",strlen(bodystr)); + struct evkeyvalq *headers = evhttp_request_get_output_headers(evreq); + evhttp_add_header(headers, "Connection", "close" ); + evhttp_add_header(headers, "Content-Type", "text/html" ); + evhttp_add_header(headers, "Content-Length", contlenstr ); + evhttp_add_header(headers, "Accept-Ranges", "none" ); + + // Construct evbuffer and send via chunked encoding + struct evbuffer *evb = evbuffer_new(); + int ret = evbuffer_add(evb,bodystr,strlen(bodystr)); + if (ret < 0) { + print_error("statsgw: OverviewCallback: error evbuffer_add"); + return; + } + + evhttp_send_reply(evreq, 200, "OK", evb); + evbuffer_free(evb); +} + + +void StatsGetSpeedCallback(struct evhttp_request *evreq) +{ + if (statsgw_last_time == 0) + { + statsgw_last_time = NOW-1000000; + } + + tint nu = Channel::Time(); + uint64_t down = Channel::global_raw_bytes_down; + uint64_t up = Channel::global_raw_bytes_up; + + int dspeed = 0, uspeed = 0; + tint tdiff = (nu - statsgw_last_time)/1000000; + if (tdiff > 0) { + dspeed = (int)(((down-statsgw_last_down)/1024) / tdiff); + uspeed = (int)(((up-statsgw_last_up)/1024) / tdiff); + } + statsgw_last_down = down; + statsgw_last_up = up; + statsgw_last_time = nu; + + // Arno: PDD+ wants content speeds too + double contentdownspeed = 0.0, contentupspeed = 0.0; + uint32_t nleech=0,nseed=0; + for (int i=0; iGetCurrentSpeed(DDIR_DOWNLOAD); + contentupspeed += ft->GetCurrentSpeed(DDIR_UPLOAD); + nleech += ft->GetNumLeechers(); + nseed += ft->GetNumSeeders(); + } + } + int cdownspeed = (int)(contentdownspeed/1024.0); + int cupspeed = (int)(contentupspeed/1024.0); + + char speedstr[1024]; + 
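+    // [editor note] The stats reply below is a hand-built JSON object; the
+    // 1 KiB buffer is far larger than the worst-case formatted string.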
sprintf(speedstr,"{\"downspeed\": %d, \"success\": \"true\", \"upspeed\": %d, \"cdownspeed\": %d, \"cupspeed\": %d, \"nleech\": %d, \"nseed\": %d}", dspeed, uspeed, cdownspeed, cupspeed, nleech, nseed ); + + char contlenstr[1024]; + sprintf(contlenstr,"%i",strlen(speedstr)); + struct evkeyvalq *headers = evhttp_request_get_output_headers(evreq); + evhttp_add_header(headers, "Connection", "close" ); + evhttp_add_header(headers, "Content-Type", "application/json" ); + evhttp_add_header(headers, "Content-Length", contlenstr ); + evhttp_add_header(headers, "Accept-Ranges", "none" ); + + // Construct evbuffer and send via chunked encoding + struct evbuffer *evb = evbuffer_new(); + int ret = evbuffer_add(evb,speedstr,strlen(speedstr)); + if (ret < 0) { + print_error("statsgw: GetSpeedCallback: error evbuffer_add"); + return; + } + + evhttp_send_reply(evreq, 200, "OK", evb); + evbuffer_free(evb); +} + + +void StatsGwNewRequestCallback (struct evhttp_request *evreq, void *arg) { + + dprintf("%s @%i http new request\n",tintstr(),statsgw_reqs_count); + statsgw_reqs_count++; + + if (evhttp_request_get_command(evreq) != EVHTTP_REQ_GET) { + return; + } + + // Parse URI + const char *uri = evhttp_request_get_uri(evreq); + //struct evkeyvalq *headers = evhttp_request_get_input_headers(evreq); + //const char *contentrangestr =evhttp_find_header(headers,"Content-Range"); + + fprintf(stderr,"statsgw: GOT %s\n", uri); + + if (strstr(uri,"get_speed_info") != NULL) + { + StatsGetSpeedCallback(evreq); + } + else if (!strncmp(uri,"/webUI/exit",strlen("/webUI/exit")) || statsgw_quit_process) + { + statsgw_quit_process = true; + StatsExitCallback(evreq); + } + else if (!strncmp(uri,"/webUI",strlen("/webUI"))) + { + StatsOverviewCallback(evreq); + } +} + + +bool InstallStatsGateway (struct event_base *evbase,Address bindaddr) { + // Arno, 2011-10-04: From libevent's http-server.c example + + /* Create a new evhttp object to handle requests. */ + statsgw_event = evhttp_new(evbase); + if (!statsgw_event) { + print_error("statsgw: evhttp_new failed"); + return false; + } + + /* Install callback for all requests */ + evhttp_set_gencb(statsgw_event, StatsGwNewRequestCallback, NULL); + + /* Now we tell the evhttp what port to listen on */ + statsgw_handle = evhttp_bind_socket_with_handle(statsgw_event, bindaddr.ipv4str(), bindaddr.port()); + if (!statsgw_handle) { + print_error("statsgw: evhttp_bind_socket_with_handle failed"); + return false; + } + + return true; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/storage.cpp tribler-6.2.0/Tribler/SwiftEngine/storage.cpp --- tribler-6.2.0/Tribler/SwiftEngine/storage.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/storage.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,702 @@ +/* + * storage.cpp + * swift + * + * Created by Arno Bakker. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + * TODO: + * - Unicode? 
+ * - Slow resume after alloc big file (Win32, work on swift-trunk) + */ + +#include "swift.h" +#include "compat.h" + +#include +#include + +using namespace swift; + + +const std::string Storage::MULTIFILE_PATHNAME = "META-INF-multifilespec.txt"; +const std::string Storage::MULTIFILE_PATHNAME_FILE_SEP = "/"; + +Storage::Storage(std::string ospathname, std::string destdir, int transferfd) : + Operational(), + state_(STOR_STATE_INIT), + os_pathname_(ospathname), destdir_(destdir), ht_(NULL), spec_size_(0), + single_fd_(-1), reserved_size_(-1), total_size_from_spec_(-1), last_sf_(NULL), + transfer_fd_(transferfd), alloc_cb_(NULL) +{ + + //fprintf(stderr,"Storage: ospathname %s destdir %s\n", ospathname.c_str(), destdir.c_str() ); + + int64_t fsize = file_size_by_path_utf8(ospathname.c_str()); + if (fsize < 0 && errno == ENOENT) + { + // File does not exist, assume we're a client and all will be revealed + // (single file, multi-spec) when chunks come in. + return; + } + + // File exists. Check first bytes to see if a multifile-spec + FILE *fp = fopen_utf8(ospathname.c_str(),"rb"); + if (!fp) + { + dprintf("%s %s storage: File exists, but error opening\n", tintstr(), roothashhex().c_str() ); + print_error("Could not open existing storage file"); + SetBroken(); + return; + } + + char readbuf[1024]; + int ret = fread(readbuf,sizeof(char),MULTIFILE_PATHNAME.length(),fp); + fclose(fp); + if (ret < 0) + { + SetBroken(); + return; + } + + if (!strncmp(readbuf,MULTIFILE_PATHNAME.c_str(),MULTIFILE_PATHNAME.length())) + { + // Pathname points to a multi-file spec, assume we're seeding + state_ = STOR_STATE_MFSPEC_COMPLETE; + + dprintf("%s %s storage: Found multifile-spec, will seed it.\n", tintstr(), roothashhex().c_str() ); + + StorageFile *sf = new StorageFile(MULTIFILE_PATHNAME,0,fsize,ospathname); + sfs_.push_back(sf); + if (ParseSpec(sf) < 0) + { + print_error("storage: error parsing multi-file spec"); + SetBroken(); + } + } + else + { + // Normal swarm + dprintf("%s %s storage: Found single file, will check it.\n", tintstr(), roothashhex().c_str() ); + + (void)OpenSingleFile(); // sets state to STOR_STATE_SINGLE_FILE + } +} + + +Storage::~Storage() +{ + if (single_fd_ != -1) + { + close(single_fd_); + } + + storage_files_t::iterator iter; + for (iter = sfs_.begin(); iter < sfs_.end(); iter++) + { + StorageFile *sf = *iter; + delete sf; + } + sfs_.clear(); +} + + +ssize_t Storage::Write(const void *buf, size_t nbyte, int64_t offset) +{ + //dprintf("%s %s storage: Write: nbyte %d off %lld\n", tintstr(), roothashhex().c_str(), nbyte,offset); + + if (state_ == STOR_STATE_SINGLE_FILE) + { + return pwrite(single_fd_, buf, nbyte, offset); + } + // MULTIFILE + if (state_ == STOR_STATE_INIT) + { + if (offset != 0) + { + errno = EINVAL; + return -1; + } + + //dprintf("%s %s storage: Write: chunk 0\n"); + + // Check for multifile spec. If present, multifile, otherwise single + if (!strncmp((const char *)buf,MULTIFILE_PATHNAME.c_str(),strlen(MULTIFILE_PATHNAME.c_str()))) + { + dprintf("%s %s storage: Write: Is multifile\n", tintstr(), roothashhex().c_str() ); + + // multifile entry will fit into first chunk + const char *bufstr = (const char *)buf; + int n = sscanf((const char *)&bufstr[strlen(MULTIFILE_PATHNAME.c_str())+1],"%lld",&spec_size_); + if (n != 1) + { + errno = EINVAL; + return -1; + } + + //dprintf("%s %s storage: Write: multifile: specsize %lld\n", tintstr(), roothashhex().c_str(), spec_size_ ); + + // Create StorageFile for multi-file spec. 
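+            // [editor note, illustrative] The spec is plain text: line one is
+            // the spec's own name and byte size, then one "path size" line per
+            // payload file (names/sizes below are made up):
+            //   META-INF-multifilespec.txt 67
+            //   movieA.avi 12345
+            //   subs/movieA.srt 678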
+ StorageFile *sf = new StorageFile(MULTIFILE_PATHNAME,0,spec_size_,os_pathname_); + sfs_.push_back(sf); + + // Write all, or part of spec and set state_ + return WriteSpecPart(sf,buf,nbyte,offset); + } + else + { + // Is a single file swarm. + int ret = OpenSingleFile(); // sets state to STOR_STATE_SINGLE_FILE + if (ret < 0) + return -1; + + // Write chunk to file via recursion. + return Write(buf,nbyte,offset); + } + } + else if (state_ == STOR_STATE_MFSPEC_SIZE_KNOWN) + { + StorageFile *sf = sfs_[0]; + + dprintf("%s %s storage: Write: mf spec size known\n", tintstr(), roothashhex().c_str()); + + return WriteSpecPart(sf,buf,nbyte,offset); + } + else + { + // state_ == STOR_STATE_MFSPEC_COMPLETE; + //dprintf("%s %s storage: Write: complete\n", tintstr(), roothashhex().c_str()); + + StorageFile *sf = NULL; + if (last_sf_ != NULL && offset >= last_sf_->GetStart() && offset <= last_sf_->GetEnd()) + sf = last_sf_; + else + { + sf = FindStorageFile(offset); + if (sf == NULL) + { + dprintf("%s %s storage: Write: File not found!\n", tintstr(), roothashhex().c_str()); + errno = EINVAL; + return -1; + } + last_sf_ = sf; + } + + std::pair ht = WriteBuffer(sf,buf,nbyte,offset); + if (ht.first == -1) + { + errno = EINVAL; + return -1; + } + + //dprintf("%s %s storage: Write: complete: first %lld second %lld\n", tintstr(), roothashhex().c_str(), ht.first, ht.second); + + if (ht.second > 0) + { + // Write tail to next StorageFile(s) using recursion + const char *bufstr = (const char *)buf; + int ret = Write(&bufstr[ht.first], ht.second, offset+ht.first ); + if (ret < 0) + return ret; + else + return ht.first+ret; + } + else + return ht.first; + } +} + + +int Storage::WriteSpecPart(StorageFile *sf, const void *buf, size_t nbyte, int64_t offset) +{ + //dprintf("%s %s storage: WriteSpecPart: %s %d %lld\n", tintstr(), roothashhex().c_str(), sf->GetSpecPathName().c_str(), nbyte, offset ); + + std::pair ht = WriteBuffer(sf,buf,nbyte,offset); + if (ht.first == -1) + { + errno = EINVAL; + return -1; + } + + if (offset+ht.first == sf->GetEnd()+1) + { + // Wrote last part of spec + state_ = STOR_STATE_MFSPEC_COMPLETE; + + int ret = ParseSpec(sf); + if (ret < 0) + { + errno = EINVAL; + return -1; + } + + // We know exact size after chunk 0, inform hash tree (which doesn't + // know until chunk N-1) is in. 
+ ht_->set_size(GetSizeFromSpec()); + + // Resize all files + ret = ResizeReserved(GetSizeFromSpec()); + if (ret < 0) + return ret; + + // Write tail to next StorageFile(s) using recursion + const char *bufstr = (const char *)buf; + ret = Write(&bufstr[ht.first], ht.second, offset+ht.first ); + if (ret < 0) + return ret; + else + return ht.first+ret; + } + else + { + state_ = STOR_STATE_MFSPEC_SIZE_KNOWN; + return ht.first; + } +} + + + +std::pair Storage::WriteBuffer(StorageFile *sf, const void *buf, size_t nbyte, int64_t offset) +{ + //dprintf("%s %s storage: WriteBuffer: %s %d %lld\n", tintstr(), roothashhex().c_str(), sf->GetSpecPathName().c_str(), nbyte, offset ); + + int ret = -1; + if (offset+nbyte <= sf->GetEnd()+1) + { + // Chunk belongs completely in sf + ret = sf->Write(buf,nbyte,offset - sf->GetStart()); + + //dprintf("%s %s storage: WriteBuffer: Write: covered ret %d\n", tintstr(), roothashhex().c_str(), ret ); + + if (ret < 0) + return std::make_pair(-1,-1); + else + return std::make_pair(nbyte,0); + + } + else + { + int64_t head = sf->GetEnd()+1 - offset; + int64_t tail = nbyte - head; + + // Write last part of file + ret = sf->Write(buf,head,offset - sf->GetStart() ); + + //dprintf("%s %s storage: WriteBuffer: Write: partial ret %d\n", tintstr(), roothashhex().c_str(), ret ); + + if (ret < 0) + return std::make_pair(-1,-1); + else + return std::make_pair(head,tail); + } +} + + + + +StorageFile * Storage::FindStorageFile(int64_t offset) +{ + // Binary search for StorageFile that manages the given offset + int imin = 0, imax=sfs_.size()-1; + while (imax >= imin) + { + int imid = (imin + imax) / 2; + if (offset >= sfs_[imid]->GetEnd()+1) + imin = imid + 1; + else if (offset < sfs_[imid]->GetStart()) + imax = imid - 1; + else + return sfs_[imid]; + } + // Should find it. + return NULL; +} + + +int Storage::ParseSpec(StorageFile *sf) +{ + char *retstr = NULL,line[MULTIFILE_MAX_LINE+1]; + FILE *fp = fopen_utf8(sf->GetOSPathName().c_str(),"rb"); + if (fp == NULL) + { + print_error("cannot open multifile-spec"); + SetBroken(); + return -1; + } + + int64_t offset=0; + int ret=0; + while(1) + { + retstr = fgets(line,MULTIFILE_MAX_LINE,fp); + if (retstr == NULL) + break; + + // Format: "specpath filesize\n" + std::string pline(line); + size_t idx = pline.rfind(' ',pline.length()-1); + + std::string specpath = pline.substr(0,idx); + std::string sizestr = pline.substr(idx+1,pline.length()); + + int64_t fsize=0; + int n = sscanf(sizestr.c_str(),"%lld",&fsize); + if (n == 0) + { + ret = -1; + break; + } + + // Check pathname safety + if (specpath.substr(0,1) == MULTIFILE_PATHNAME_FILE_SEP) + { + // Must not start with / + ret = -1; + break; + } + idx = specpath.find("..",0); + if (idx != std::string::npos) + { + // Must not contain .. 
path escapes + ret = -1; + break; + } + + if (offset == 0) + { + // sf already created for multifile-spec entry + offset += sf->GetSize(); + } + else + { + // Convert specname to OS name + std::string ospath = destdir_+FILE_SEP; + ospath += Storage::spec2ospn(specpath); + + StorageFile *sf = new StorageFile(specpath,offset,fsize,ospath); + sfs_.push_back(sf); + offset += fsize; + } + } + + // Assume: Multi-file spec sorted, so vector already sorted on offset + storage_files_t::iterator iter; + for (iter = sfs_.begin(); iter < sfs_.end(); iter++) + { + StorageFile *sf = *iter; + dprintf("%s %s storage: parsespec: Got %s start %lld size %lld\n", tintstr(), roothashhex().c_str(), sf->GetSpecPathName().c_str(), sf->GetStart(), sf->GetSize() ); + } + + fclose(fp); + if (ret < 0) + { + SetBroken(); + return ret; + } + else { + total_size_from_spec_ = offset; + return 0; + } +} + + +int Storage::OpenSingleFile() +{ + state_ = STOR_STATE_SINGLE_FILE; + + single_fd_ = open_utf8(os_pathname_.c_str(),OPENFLAGS,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); + if (single_fd_<0) { + single_fd_ = -1; + print_error("storage: cannot open single file"); + SetBroken(); + return -1; + } + + // Perform postponed resize. + if (reserved_size_ != -1) + { + int ret = ResizeReserved(reserved_size_); + if (ret < 0) + { + close(single_fd_); + single_fd_ = -1; + SetBroken(); + } + } + + return single_fd_; +} + + + + +ssize_t Storage::Read(void *buf, size_t nbyte, int64_t offset) +{ + //dprintf("%s %s storage: Read: nbyte " PRISIZET " off %lld\n", tintstr(), roothashhex().c_str(), nbyte, offset ); + + if (state_ == STOR_STATE_SINGLE_FILE) + { + return pread(single_fd_, buf, nbyte, offset); + } + + // MULTIFILE + if (state_ == STOR_STATE_INIT) + { + errno = EINVAL; + return -1; + } + else + { + StorageFile *sf = NULL; + if (last_sf_ != NULL && offset >= last_sf_->GetStart() && offset <= last_sf_->GetEnd()) + sf = last_sf_; + else + { + sf = FindStorageFile(offset); + if (sf == NULL) + { + errno = EINVAL; + return -1; + } + last_sf_ = sf; + //dprintf("%s %s storage: Read: Found file %s for off %lld\n", tintstr(), roothashhex().c_str(), sf->GetSpecPathName().c_str(), offset ); + } + + ssize_t ret = sf->Read(buf,nbyte,offset - sf->GetStart()); + if (ret < 0) + return ret; + + //dprintf("%s %s storage: Read: read %d\n", tintstr(), roothashhex().c_str(), ret ); + + if (ret < nbyte && offset+ret != ht_->size()) + { + //dprintf("%s %s storage: Read: want %d more\n", tintstr(), roothashhex().c_str(), nbyte-ret ); + + // Not at end, and can fit more in buffer. 
Do recursion + char *bufstr = (char *)buf; + ssize_t newret = Read((void *)(bufstr+ret),nbyte-ret,offset+ret); + if (newret < 0) + return newret; + else + return ret + newret; + } + else + return ret; + } +} + + +int64_t Storage::GetSizeFromSpec() +{ + if (state_ == STOR_STATE_SINGLE_FILE) + return -1; + else + return total_size_from_spec_; +} + + + +int64_t Storage::GetReservedSize() +{ + if (state_ == STOR_STATE_SINGLE_FILE) + { + return file_size(single_fd_); + } + else if (state_ != STOR_STATE_MFSPEC_COMPLETE) + return -1; + + // MULTIFILE + storage_files_t::iterator iter; + int64_t totaldisksize=0; + for (iter = sfs_.begin(); iter < sfs_.end(); iter++) + { + StorageFile *sf = *iter; + + dprintf("storage: getdisksize: statting %s\n", sf->GetOSPathName().c_str() ); + + int64_t fsize = file_size_by_path_utf8( sf->GetOSPathName().c_str() ); + if( fsize < 0) + { + dprintf("%s %s storage: getdisksize: cannot stat file %s\n", tintstr(), roothashhex().c_str(), sf->GetOSPathName().c_str() ); + return fsize; + } + else + totaldisksize += fsize; + } + + dprintf("storage: getdisksize: total already sized is %lld\n", totaldisksize ); + + return totaldisksize; +} + + +int64_t Storage::GetMinimalReservedSize() +{ + if (state_ == STOR_STATE_SINGLE_FILE) + { + return 0; + } + else if (state_ != STOR_STATE_MFSPEC_COMPLETE) + return -1; + + StorageFile *sf = sfs_[0]; + return sf->GetSize(); +} + + +int Storage::ResizeReserved(int64_t size) +{ + // Arno, 2012-05-24: File allocation slow on Win32 without sparse files, + // make this detectable. + if (alloc_cb_ != NULL) + { + alloc_cb_(transfer_fd_,bin_t::NONE); + alloc_cb_ = NULL; // One time callback + } + + if (state_ == STOR_STATE_SINGLE_FILE) + { + dprintf("%s %s storage: Resizing single file %d to %lld\n", tintstr(), roothashhex().c_str(), single_fd_, size); + return file_resize(single_fd_,size); + } + else if (state_ == STOR_STATE_INIT) + { + dprintf("%s %s storage: Postpone resize to %lld\n", tintstr(), roothashhex().c_str(), size); + reserved_size_ = size; + return 0; + } + else if (state_ != STOR_STATE_MFSPEC_COMPLETE) + return -1; + + // MULTIFILE + if (size > GetReservedSize()) + { + dprintf("%s %s storage: Resizing multi file to %lld\n", tintstr(), roothashhex().c_str(), size); + + // Resize files to wanted size, so pread() / pwrite() works for all offsets. 
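+        // [editor note] Each StorageFile already carries its final start/size
+        // from the spec, so this loop just grows the files on disk to those
+        // bounds.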
+ storage_files_t::iterator iter; + for (iter = sfs_.begin(); iter < sfs_.end(); iter++) + { + StorageFile *sf = *iter; + int ret = sf->ResizeReserved(); + if (ret < 0) + return ret; + } + } + else + dprintf("%s %s storage: Resize multi-file to <= %lld, ignored\n", tintstr(), roothashhex().c_str(), size); + + return 0; +} + + +std::string Storage::spec2ospn(std::string specpn) +{ + std::string dest = specpn; + // compat.h I/O layer does UTF-8 to OS encoding + if (MULTIFILE_PATHNAME_FILE_SEP != FILE_SEP) + { + // Replace OS filesep with spec + swift::stringreplace(dest,MULTIFILE_PATHNAME_FILE_SEP,FILE_SEP); + } + return dest; +} + +std::string Storage::os2specpn(std::string ospn) +{ + std::string dest = ospn; + // compat.h I/O layer does OS to UTF-8 encoding + if (MULTIFILE_PATHNAME_FILE_SEP != FILE_SEP) + { + // Replace OS filesep with spec + swift::stringreplace(dest,FILE_SEP,MULTIFILE_PATHNAME_FILE_SEP); + } + return dest; +} + + + +/* + * StorageFile + */ + + + +StorageFile::StorageFile(std::string specpath, int64_t start, int64_t size, std::string ospath) : + Operational(), + fd_(-1) +{ + spec_pathname_ = specpath; + start_ = start; + end_ = start+size-1; + os_pathname_ = ospath; + + //fprintf(stderr,"StorageFile: os_pathname_ is %s\n", os_pathname_.c_str() ); + + std::string normospath = os_pathname_; +#ifdef _WIN32 + swift::stringreplace(normospath,"\\\\","\\"); +#else + swift::stringreplace(normospath,"//","/"); +#endif + + // Handle subdirs, if not multifilespec.txt + if (start_ != 0 && normospath.find(FILE_SEP,0) != std::string::npos) + { + // Path contains dirs, make them + size_t i = 0; + while (true) + { + i = normospath.find(FILE_SEP,i+1); + if (i == std::string::npos) + break; + std::string path = normospath.substr(0,i); +#ifdef _WIN32 + if (path.size() == 2 && path[1] == ':') + // Windows drive spec, ignore + continue; +#endif + int ret = file_exists_utf8( path.c_str() ); + if (ret <= 0) + { + ret = mkdir_utf8(path.c_str()); + + //fprintf(stderr,"StorageFile: mkdir %s returns %d\n", path.c_str(), ret ); + + if (ret < 0) + { + SetBroken(); + return; + } + } + else if (ret == 1) + { + // Something already exists and it is not a dir + + dprintf("StorageFile: exists %s but is not dir %d\n", path.c_str(), ret ); + SetBroken(); + return; + } + } + } + + + // Open + fd_ = open_utf8(os_pathname_.c_str(),OPENFLAGS,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); + if (fd_<0) { + //print_error("storage: file: Could not open"); + dprintf("%s %s storage: file: Could not open %s\n", tintstr(), "0000000000000000000000000000000000000000", os_pathname_.c_str() ); + SetBroken(); + return; + } +} + +StorageFile::~StorageFile() +{ + if (fd_>=0) + { + close(fd_); + } +} + + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/swift.cpp tribler-6.2.0/Tribler/SwiftEngine/swift.cpp --- tribler-6.2.0/Tribler/SwiftEngine/swift.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/swift.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,780 @@ +/* + * swift.cpp + * swift the multiparty transport protocol + * + * Created by Victor Grishchenko on 2/15/10. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +#include +#include +#include "compat.h" +#include "swift.h" +#include +#include + +using namespace swift; + + +// Local constants +#define RESCAN_DIR_INTERVAL 30 // seconds +#define REPORT_INTERVAL 4 // seconds + +// Local prototypes +#define quit(...) 
{fprintf(stderr,__VA_ARGS__); exit(1); } +int HandleSwiftFile(std::string filename, Sha1Hash root_hash, std::string trackerargstr, bool printurl, std::string urlfilename, double *maxspeed); +int OpenSwiftFile(std::string filename, const Sha1Hash& hash, Address tracker, bool force_check_diskvshash, uint32_t chunk_size); +int OpenSwiftDirectory(std::string dirname, Address tracker, bool force_check_diskvshash, uint32_t chunk_size); + +void ReportCallback(int fd, short event, void *arg); +void EndCallback(int fd, short event, void *arg); +void RescanDirCallback(int fd, short event, void *arg); +int CreateMultifileSpec(std::string specfilename, int argc, char *argv[], int argidx); + +// Gateway stuff +bool InstallHTTPGateway(struct event_base *evbase,Address addr,uint32_t chunk_size, double *maxspeed); +bool InstallStatsGateway(struct event_base *evbase,Address addr); +bool InstallCmdGateway (struct event_base *evbase,Address cmdaddr,Address httpaddr); +bool HTTPIsSending(); +bool StatsQuit(); +void CmdGwUpdateDLStatesCallback(); + + +// Global variables +struct event evreport, evrescan, evend; +int single_fd = -1; +bool file_enable_checkpoint = false; +bool file_checkpointed = false; +bool report_progress = false; +bool quiet=false; +bool exitoncomplete=false; +bool httpgw_enabled=false,cmdgw_enabled=false; +// Gertjan fix +bool do_nat_test = false; +bool generate_multifile=false; + +std::string scan_dirname=""; +uint32_t chunk_size = SWIFT_DEFAULT_CHUNK_SIZE; +Address tracker; + +long long int cmdgw_report_counter=0; +long long int cmdgw_report_interval=1; // seconds + +// UNICODE: TODO, convert to std::string carrying UTF-8 arguments. Problem is +// a string based getopt_long type parser. +int utf8main (int argc, char** argv) +{ + static struct option long_options[] = + { + {"hash", required_argument, 0, 'h'}, + {"file", required_argument, 0, 'f'}, + {"dir", required_argument, 0, 'd'}, // SEEDDIR reuse + {"listen", required_argument, 0, 'l'}, + {"tracker", required_argument, 0, 't'}, + {"debug", no_argument, 0, 'D'}, + {"progress",no_argument, 0, 'p'}, + {"httpgw", required_argument, 0, 'g'}, + {"wait", optional_argument, 0, 'w'}, + {"nat-test",no_argument, 0, 'N'}, + {"statsgw", required_argument, 0, 's'}, // SWIFTPROC + {"cmdgw", required_argument, 0, 'c'}, // SWIFTPROC + {"destdir", required_argument, 0, 'o'}, // SWIFTPROC + {"uprate", required_argument, 0, 'u'}, // RATELIMIT + {"downrate",required_argument, 0, 'y'}, // RATELIMIT + {"checkpoint",no_argument, 0, 'H'}, + {"chunksize",required_argument, 0, 'z'}, // CHUNKSIZE + {"printurl", no_argument, 0, 'm'}, + {"urlfile", required_argument, 0, 'r'}, // should be optional arg to printurl, but win32 getopt don't grok + {"multifile",required_argument, 0, 'M'}, // MULTIFILE + {"zerosdir",required_argument, 0, 'e'}, // ZEROSTATE + {"dummy",no_argument, 0, 'j'}, // WIN32 + {"cmdgwint",required_argument, 0, 'C'}, // SWIFTPROC + {"filehex", required_argument, 0, '1'}, // SWIFTPROCUNICODE + {"urlfilehex",required_argument, 0, '2'}, // SWIFTPROCUNICODE + {"zerosdirhex",required_argument, 0, '3'}, // SWIFTPROCUNICODE + {"zerostimeout",required_argument, 0, 'T'}, // ZEROSTATE + {0, 0, 0, 0} + }; + + Sha1Hash root_hash; + std::string filename = "",destdir = "", trackerargstr= "", zerostatedir="", urlfilename=""; + bool printurl=false; + Address bindaddr; + Address httpaddr; + Address statsaddr; + Address cmdaddr; + tint wait_time = 0; + double maxspeed[2] = {DBL_MAX,DBL_MAX}; + tint zerostimeout = TINT_NEVER; + + LibraryInit(); + Channel::evbase = 
event_base_new(); + + int c,n; + while ( -1 != (c = getopt_long (argc, argv, ":h:f:d:l:t:D:pg:s:c:o:u:y:z:wBNHmM:e:r:jC:1:2:3:T:", long_options, 0)) ) { + switch (c) { + case 'h': + if (strlen(optarg)!=40) + quit("SHA1 hash must be 40 hex symbols\n"); + root_hash = Sha1Hash(true,optarg); // FIXME ambiguity + if (root_hash==Sha1Hash::ZERO) + quit("SHA1 hash must be 40 hex symbols\n"); + break; + case 'f': + filename = strdup(optarg); + break; + case 'd': + scan_dirname = strdup(optarg); + break; + case 'l': + bindaddr = Address(optarg); + if (bindaddr==Address()) + quit("address must be hostname:port, ip:port or just port\n"); + wait_time = TINT_NEVER; + break; + case 't': + tracker = Address(optarg); + trackerargstr = strdup(optarg); + if (tracker==Address()) + quit("address must be hostname:port, ip:port or just port\n"); + break; + case 'D': + Channel::debug_file = optarg ? fopen_utf8(optarg,"a") : stderr; + break; + // Arno hack: get opt diff Win32 doesn't allow -D without arg + case 'B': + fprintf(stderr,"SETTING DEBUG TO STDOUT\n"); + Channel::debug_file = stderr; + break; + case 'p': + report_progress = true; + break; + case 'g': + httpgw_enabled = true; + httpaddr = Address(optarg); + wait_time = TINT_NEVER; // seed + break; + case 'w': + if (optarg) { + char unit = 'u'; + if (sscanf(optarg,"%lli%c",&wait_time,&unit)!=2) + quit("time format: 1234[umsMHD], e.g. 1M = one minute\n"); + + switch (unit) { + case 'D': wait_time *= 24; + case 'H': wait_time *= 60; + case 'M': wait_time *= 60; + case 's': wait_time *= 1000; + case 'm': wait_time *= 1000; + case 'u': break; + default: quit("time format: 1234[umsMHD], e.g. 1D = one day\n"); + } + } else + wait_time = TINT_NEVER; + break; + case 'N': // Gertjan fix + do_nat_test = true; + break; + case 's': // SWIFTPROC + statsaddr = Address(optarg); + if (statsaddr==Address()) + quit("address must be hostname:port, ip:port or just port\n"); + break; + case 'c': // SWIFTPROC + cmdgw_enabled = true; + cmdaddr = Address(optarg); + if (cmdaddr==Address()) + quit("address must be hostname:port, ip:port or just port\n"); + wait_time = TINT_NEVER; // seed + break; + case 'o': // SWIFTPROC + destdir = strdup(optarg); // UNICODE + break; + case 'u': // RATELIMIT + n = sscanf(optarg,"%lf",&maxspeed[DDIR_UPLOAD]); + if (n != 1) + quit("uprate must be KiB/s as float\n"); + maxspeed[DDIR_UPLOAD] *= 1024.0; + break; + case 'y': // RATELIMIT + n = sscanf(optarg,"%lf",&maxspeed[DDIR_DOWNLOAD]); + if (n != 1) + quit("downrate must be KiB/s as float\n"); + maxspeed[DDIR_DOWNLOAD] *= 1024.0; + break; + case 'H': //CHECKPOINT + file_enable_checkpoint = true; + break; + case 'z': // CHUNKSIZE + n = sscanf(optarg,"%i",&chunk_size); + if (n != 1) + quit("chunk size must be bytes as int\n"); + break; + case 'm': // printurl + printurl = true; + quiet = true; + wait_time = 0; + break; + case 'r': + urlfilename = strdup(optarg); + break; + case 'M': // MULTIFILE + filename = strdup(optarg); + generate_multifile = true; + break; + case 'e': // ZEROSTATE + zerostatedir = strdup(optarg); // UNICODE + wait_time = TINT_NEVER; // seed + break; + case 'j': // WIN32 + break; + case 'C': // SWIFTPROC + if (sscanf(optarg,"%lli",&cmdgw_report_interval)!=1) + quit("report interval must be int\n"); + break; + case '1': // SWIFTPROCUNICODE + // Swift on Windows expects command line arguments as UTF-16. + // When swift is run with Python's popen, however, popen + // doesn't allow us to pass params in UTF-16, hence workaround. 
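+            // [editor note] Hex is ASCII-safe, so the UTF-8 bytes survive the
+            // Windows command line unchanged; hex2bin() below restores them.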
+ // Format = hex encoded UTF-8 + filename = hex2bin(strdup(optarg)); + break; + case '2': // SWIFTPROCUNICODE + urlfilename = hex2bin(strdup(optarg)); + break; + case '3': // ZEROSTATE // SWIFTPROCUNICODE + zerostatedir = hex2bin(strdup(optarg)); + break; + case 'T': // ZEROSTATE + double t=0.0; + n = sscanf(optarg,"%lf",&t); + if (n != 1) + quit("zerostimeout must be seconds as float\n"); + zerostimeout = t * TINT_SEC; + break; + } + + } // arguments parsed + + + // Change dir to destdir, if set, or to tempdir if HTTPGW + if (destdir == "") { + if (httpgw_enabled) { + std::string dd = gettmpdir_utf8(); + chdir_utf8(dd); + } + } + else + chdir_utf8(destdir); + + if (httpgw_enabled) + fprintf(stderr,"CWD %s\n",getcwd_utf8().c_str() ); + + if (bindaddr!=Address()) { // seeding + if (Listen(bindaddr)<=0) + quit("cant listen to %s\n",bindaddr.str()) + } else if (tracker!=Address() || httpgw_enabled || cmdgw_enabled) { // leeching + evutil_socket_t sock = INVALID_SOCKET; + for (int i=0; i<=10; i++) { + bindaddr = Address((uint32_t)INADDR_ANY,0); + sock = Listen(bindaddr); + if (sock>0) + break; + if (i==10) + quit("cant listen on %s\n",bindaddr.str()); + } + if (!quiet) + fprintf(stderr,"swift: My listen port is %d\n", BoundAddress(sock).port() ); + } + + if (tracker!=Address() && !printurl) + SetTracker(tracker); + + if (httpgw_enabled) + InstallHTTPGateway(Channel::evbase,httpaddr,chunk_size,maxspeed); + if (cmdgw_enabled) + InstallCmdGateway(Channel::evbase,cmdaddr,httpaddr); + + // TRIALM36: Allow browser to retrieve stats via AJAX and as HTML page + if (statsaddr != Address()) + InstallStatsGateway(Channel::evbase,statsaddr); + + // ZEROSTATE + ZeroState *zs = ZeroState::GetInstance(); + zs->SetContentDir(zerostatedir); + zs->SetConnectTimeout(zerostimeout); + + + if (!cmdgw_enabled) + { + int ret = -1; + if (!generate_multifile) + { + if (filename != "" || root_hash != Sha1Hash::ZERO) { + + // Single file + ret = HandleSwiftFile(filename,root_hash,trackerargstr,printurl,urlfilename,maxspeed); + } + else if (scan_dirname != "") + ret = OpenSwiftDirectory(scan_dirname,Address(),false,chunk_size); + else + ret = -1; + } + else + { + // MULTIFILE + // Generate multi-file spec + ret = CreateMultifileSpec(filename,argc,argv,optind); //optind is global var points to first non-opt cmd line argument + if (ret < 0) + quit("Cannot generate multi-file spec") + else + // Calc roothash + ret = HandleSwiftFile(filename,root_hash,trackerargstr,printurl,urlfilename,maxspeed); + } + + // For testing + if (httpgw_enabled || zerostatedir != "") + ret = 0; + + // No file/dir nor HTTP gateway nor CMD gateway, will never know what to swarm + if (ret == -1) { + fprintf(stderr,"Usage:\n"); + fprintf(stderr," -h, --hash\troot Merkle hash for the transmission\n"); + fprintf(stderr," -f, --file\tname of file to use (root hash by default)\n"); + fprintf(stderr," -l, --listen\t[ip:|host:]port to listen to (default: random)\n"); + fprintf(stderr," -t, --tracker\t[ip:|host:]port of the tracker (default: none)\n"); + fprintf(stderr," -D, --debug\tfile name for debugging logs (default: stdout)\n"); + fprintf(stderr," -B\tdebugging logs to stdout (win32 hack)\n"); + fprintf(stderr," -p, --progress\treport transfer progress\n"); + fprintf(stderr," -g, --httpgw\t[ip:|host:]port to bind HTTP content gateway to (no default)\n"); + fprintf(stderr," -s, --statsgw\t[ip:|host:]port to bind HTTP stats listen socket to (no default)\n"); + fprintf(stderr," -c, --cmdgw\t[ip:|host:]port to bind CMD listen socket to (no default)\n"); + 
fprintf(stderr," -o, --destdir\tdirectory for saving data (default: none)\n"); + fprintf(stderr," -u, --uprate\tupload rate limit in KiB/s (default: unlimited)\n"); + fprintf(stderr," -y, --downrate\tdownload rate limit in KiB/s (default: unlimited)\n"); + fprintf(stderr," -w, --wait\tlimit running time, e.g. 1[DHMs] (default: infinite with -l, -g)\n"); + fprintf(stderr," -H, --checkpoint\tcreate checkpoint of file when complete for fast restart\n"); + fprintf(stderr," -z, --chunksize\tchunk size in bytes (default: %d)\n", SWIFT_DEFAULT_CHUNK_SIZE); + fprintf(stderr," -m, --printurl\tcompose URL from tracker, file and chunksize\n"); + fprintf(stderr," -M, --multifile\tcreate multi-file spec with given files\n"); + return 1; + } + } + + // Arno, 2012-01-04: Allow download and quit mode + if (single_fd != -1 && root_hash != Sha1Hash::ZERO && wait_time == 0) { + wait_time = TINT_NEVER; + exitoncomplete = true; + } + + // End after wait_time + if ((long)wait_time > 0) { + evtimer_assign(&evend, Channel::evbase, EndCallback, NULL); + evtimer_add(&evend, tint2tv(wait_time)); + } + + // Enter mainloop, if daemonizing + if (wait_time == TINT_NEVER || (long)wait_time > 0) { + // Arno: always, for statsgw, rate control, etc. + evtimer_assign(&evreport, Channel::evbase, ReportCallback, NULL); + evtimer_add(&evreport, tint2tv(REPORT_INTERVAL*TINT_SEC)); + + + // Arno: + if (scan_dirname != "") { + evtimer_assign(&evrescan, Channel::evbase, RescanDirCallback, NULL); + evtimer_add(&evrescan, tint2tv(RESCAN_DIR_INTERVAL*TINT_SEC)); + } + + + fprintf(stderr,"swift: Mainloop\n"); + // Enter libevent mainloop + event_base_dispatch(Channel::evbase); + + // event_base_loopexit() was called, shutting down + } + + // Arno, 2012-01-03: Close all transfers + for (int i=0; ifd()); + } + + if (Channel::debug_file) + fclose(Channel::debug_file); + + swift::Shutdown(); + + return 0; +} + + +int HandleSwiftFile(std::string filename, Sha1Hash root_hash, std::string trackerargstr, bool printurl, std::string urlfilename, double *maxspeed) +{ + if (root_hash!=Sha1Hash::ZERO && filename == "") + filename = strdup(root_hash.hex().c_str()); + + single_fd = OpenSwiftFile(filename,root_hash,Address(),false,chunk_size); + if (single_fd < 0) + quit("cannot open file %s",filename.c_str()); + if (printurl) { + + FILE *fp = stdout; + if (urlfilename != "") + { + fp = fopen_utf8(urlfilename.c_str(),"wb"); + if (!fp) + { + print_error("cannot open file to write tswift URL to"); + quit("cannot open URL file %s",urlfilename.c_str()); + } + } + + if (swift::Complete(single_fd) == 0) + quit("cannot open empty file %s",filename.c_str()); + + std::ostringstream oss; + oss << "tswift:"; + if (trackerargstr != "") + oss << "//" << trackerargstr; + oss << "/" << RootMerkleHash(single_fd).hex(); + if (chunk_size != SWIFT_DEFAULT_CHUNK_SIZE) + oss << "$" << chunk_size; + oss << "\n"; + + std::stringbuf *pbuf=oss.rdbuf(); + if (pbuf == NULL) + print_error("cannot create URL"); + int ret = 0; + ret = fprintf(fp,"%s", pbuf->str().c_str()); + if (ret <0) + print_error("cannot write URL"); + + if (urlfilename != "") + fclose(fp); + } + else + { + printf("Root hash: %s\n", RootMerkleHash(single_fd).hex().c_str()); + fflush(stdout); // For testing + } + + if (printurl || file_enable_checkpoint) + { + // Arno, 2012-01-04: LivingLab: Create checkpoint such that content + // can be copied to scanned dir and quickly loaded + swift::Checkpoint(single_fd); + } + + // RATELIMIT + FileTransfer *ft = FileTransfer::file(single_fd); + 
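+    // maxspeed[] was parsed from --uprate/--downrate and converted from KiB/s
+    // to bytes/s during option parsing above.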
ft->SetMaxSpeed(DDIR_DOWNLOAD,maxspeed[DDIR_DOWNLOAD]); + ft->SetMaxSpeed(DDIR_UPLOAD,maxspeed[DDIR_UPLOAD]); + + return single_fd; +} + + +int OpenSwiftFile(std::string filename, const Sha1Hash& hash, Address tracker, bool force_check_diskvshash, uint32_t chunk_size) +{ + std::string binmap_filename = filename; + binmap_filename.append(".mbinmap"); + + // Arno, 2012-01-03: Hack to discover root hash of a file on disk, such that + // we don't load it twice while rescanning a dir of content. + MmapHashTree *ht = new MmapHashTree(true,binmap_filename); + + // fprintf(stderr,"swift: parsedir: File %s may have hash %s\n", filename, ht->root_hash().hex().c_str() ); + + int fd = swift::Find(ht->root_hash()); + delete ht; + if (fd == -1) { + if (!quiet) + fprintf(stderr,"swift: parsedir: Opening %s\n", filename.c_str()); + + fd = swift::Open(filename,hash,tracker,force_check_diskvshash,true,chunk_size); + } + else if (!quiet) + fprintf(stderr,"swift: parsedir: Ignoring loaded %s\n", filename.c_str() ); + return fd; +} + + +int OpenSwiftDirectory(std::string dirname, Address tracker, bool force_check_diskvshash, uint32_t chunk_size) +{ + DirEntry *de = opendir_utf8(dirname); + if (de == NULL) + return -1; + + while(1) + { + if (!(de->isdir_ || de->filename_.rfind(".mhash") != std::string::npos || de->filename_.rfind(".mbinmap") != std::string::npos)) + { + // Not dir, or metafile + std::string path = dirname; + path.append(FILE_SEP); + path.append(de->filename_); + int fd = OpenSwiftFile(path,Sha1Hash::ZERO,tracker,force_check_diskvshash,chunk_size); + if (fd >= 0) + Checkpoint(fd); + } + + DirEntry *newde = readdir_utf8(de); + delete de; + de = newde; + if (de == NULL) + break; + } + return 1; +} + + + +int CleanSwiftDirectory(std::string dirname) +{ + std::set delset; + std::vector::iterator iter; + for (iter=FileTransfer::files.begin(); iter!=FileTransfer::files.end(); iter++) + { + FileTransfer *ft = *iter; + if (ft != NULL) { + std::string filename = ft->GetStorage()->GetOSPathName(); + fprintf(stderr,"swift: clean: Checking %s\n", filename.c_str() ); + int res = file_exists_utf8( filename ); + if (res == 0) { + fprintf(stderr,"swift: clean: Missing %s\n", filename.c_str() ); + delset.insert(ft->fd()); + } + } + } + + std::set::iterator iiter; + for (iiter=delset.begin(); iiter!=delset.end(); iiter++) + { + int fd = *iiter; + fprintf(stderr,"swift: clean: Deleting transfer %d\n", fd ); + swift::Close(fd); + } + + return 1; +} + + + + + +void ReportCallback(int fd, short event, void *arg) { + // Called every second to print/calc some stats + // Arno, 2012-05-24: Why-oh-why, update NOW + Channel::Time(); + + if (single_fd >= 0) + { + if (report_progress) { + fprintf(stderr, + "%s %lli of %lli (seq %lli) %lli dgram %lli bytes up, " \ + "%lli dgram %lli bytes down\n", + IsComplete(single_fd ) ? 
"DONE" : "done", + Complete(single_fd), Size(single_fd), SeqComplete(single_fd), + Channel::global_dgrams_up, Channel::global_raw_bytes_up, + Channel::global_dgrams_down, Channel::global_raw_bytes_down ); + } + + FileTransfer *ft = FileTransfer::file(single_fd); + if (report_progress) { // TODO: move up + fprintf(stderr,"upload %lf\n",ft->GetCurrentSpeed(DDIR_UPLOAD)); + fprintf(stderr,"dwload %lf\n",ft->GetCurrentSpeed(DDIR_DOWNLOAD) ); + //fprintf(stderr,"npeers %d\n",ft->GetNumLeechers()+ft->GetNumSeeders() ); + } + // Update speed measurements such that they decrease when DL/UL stops + // Always + ft->OnRecvData(0); + ft->OnSendData(0); + + // CHECKPOINT + if (file_enable_checkpoint && !file_checkpointed && IsComplete(single_fd)) + { + std::string binmap_filename = ft->GetStorage()->GetOSPathName(); + binmap_filename.append(".mbinmap"); + fprintf(stderr,"swift: Complete, checkpointing %s\n", binmap_filename.c_str() ); + + if (swift::Checkpoint(single_fd) >= 0) + file_checkpointed = true; + } + + + if (exitoncomplete && IsComplete(single_fd)) + // Download and stop mode + event_base_loopexit(Channel::evbase, NULL); + + } + if (httpgw_enabled) + { + //fprintf(stderr,"."); + + // ARNOSMPTODO: Restore fail behaviour when used in SwarmPlayer 3000. + if (!HTTPIsSending()) { + // TODO + //event_base_loopexit(Channel::evbase, NULL); + return; + } + } + if (StatsQuit()) + { + // SwarmPlayer 3000: User click "Quit" button in webUI. + struct timeval tv; + tv.tv_sec = 1; + int ret = event_base_loopexit(Channel::evbase,&tv); + } + // SWIFTPROC + if (cmdgw_report_interval == 1 || ((cmdgw_report_counter % cmdgw_report_interval) == 0)) + CmdGwUpdateDLStatesCallback(); + + cmdgw_report_counter++; + + // Gertjan fix + // Arno, 2011-10-04: Temp disable + //if (do_nat_test) + // nat_test_update(); + + evtimer_add(&evreport, tint2tv(REPORT_INTERVAL*TINT_SEC)); +} + +void EndCallback(int fd, short event, void *arg) { + // Called when wait timer expires == fixed time daemon + event_base_loopexit(Channel::evbase, NULL); +} + + +void RescanDirCallback(int fd, short event, void *arg) { + + // SEEDDIR + // Rescan dir: CAREFUL: this is blocking, better prepare .m* files first + // by running swift separately and then copy content + *.m* to scanned dir, + // such that a fast restore from checkpoint is done. + // + OpenSwiftDirectory(scan_dirname,tracker,false,chunk_size); + + CleanSwiftDirectory(scan_dirname); + + evtimer_add(&evrescan, tint2tv(RESCAN_DIR_INTERVAL*TINT_SEC)); +} + + +#include + +// MULTIFILE +typedef std::vector > filelist_t; +int CreateMultifileSpec(std::string specfilename, int argc, char *argv[], int argidx) +{ + fprintf(stderr,"CreateMultiFileSpec: %s nfiles %d\n", specfilename.c_str(), argc-argidx ); + + filelist_t filelist; + + + // MULTIFILE TODO: if arg is a directory, include all files + + + // 1. Make list of files + for (int i=argidx; i\n", spec.str().c_str() ); + + // 6. 
Write to specfile + FILE *fp = fopen_utf8(specfilename.c_str(),"wb"); + int ret = fwrite(spec.str().c_str(),sizeof(char),spec.str().length(),fp); + if (ret < 0) + print_error("cannot write multi-file spec"); + fclose(fp); + + return ret; +} + + +#ifdef _WIN32 + +// UTF-16 version of app entry point for console Windows-apps +int wmain( int wargc, wchar_t *wargv[ ], wchar_t *envp[ ] ) +{ + char **utf8args = (char **)malloc(wargc*sizeof(char *)); + for (int i=0; i +#include +#include +#include +#include +#include +#include + +#include "compat.h" +#include +#include +#include +#include "bin.h" +#include "binmap.h" +#include "hashtree.h" +#include "avgspeed.h" +// Arno, 2012-05-21: MacOS X has an Availability.h :-( +#include "avail.h" + + +namespace swift { + +#define SWIFT_MAX_UDP_OVER_ETH_PAYLOAD (1500-20-8) +// Arno: Maximum size of non-DATA messages in a UDP packet we send. +#define SWIFT_MAX_NONDATA_DGRAM_SIZE (SWIFT_MAX_UDP_OVER_ETH_PAYLOAD-SWIFT_DEFAULT_CHUNK_SIZE-1-4) +// Arno: Maximum size of a UDP packet we send. Note: depends on CHUNKSIZE 8192 +#define SWIFT_MAX_SEND_DGRAM_SIZE (SWIFT_MAX_NONDATA_DGRAM_SIZE+1+4+8192) +// Arno: Maximum size of a UDP packet we are willing to accept. Note: depends on CHUNKSIZE 8192 +#define SWIFT_MAX_RECV_DGRAM_SIZE (SWIFT_MAX_SEND_DGRAM_SIZE*2) + +#define layer2bytes(ln,cs) (uint64_t)( ((double)cs)*pow(2.0,(double)ln)) +#define bytes2layer(bn,cs) (int)log2( ((double)bn)/((double)cs) ) + +// Arno, 2011-12-22: Enable Riccardo's VodPiecePicker +#define ENABLE_VOD_PIECEPICKER 1 + +#define SWIFT_URI_SCHEME "tswift" + + +/** IPv4 address, just a nice wrapping around struct sockaddr_in. */ + struct Address { + struct sockaddr_in addr; + static uint32_t LOCALHOST; + void set_port (uint16_t port) { + addr.sin_port = htons(port); + } + void set_port (const char* port_str) { + int p; + if (sscanf(port_str,"%i",&p)) + set_port(p); + } + void set_ipv4 (uint32_t ipv4) { + addr.sin_addr.s_addr = htonl(ipv4); + } + void set_ipv4 (const char* ipv4_str) ; + //{ inet_aton(ipv4_str,&(addr.sin_addr)); } + void clear () { + memset(&addr,0,sizeof(struct sockaddr_in)); + addr.sin_family = AF_INET; + } + Address() { + clear(); + } + Address(const char* ip, uint16_t port) { + clear(); + set_ipv4(ip); + set_port(port); + } + Address(const char* ip_port); + Address(uint16_t port) { + clear(); + set_ipv4((uint32_t)INADDR_ANY); + set_port(port); + } + Address(uint32_t ipv4addr, uint16_t port) { + clear(); + set_ipv4(ipv4addr); + set_port(port); + } + Address(const struct sockaddr_in& address) : addr(address) {} + uint32_t ipv4 () const { return ntohl(addr.sin_addr.s_addr); } + uint16_t port () const { return ntohs(addr.sin_port); } + operator sockaddr_in () const {return addr;} + bool operator == (const Address& b) const { + return addr.sin_family==b.addr.sin_family && + addr.sin_port==b.addr.sin_port && + addr.sin_addr.s_addr==b.addr.sin_addr.s_addr; + } + const char* str () const { + // Arno, 2011-10-04: not thread safe, replace. + static char rs[4][32]; + static int i; + i = (i+1) & 3; + sprintf(rs[i],"%i.%i.%i.%i:%i",ipv4()>>24,(ipv4()>>16)&0xff, + (ipv4()>>8)&0xff,ipv4()&0xff,port()); + return rs[i]; + } + const char* ipv4str () const { + // Arno, 2011-10-04: not thread safe, replace. 
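+        // [editor note] Same trick as str() above: four rotating static
+        // buffers, so up to four results stay valid at once (e.g. several
+        // calls in one printf), but never across threads.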
+            static char rs[4][32];
+            static int i;
+            i = (i+1) & 3;
+            sprintf(rs[i],"%i.%i.%i.%i",ipv4()>>24,(ipv4()>>16)&0xff,
+                    (ipv4()>>8)&0xff,ipv4()&0xff);
+            return rs[i];
+        }
+        bool operator != (const Address& b) const { return !(*this==b); }
+        bool is_private() const {
+            // TODO IPv6
+            uint32_t no = ipv4(); uint8_t no0 = no>>24,no1 = (no>>16)&0xff;
+            if (no0 == 10) return true;
+            else if (no0 == 172 && no1 >= 16 && no1 <= 31) return true;
+            else if (no0 == 192 && no1 == 168) return true;
+            else return false;
+        }
+    };
+
+// Arno, 2011-10-03: Use libevent callback functions, no on_error?
+#define sockcb_t event_callback_fn
+    struct sckrwecb_t {
+        sckrwecb_t (evutil_socket_t s=0, sockcb_t mr=NULL, sockcb_t mw=NULL,
+                    sockcb_t oe=NULL) :
+            sock(s), may_read(mr), may_write(mw), on_error(oe) {}
+        evutil_socket_t sock;
+        sockcb_t may_read;
+        sockcb_t may_write;
+        sockcb_t on_error;
+    };
+
+    struct now_t {
+        static tint now;
+    };
+
+#define NOW now_t::now
+
+    /** tintbin is basically a pair<tint,bin_t> plus some nice operators.
+        Most frequently used in different queues (acknowledgements, requests,
+        etc). */
+    struct tintbin {
+        tint time;
+        bin_t bin;
+        tintbin(const tintbin& b) : time(b.time), bin(b.bin) {}
+        tintbin() : time(TINT_NEVER), bin(bin_t::NONE) {}
+        tintbin(tint time_, bin_t bin_) : time(time_), bin(bin_) {}
+        tintbin(bin_t bin_) : time(NOW), bin(bin_) {}
+        bool operator < (const tintbin& b) const
+            { return time > b.time; }
+        bool operator == (const tintbin& b) const
+            { return time==b.time && bin==b.bin; }
+        bool operator != (const tintbin& b) const
+            { return !(*this==b); }
+    };
+
+    typedef std::deque<tintbin> tbqueue;
+    typedef std::deque<bin_t> binqueue;
+    typedef Address Address;
+
+    /** A heap (priority queue) for timestamped bin numbers (tintbins). */
+    class tbheap {
+        tbqueue data_;
+    public:
+        int size () const { return data_.size(); }
+        bool is_empty () const { return data_.empty(); }
+        tintbin pop() {
+            tintbin ret = data_.front();
+            std::pop_heap(data_.begin(),data_.end());
+            data_.pop_back();
+            return ret;
+        }
+        void push(const tintbin& tb) {
+            data_.push_back(tb);
+            push_heap(data_.begin(),data_.end());
+        }
+        const tintbin& peek() const {
+            return data_.front();
+        }
+    };
+
+    typedef std::pair<std::string,std::string> stringpair;
+    typedef std::map<std::string,std::string> parseduri_t;
+    bool ParseURI(std::string uri,parseduri_t &map);
+
+    /** swift protocol message types; these are used on the wire. */
+    typedef enum {
+        SWIFT_HANDSHAKE = 0,
+        SWIFT_DATA = 1,
+        SWIFT_ACK = 2,
+        SWIFT_HAVE = 3,
+        SWIFT_HASH = 4,
+        SWIFT_PEX_ADD = 5,
+        SWIFT_PEX_REQ = 6,
+        SWIFT_SIGNED_HASH = 7,
+        SWIFT_HINT = 8,
+        SWIFT_MSGTYPE_RCVD = 9,
+        SWIFT_RANDOMIZE = 10, //FRAGRAND
+        SWIFT_VERSION = 11, // Arno, 2011-10-19: TODO to match RFC-rev-03
+        SWIFT_MESSAGE_COUNT = 12
+    } messageid_t;
+
+    typedef enum {
+        DDIR_UPLOAD,
+        DDIR_DOWNLOAD
+    } data_direction_t;
+
+    class PiecePicker;
+    //class CongestionController; // Arno: Currently part of Channel. See ::NextSendTime
+    class PeerSelector;
+    class Channel;
+    typedef std::vector<Channel *> channels_t;
+    typedef void (*ProgressCallback) (int transfer, bin_t bin);
+    class Storage;
+
+    /** A class representing a single file transfer. */
+    class FileTransfer : public Operational {
+
+    public:
+
+        /** A constructor. Open/submit/retrieve a file.
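+         *
+         *  Illustrative call (a sketch added for clarity, not from the
+         *  original source; the file name is hypothetical, the remaining
+         *  arguments are the defaults from the declaration below):
+         *
+         *      FileTransfer *ft = new FileTransfer("movie.ogv", Sha1Hash::ZERO,
+         *                                          true, true,
+         *                                          SWIFT_DEFAULT_CHUNK_SIZE, false);
+         *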
+         *  @param file_name    the name of the file
+         *  @param root_hash    the root hash of the file; zero hash if the file
+         *                      is newly submitted
+         *  @param force_check_diskvshash  whether to force a check of disk versus hashes
+         *  @param check_netwvshash        whether to hash-check each chunk on receipt
+         *  @param chunk_size   size of chunk to use
+         *  @param zerostate    whether to serve the hashes + content directly from disk
+         */
+        FileTransfer(std::string file_name, const Sha1Hash& root_hash=Sha1Hash::ZERO, bool force_check_diskvshash=true, bool check_netwvshash=true, uint32_t chunk_size=SWIFT_DEFAULT_CHUNK_SIZE, bool zerostate=false);
+
+        /** Close everything. */
+        ~FileTransfer();
+
+        /** While we need to feed ACKs to every peer, we try to (1) avoid
+            unnecessary duplication and (2) keep minimal state. Thus,
+            we use a rotating queue of bin completion events. */
+        //bin64_t RevealAck (uint64_t& offset);
+        /** Rotating queue read for channels of this transmission. */
+        // Jori
+        int RevealChannel (int& i);
+        // Gertjan
+        int RandomChannel (int own_id);
+
+        /** Find transfer by the root hash. */
+        static FileTransfer* Find (const Sha1Hash& hash);
+        /** Find transfer by the file descriptor. */
+        static FileTransfer* file (int fd) {
+            return fd<files.size() ? files[fd] : NULL;
+        }
+        /** The binmap of data already retrieved and checked. */
+        binmap_t& ack_out () { return hashtree_->ack_out(); }
+        /** Piece picking strategy used by this transfer. */
+        PiecePicker& picker () { return *picker_; }
+        /** The number of channels working for this transfer. */
+        int channel_count () const { return hs_in_.size(); }
+        /** Hash tree checked file; all the hashes and data are kept here. */
+        HashTree * hashtree() { return hashtree_; }
+        /** File descriptor for the data file. */
+        int fd () const { return fd_; }
+        /** Root SHA1 hash of the transfer (and the data file). */
+        const Sha1Hash& root_hash () const { return hashtree_->root_hash(); }
+        /** Ric: the availability in the swarm */
+        Availability& availability() { return *availability_; }
+
+        // RATELIMIT
+        /** Arno: Call when n bytes are received. */
+        void OnRecvData(int n);
+        /** Arno: Call when n bytes are sent. */
+        void OnSendData(int n);
+        /** Arno: Call when no bytes are sent due to rate limiting. */
+        void OnSendNoData();
+        /** Arno: Return the current speed for the given direction in bytes/s */
+        double GetCurrentSpeed(data_direction_t ddir);
+        /** Arno: Return the maximum speed for the given direction in bytes/s */
+        double GetMaxSpeed(data_direction_t ddir);
+        /** Arno: Set the maximum speed for the given direction in bytes/s */
+        void SetMaxSpeed(data_direction_t ddir, double m);
+        /** Arno: Return the number of non-seeders we are currently channeled with. */
+        uint32_t GetNumLeechers();
+        /** Arno: Return the number of seeders we are currently channeled with. */
+        uint32_t GetNumSeeders();
+        /** Arno: Return the set of Channels for this transfer. MORESTATS */
+        channels_t GetChannels() { return mychannels_; }
+
+        /** Arno: set the tracker for this transfer. Resetting it won't kill
+         * any existing connections.
+         */
+        void SetTracker(Address tracker) { tracker_ = tracker; }
+
+        /** Arno: (Re)Connect to the tracker for this transfer, or the global Channel::tracker if not set */
+        void ConnectToTracker();
+
+        /** Arno: Reconnect to the tracker if there are no established peers and
+         * exponential backoff allows it.
+         */
+        void ReConnectToTrackerIfAllowed(bool hasestablishedpeers);
+
+        /** Arno: Return the Channel to peer "addr" that is not equal to "notc".
+         */
+        Channel * FindChannel(const Address &addr, Channel *notc);
+
+        // MULTIFILE
+        Storage * GetStorage() { return storage_; }
+
+        // SAFECLOSE
+        static void LibeventCleanCallback(int fd, short event, void *arg);
+
+        //ZEROSTATE
+        /** Returns whether this FileTransfer is running in zero-state mode,
+         * meaning that the hash tree is not mmapped into memory but read
+         * directly from disk, among other memory-saving measures.
+         */
+        bool IsZeroState() { return zerostate_; }
+
+        /** Add a peer to the set of addresses to connect to */
+        void AddPeer(Address &peer);
+
+        /** Check whether all components are still in working state */
+        void UpdateOperational();
+
+    protected:
+
+        HashTree* hashtree_;
+
+        /** Piece picker strategy. */
+        PiecePicker* picker_;
+
+        /** Channels working for this transfer. */
+        binqueue hs_in_; // Arno, 2011-10-03: Should really be a queue of channel IDs (=uint32_t)
+
+        /** Messages we are accepting. */
+        uint64_t cap_out_;
+
+        tint init_time_;
+
+        // Ric: PPPLUG
+        /** Availability in the swarm */
+        Availability* availability_;
+
+#define SWFT_MAX_TRANSFER_CB 8
+        ProgressCallback callbacks[SWFT_MAX_TRANSFER_CB];
+        uint8_t cb_agg[SWFT_MAX_TRANSFER_CB];
+        int cb_installed;
+
+        // RATELIMIT
+        channels_t mychannels_; // Arno, 2012-01-31: May be a duplicate of hs_in_
+        MovingAverageSpeed cur_speed_[2];
+        double max_speed_[2];
+        int speedzerocount_;
+
+        // SAFECLOSE
+        struct event evclean_;
+
+        Address tracker_; // Tracker for this transfer
+        tint tracker_retry_interval_;
+        tint tracker_retry_time_;
+
+        // MULTIFILE
+        Storage *storage_;
+        int fd_;
+
+        //ZEROSTATE
+        bool zerostate_;
+
+    public:
+        void OnDataIn (bin_t pos);
+        // Gertjan fix: return bool
+        bool OnPexAddIn (const Address& addr);
+
+        static std::vector<FileTransfer*> files;
+
+
+        friend class Channel;
+        // Ric: maybe not really needed
+        friend class Availability;
+        friend uint64_t Size (int fdes);
+        friend bool IsComplete (int fdes);
+        friend uint64_t Complete (int fdes);
+        friend uint64_t SeqComplete (int fdes, int64_t offset);
+        friend int Open (const char* filename, const Sha1Hash& hash, Address tracker, bool force_check_diskvshash, bool check_netwvshash, uint32_t chunk_size);
+        friend void Close (int fd) ;
+        friend void AddProgressCallback (int transfer,ProgressCallback cb,uint8_t agg);
+        friend void RemoveProgressCallback (int transfer,ProgressCallback cb);
+        friend void ExternallyRetrieved (int transfer,bin_t piece);
+    };
+
+
+    /** PiecePicker implements some strategy of choosing (picking) what
+        to request next, given the possible range of choices:
+        data acknowledged by the peer minus data already retrieved.
+        May pick sequentially, do rarest-first, or work in some other way. */
+    class PiecePicker {
+    public:
+        virtual void Randomize (uint64_t twist) = 0;
+        /** The piece picking method itself.
+         *  @param offered    the data acknowledged by the peer
+         *  @param max_width  maximum number of packets to ask for
+         *  @param expires    (not used currently) when to consider the request expired
+         *  @return           the bin number to request */
+        virtual bin_t Pick (binmap_t& offered, uint64_t max_width, tint expires) = 0;
+        virtual void LimitRange (bin_t range) = 0;
+        virtual ~PiecePicker() {}
+        /** Updates the playback position for streaming piece picking.
+         *  @param offbin  bin number of the new playback position
+         *  @param whence  only SEEK_CUR supported */
+        virtual int Seek(bin_t offbin, int whence) = 0;
+    };
+
+
+    class PeerSelector { // Arno: partially unused
+    public:
+        virtual void AddPeer (const Address& addr, const Sha1Hash& root) = 0;
+        virtual Address GetPeer (const Sha1Hash& for_root) = 0;
+    };
+
+
+    /** swift channel's "control block"; channels loosely correspond to TCP
+        connections or FTP sessions; one channel is created for one file
+        being transferred between two peers. As we don't need buffers and
+        lots of other TCP stuff, sizeof(Channel+members) must be below 1K.
+        Normally, API users do not deal with this class. */
+    class Channel {
+
+    public:
+        Channel (FileTransfer* file, int socket=INVALID_SOCKET, Address peer=Address());
+        ~Channel();
+
+        typedef enum {
+            KEEP_ALIVE_CONTROL,
+            PING_PONG_CONTROL,
+            SLOW_START_CONTROL,
+            AIMD_CONTROL,
+            LEDBAT_CONTROL,
+            CLOSE_CONTROL
+        } send_control_t;
+
+        static Address tracker; // Global tracker for all transfers
+        struct event *evsend_ptr_; // Arno: timer per channel // SAFECLOSE
+        static struct event_base *evbase;
+        static struct event evrecv;
+        static const char* SEND_CONTROL_MODES[];
+
+        static tint epoch, start;
+        static uint64_t global_dgrams_up, global_dgrams_down, global_raw_bytes_up, global_raw_bytes_down, global_bytes_up, global_bytes_down;
+        static void CloseChannelByAddress(const Address &addr);
+
+        // SOCKMGMT
+        // Arno: channel is also a "singleton" class that manages all sockets
+        // for a swift process
+        static void LibeventSendCallback(int fd, short event, void *arg);
+        static void LibeventReceiveCallback(int fd, short event, void *arg);
+        static void RecvDatagram (evutil_socket_t socket); // Called by LibeventReceiveCallback
+        static int RecvFrom(evutil_socket_t sock, Address& addr, struct evbuffer *evb); // Called by RecvDatagram
+        static int SendTo(evutil_socket_t sock, const Address& addr, struct evbuffer *evb); // Called by Channel::Send()
+        static evutil_socket_t Bind(Address address, sckrwecb_t callbacks=sckrwecb_t());
+        static Address BoundAddress(evutil_socket_t sock);
+        static evutil_socket_t default_socket()
+            { return sock_count ?
sock_open[0].sock : INVALID_SOCKET; }
+
+        /** close the port */
+        static void CloseSocket(evutil_socket_t sock);
+        static void Shutdown ();
+        /** the current time */
+        static tint Time();
+
+        // Arno: Per-instance methods
+        void Recv (struct evbuffer *evb);
+        void Send (); // Called by LibeventSendCallback
+        void Close ();
+
+        void OnAck (struct evbuffer *evb);
+        void OnHave (struct evbuffer *evb);
+        bin_t OnData (struct evbuffer *evb);
+        void OnHint (struct evbuffer *evb);
+        void OnHash (struct evbuffer *evb);
+        void OnPexAdd (struct evbuffer *evb);
+        void OnHandshake (struct evbuffer *evb);
+        void OnRandomize (struct evbuffer *evb); //FRAGRAND
+        void AddHandshake (struct evbuffer *evb);
+        bin_t AddData (struct evbuffer *evb);
+        void AddAck (struct evbuffer *evb);
+        void AddHave (struct evbuffer *evb);
+        void AddHint (struct evbuffer *evb);
+        void AddUncleHashes (struct evbuffer *evb, bin_t pos);
+        void AddPeakHashes (struct evbuffer *evb);
+        void AddPex (struct evbuffer *evb);
+        void OnPexReq(void);
+        void AddPexReq(struct evbuffer *evb);
+        void BackOffOnLosses (float ratio=0.5);
+        tint SwitchSendControl (send_control_t control_mode);
+        tint NextSendTime ();
+        tint KeepAliveNextSendTime ();
+        tint PingPongNextSendTime ();
+        tint CwndRateNextSendTime ();
+        tint SlowStartNextSendTime ();
+        tint AimdNextSendTime ();
+        tint LedbatNextSendTime ();
+        /** Arno: return true if this peer has the complete file. May be fuzzy if Peak Hashes are not in yet */
+        bool IsComplete();
+        /** Arno: return the (UDP) port for this channel */
+        uint16_t GetMyPort();
+        bool IsDiffSenderOrDuplicate(Address addr, uint32_t chid);
+
+        static int MAX_REORDERING;
+        static tint TIMEOUT;
+        static tint MIN_DEV;
+        static tint MAX_SEND_INTERVAL;
+        static tint LEDBAT_TARGET;
+        static float LEDBAT_GAIN;
+        static tint LEDBAT_DELAY_BIN;
+        static bool SELF_CONN_OK;
+        static tint MAX_POSSIBLE_RTT;
+        static tint MIN_PEX_REQUEST_INTERVAL;
+        static FILE* debug_file;
+
+        const std::string id_string () const;
+        /** A channel is "established" if it has already sent and received packets. */
+        bool is_established () { return peer_channel_id_ && own_id_mentioned_; }
+        FileTransfer& transfer() { return *transfer_; }
+        HashTree * hashtree() { return transfer_->hashtree(); }
+        const Address& peer() const { return peer_; }
+        const Address& recv_peer() const { return recv_peer_; }
+        tint ack_timeout () {
+            tint dev = dev_avg_ < MIN_DEV ? MIN_DEV : dev_avg_;
+            tint tmo = rtt_avg_ + dev * 4;
+            return tmo < 30*TINT_SEC ? tmo : 30*TINT_SEC;
+        }
+        uint32_t id () const { return id_; }
+
+        // MORESTATS
+        uint64_t raw_bytes_up() { return raw_bytes_up_; }
+        uint64_t raw_bytes_down() { return raw_bytes_down_; }
+        uint64_t bytes_up() { return bytes_up_; }
+        uint64_t bytes_down() { return bytes_down_; }
+
+        static int DecodeID(int scrambled);
+        static int EncodeID(int unscrambled);
+        static Channel* channel(int i) {
+            return i<channels.size() ? channels[i] : NULL;
+        }
[...]
+    typedef std::vector<StorageFile *> storage_files_t;
+
+    /*
+     * Class representing the persistent storage layer. Supports a swarm
+     * stored as multiple files.
+     *
+     * This is implemented by storing a multi-file specification in chunk 0
+     * (and further chunks, if needed). This spec lists what other files the
+     * swarm contains and their sizes. E.g.
+     *
+     * META-INF-multifilespec.txt 113
+     * seeder/190557.ts 249798796
+     * seeder/berta.dat 2395920988
+     * seeder/bunny.ogg 166825767
+     *
+     * The concatenation of these files (starting with the multi-file spec
+     * with pseudo filename META-INF-multifilespec.txt) is the content of
+     * the swarm.
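+     *
+     * In the example above, chunk 0 of the swarm thus starts with the
+     * 113-byte spec itself, immediately followed by the first bytes of
+     * seeder/190557.ts, and so on in the listed order (an illustration
+     * added for clarity, derived from the sample spec above).
+     *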
+     */
+    class Storage : public Operational {
+
+    public:
+
+        static const std::string MULTIFILE_PATHNAME;
+        static const std::string MULTIFILE_PATHNAME_FILE_SEP;
+        static const int MULTIFILE_MAX_PATH = 2048;
+        static const int MULTIFILE_MAX_LINE = MULTIFILE_MAX_PATH+1+32+1;
+
+        typedef enum {
+            STOR_STATE_INIT,
+            STOR_STATE_MFSPEC_SIZE_KNOWN,
+            STOR_STATE_MFSPEC_COMPLETE,
+            STOR_STATE_SINGLE_FILE
+        } storage_state_t;
+
+        typedef std::vector<StorageFile *> storage_files_t;
+
+        /** Convert a multi-file spec filename (UTF-8 encoded Unicode) to an OS name, and vice versa. */
+        static std::string spec2ospn(std::string specpn);
+        static std::string os2specpn(std::string ospn);
+
+        /** Create Storage from the specified path and destination dir, in case the content turns out to be multi-file */
+        Storage(std::string ospathname, std::string destdir,int transferfd);
+        ~Storage();
+
+        /** UNIX pread approximation. Does change file pointer. Thread-safe if there are no concurrent writes */
+        ssize_t Read(void *buf, size_t nbyte, int64_t offset); // off_t not 64-bit dynamically on Win32
+
+        /** UNIX pwrite approximation. Does change file pointer. Is not thread-safe */
+        ssize_t Write(const void *buf, size_t nbyte, int64_t offset);
+
+        /** Link to HashTree */
+        void SetHashTree(HashTree *ht) { ht_ = ht; }
+
+        /** Size of content according to the multi-file spec, -1 if unknown or single file */
+        int64_t GetSizeFromSpec();
+
+        /** Size reserved for storage */
+        int64_t GetReservedSize();
+
+        /** 0 for single file, spec size for multi-file */
+        int64_t GetMinimalReservedSize();
+
+        /** Change the size reserved for storage */
+        int ResizeReserved(int64_t size);
+
+        /** Return the operating system path for this Storage */
+        std::string GetOSPathName() { return os_pathname_; }
+
+        /** Return the root hash of the content being stored */
+        std::string roothashhex() { if (ht_ == NULL) return "0000000000000000000000000000000000000000"; else return ht_->root_hash().hex(); }
+
+        /** Return the destination directory for this Storage */
+        std::string GetDestDir() { return destdir_; }
+
+        /** Whether Storage is ready to be used */
+        bool IsReady() { return state_ == STOR_STATE_SINGLE_FILE || state_ == STOR_STATE_MFSPEC_COMPLETE; }
+
+        /** Return the list of StorageFiles for this Storage, empty if not multi-file */
+        storage_files_t GetStorageFiles() { return sfs_; }
+
+        /** Add a one-time callback, fired when swift starts allocating disk space */
+        void AddOneTimeAllocationCallback(ProgressCallback cb) { alloc_cb_ = cb; }
+
+    protected:
+        storage_state_t state_;
+
+        std::string os_pathname_;
+        std::string destdir_;
+
+        /** HashTree this Storage is linked to */
+        HashTree *ht_;
+
+        int64_t spec_size_;
+
+        storage_files_t sfs_;
+        int single_fd_;
+        int64_t reserved_size_;
+        int64_t total_size_from_spec_;
+        StorageFile *last_sf_;
+
+        int transfer_fd_;
+        ProgressCallback alloc_cb_;
+
+        int WriteSpecPart(StorageFile *sf, const void *buf, size_t nbyte, int64_t offset);
+        std::pair WriteBuffer(StorageFile *sf, const void *buf, size_t nbyte, int64_t offset);
+        StorageFile * FindStorageFile(int64_t offset);
+        int ParseSpec(StorageFile *sf);
+        int OpenSingleFile();
+
+    };
+
+    class ZeroState
+    {
+    public:
+        ZeroState();
+        ~ZeroState();
+        static ZeroState *GetInstance();
+        void SetContentDir(std::string contentdir);
+        void SetConnectTimeout(tint timeout);
+        FileTransfer * Find(Sha1Hash &root_hash);
+
+        static void LibeventCleanCallback(int fd, short event, void *arg);
+
+    protected:
+        static ZeroState *__singleton;
+
+        struct event evclean_;
+        std::string contentdir_;
+
+        /* Arno,
2012-07-20: A very slow peer can keep a transfer alive
+           for a long time (the 3-minute channel close timeout is not
+           reached). This causes problems on Mac, where there are just 256
+           file descriptors per process, all of which can end up being
+           consumed.
+         */
+        tint connect_timeout_;
+    };
+
+
+    /*************** The top-level API ****************/
+    /** Start listening on a port. Returns the socket descriptor. */
+    int Listen (Address addr);
+    /** Stop listening on a port. */
+    void Shutdown (int sock_des=-1);
+
+    /** Open a file, start a transmission; fill it with content for a given
+        root hash and tracker (optional). If "force_check_diskvshash" is true, the
+        hashtree state will be (re)constructed from the file on disk (if any).
+        If not, Open() will try to reconstruct the hashtree state from
+        the .mhash and .mbinmap files on disk. .mhash files are created
+        automatically; .mbinmap files must be written by checkpointing the
+        transfer by calling FileTransfer::serialize(). If the reconstruction
+        fails, it will hashcheck anyway. The root hash is optional for new files,
+        or for files already hashchecked and checkpointed. If "check_netwvshash" is
+        false, no uncle hashes will be sent and no data will be verified against
+        them on receipt. In this mode, checking disk contents against hashes
+        no longer works on restarts, unless checkpoints are used.
+     */
+    int Open (std::string filename, const Sha1Hash& hash=Sha1Hash::ZERO,Address tracker=Address(), bool force_check_diskvshash=true, bool check_netwvshash=true, uint32_t chunk_size=SWIFT_DEFAULT_CHUNK_SIZE);
+    /** Get the root hash for the transmission. */
+    const Sha1Hash& RootMerkleHash (int file) ;
+    /** Close a file and a transmission. */
+    void Close (int fd) ;
+    /** Add a possible peer which participates in a given transmission. In case
+        the root hash is zero, the peer might be talked to regarding any transmission
+        (likely a tracker, cache or archive). */
+    void AddPeer (Address address, const Sha1Hash& root=Sha1Hash::ZERO);
+
+    /** UNIX pread approximation. Does change file pointer. Thread-safe if there are no concurrent writes */
+    ssize_t Read(int fd, void *buf, size_t nbyte, int64_t offset); // off_t not 64-bit dynamically on Win32
+
+    /** UNIX pwrite approximation. Does change file pointer. Is not thread-safe */
+    ssize_t Write(int fd, const void *buf, size_t nbyte, int64_t offset);
+
+    /** Seek, i.e., move the start of the interest window */
+    int Seek(int fd, int64_t offset, int whence);
+
+    /** Set the default tracker that is used when Open() is not passed a
+        tracker address. */
+    void SetTracker(const Address& tracker);
+
+    /** Returns the size of the file in bytes, 0 if unknown. Might be rounded up
+        to a kilobyte before the transmission is complete. */
+    uint64_t Size (int fdes);
+    /** Returns the amount of retrieved and verified data, in bytes.
+        A 100% complete transmission has Size()==Complete(). */
+    uint64_t Complete (int fdes);
+    bool IsComplete (int fdes);
+    /** Returns the number of bytes that are complete sequentially, starting from the
+        beginning, up to the first not-yet-retrieved packet.
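+
+        Worked example (an illustration added for clarity, with hypothetical
+        numbers): with 1024-byte chunks, if chunks 0..9 and chunk 12 have
+        been retrieved, SeqComplete() returns 10*1024 = 10240, while
+        Complete() returns 11*1024 = 11264.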
*/ + uint64_t SeqComplete(int fdes, int64_t offset=0); + /***/ + int Find (Sha1Hash hash); + /** Returns the number of bytes in a chunk for this transmission */ + uint32_t ChunkSize(int fdes); + + /** Get the address bound to the socket descriptor returned by Listen() */ + Address BoundAddress(evutil_socket_t sock); + + void AddProgressCallback (int transfer,ProgressCallback cb,uint8_t agg); + void RemoveProgressCallback (int transfer,ProgressCallback cb); + void ExternallyRetrieved (int transfer,bin_t piece); + + + /** Must be called by any client using the library */ + void LibraryInit(void); + + int evbuffer_add_string(struct evbuffer *evb, std::string str); + int evbuffer_add_8(struct evbuffer *evb, uint8_t b); + int evbuffer_add_16be(struct evbuffer *evb, uint16_t w); + int evbuffer_add_32be(struct evbuffer *evb, uint32_t i); + int evbuffer_add_64be(struct evbuffer *evb, uint64_t l); + int evbuffer_add_hash(struct evbuffer *evb, const Sha1Hash& hash); + + uint8_t evbuffer_remove_8(struct evbuffer *evb); + uint16_t evbuffer_remove_16be(struct evbuffer *evb); + uint32_t evbuffer_remove_32be(struct evbuffer *evb); + uint64_t evbuffer_remove_64be(struct evbuffer *evb); + Sha1Hash evbuffer_remove_hash(struct evbuffer* evb); + + const char* tintstr(tint t=0); + std::string sock2str (struct sockaddr_in addr); + #define SWIFT_MAX_CONNECTIONS 20 + + void nat_test_update(void); + + // Arno: Save transfer's binmap for zero-hashcheck restart + int Checkpoint(int fdes); + + // SOCKTUNNEL + void CmdGwTunnelUDPDataCameIn(Address srcaddr, uint32_t srcchan, struct evbuffer* evb); + void CmdGwTunnelSendUDP(struct evbuffer *evb); // for friendship with Channel + +} // namespace end + +// #define SWIFT_MUTE + +#ifndef SWIFT_MUTE +#define dprintf(...) do { if (Channel::debug_file) fprintf(Channel::debug_file,__VA_ARGS__); } while (0) +#define dflush() fflush(Channel::debug_file) +#else +#define dprintf(...) do {} while(0) +#define dflush() do {} while(0) +#endif +#define eprintf(...) 
fprintf(stderr,__VA_ARGS__) + +#endif diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/SConscript tribler-6.2.0/Tribler/SwiftEngine/tests/SConscript --- tribler-6.2.0/Tribler/SwiftEngine/tests/SConscript 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/SConscript 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,108 @@ +import sys +import os + +Import("DEBUG") +Import("env") +Import("libs") +Import("libpath") + + +if DEBUG and sys.platform == "win32": + libs = ['libswift','gtestd'] + libs # order is important, crypto needs to be last +else: + libs = ['libswift','gtest'] + libs # order is important, crypto needs to be last + +cpppath = env["CPPPATH"] +if sys.platform == "win32": + cpppath += '..;' +else: + cpppath += '..:' +env.Append(CPPPATH=cpppath) + +env.Program( + target='binstest2', + source=['binstest2.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='binstest3', + source=['binstest3.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='binstest4', + source=['binstest4.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='dgramtest', + source=['dgramtest.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='hashtest', + source=['hashtest.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +# Arno: must be rewritten to libevent +#env.Program( +# target='ledbattest', +# source=['ledbattest.cpp'], +# CPPPATH=cpppath, +# LIBS=libs, +# LIBPATH=libpath ) + +# Arno: must be rewritten to libevent +#if sys.platform != "win32": +# # Arno: Needs getopt +# env.Program( +# target='ledbattest2', +# source=['ledbattest2.cpp'], +# CPPPATH=cpppath, +# LIBS=libs, +# LIBPATH=libpath ) + +env.Program( + target='freemap', + source=['freemap.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='bin64test', + source=['bin64test.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='transfertest', + source=['transfertest.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='connecttest', + source=['connecttest.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) + +env.Program( + target='storagetest', + source=['storagetest.cpp'], + CPPPATH=cpppath, + LIBS=libs, + LIBPATH=libpath ) diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/bin64test.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/bin64test.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/bin64test.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/bin64test.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,81 @@ +/* + * bintest.cpp + * bin++ + * + * Created by Victor Grishchenko on 3/9/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ *
+ */
+#include "bin.h"
+#include "bin_utils.h"
+#include <gtest/gtest.h>
+
+TEST(Bin64Test,InitGet) {
+
+    EXPECT_EQ(0x1,bin_t(1,0).toUInt());
+    EXPECT_EQ(0xB,bin_t(2,1).toUInt());
+    EXPECT_EQ(0x2,bin_t(2,1).layer());
+    EXPECT_EQ(34,bin_t(34,2345).layer());
+    EXPECT_EQ(0x7ffffffffULL,bin_t(34,2345).layer_bits());
+    EXPECT_EQ(1,bin_t(2,1).layer_offset());
+    EXPECT_EQ(2345,bin_t(34,2345).layer_offset());
+    EXPECT_EQ((1<<1) - 1,bin_t(0,123).layer_bits());
+    EXPECT_EQ((1<<17) - 1,bin_t(16,123).layer_bits());
+
+}
+
+TEST(Bin64Test,Navigation) {
+
+    bin_t mid(4,18);
+    EXPECT_EQ(bin_t(5,9),mid.parent());
+    EXPECT_EQ(bin_t(3,36),mid.left());
+    EXPECT_EQ(bin_t(3,37),mid.right());
+    EXPECT_EQ(bin_t(5,9),bin_t(4,19).parent());
+    bin_t up32(30,1);
+    EXPECT_EQ(bin_t(31,0),up32.parent());
+
+}
+
+TEST(Bin64Test,Overflows) {
+
+    EXPECT_FALSE(bin_t::NONE.contains(bin_t(0,1)));
+    EXPECT_TRUE(bin_t::ALL.contains(bin_t(0,1)));
+    EXPECT_EQ(0,bin_t::NONE.base_length());
+    EXPECT_EQ(bin_t::NONE,bin_t::NONE.twisted(123));
+    /*EXPECT_EQ(bin64_t::NONE.parent(),bin64_t::NONE);
+    EXPECT_EQ(bin64_t::NONE.left(),bin64_t::NONE);
+    EXPECT_EQ(bin64_t::NONE.right(),bin64_t::NONE);
+    EXPECT_EQ(bin64_t::NONE,bin64_t(0,2345).left());
+    EXPECT_EQ(bin64_t::NONE,bin64_t::ALL.parent());
+*/
+}
+
+TEST(Bin64Test, Advanced) {
+
+    EXPECT_EQ(4,bin_t(2,3).base_length());
+    EXPECT_FALSE(bin_t(1,1234).is_base());
+    EXPECT_TRUE(bin_t(0,12345).is_base());
+    EXPECT_EQ(bin_t(0,2),bin_t(1,1).base_left());
+    bin_t peaks[64];
+    int peak_count = gen_peaks(7,peaks);
+    EXPECT_EQ(3,peak_count);
+    EXPECT_EQ(bin_t(2,0),peaks[0]);
+    EXPECT_EQ(bin_t(1,2),peaks[1]);
+    EXPECT_EQ(bin_t(0,6),peaks[2]);
+
+}
+
+TEST(Bin64Test, Bits) {
+    bin_t all = bin_t::ALL, none = bin_t::NONE, big = bin_t(40,18);
+    uint32_t a32 = bin_toUInt32(all), n32 = bin_toUInt32(none), b32 = bin_toUInt32(big);
+    EXPECT_EQ(0x7fffffff,a32);
+    EXPECT_EQ(0xffffffff,n32);
+    EXPECT_EQ(bin_t::NONE,bin_fromUInt32(b32));
+}
+
+int main (int argc, char** argv) {
+
+    testing::InitGoogleTest(&argc, argv);
+    return RUN_ALL_TESTS();
+
+}
diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/binstest2.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/binstest2.cpp
--- tribler-6.2.0/Tribler/SwiftEngine/tests/binstest2.cpp	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/SwiftEngine/tests/binstest2.cpp	2013-08-07 12:50:11.000000000 +0000
@@ -0,0 +1,523 @@
+/*
+ *  binstest2.cpp
+ *  serp++
+ *
+ *  Created by Victor Grishchenko on 3/22/09.
+ *  Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved.
+ * + */ +#include "binmap.h" + +#include +#include +#include + + +using namespace swift; + +/* +TEST(BinsTest,Routines) { + + uint32_t cell = (3<<10) | (3<<14) | (3<<0); + uint16_t correct = (1<<5) | (1<<7) | (1<<0); + uint16_t joined = binmap_t::join32to16(cell); + EXPECT_EQ(correct,joined); + + uint32_t split = binmap_t::split16to32(correct); + EXPECT_EQ(cell,split); + + EXPECT_EQ(binmap_t::NOJOIN,binmap_t::join32to16(cell|4)); + +} +*/ + + +TEST(BinsTest,SetGet) { + + binmap_t bs; + bin_t b3(1,0), b2(0,1), b4(0,2), b6(1,1), b7(2,0); + bs.set(b3); + //bs.dump("set done"); + EXPECT_TRUE(bs.is_filled(b3)); + //bs.dump("set checkd"); + EXPECT_TRUE(bs.is_filled(b2)); + //bs.dump("get b2 done"); + EXPECT_TRUE(bs.is_filled(b3)); + //bs.dump("get b3 done"); + EXPECT_TRUE(bs.is_empty(b4)); + EXPECT_TRUE(bs.is_empty(b6)); + EXPECT_FALSE(bs.is_filled(b7)); + EXPECT_FALSE(bs.is_empty(b7)); + EXPECT_TRUE(bs.is_filled(b3)); + bs.set(bin_t(1,1)); + EXPECT_TRUE(bs.is_filled(bin_t(2,0))); + +} + +/* +TEST(BinsTest,Iterator) { + binmap_t b; + b.set(bin_t(3,1)); + iterator i(&b,bin_t(0,0),false); + while (!i.solid()) + i.left(); + EXPECT_EQ(bin_t(3,0),i.bin()); + EXPECT_EQ(false,i.deep()); + EXPECT_EQ(true,i.solid()); + EXPECT_EQ(binmap_t::EMPTY,*i); + i.next(); + EXPECT_EQ(bin_t(3,1),i.bin()); + EXPECT_EQ(false,i.deep()); + EXPECT_EQ(true,i.solid()); + EXPECT_EQ(binmap_t::FILLED,*i); + i.next(); + EXPECT_TRUE(i.end()); +} +*/ + +TEST(BinsTest,Chess) { + binmap_t chess16; + for(int i=0; i<16; i++) { + if (i&1) { + chess16.set(bin_t(0,i)); + } else { + chess16.reset(bin_t(0,i)); + } + } + + for(int i=0; i<16; i++) { + if (i&1) { + EXPECT_TRUE(chess16.is_filled(bin_t(0,i))); + } else { + EXPECT_TRUE(chess16.is_empty(bin_t(0,i))); + } + } + EXPECT_FALSE(chess16.is_empty(bin_t(4,0))); + for(int i=0; i<16; i+=2) + chess16.set(bin_t(0,i)); + EXPECT_TRUE(chess16.is_filled(bin_t(4,0))); + EXPECT_TRUE(chess16.is_filled(bin_t(2,3))); + + chess16.set(bin_t(4,1)); + EXPECT_TRUE(chess16.is_filled(bin_t(5,0))); +} + +TEST(BinsTest,Staircase) { + + const int TOPLAYR = 44; + binmap_t staircase; + for(int i=0;i(1<<3)) + tw = tw.left(); + tw = tw.twisted(1<<3); + EXPECT_EQ(bin_t(3,2),tw); + b.twist(0); + EXPECT_TRUE(b.is_filled(bin_t(3,2))); + EXPECT_TRUE(b.is_empty(bin_t(3,3))); +} +*/ + +TEST(BinsTest,SeqLength) { + binmap_t b; + b.set(bin_t(3,0)); + b.set(bin_t(1,4)); + b.set(bin_t(0,10)); + b.set(bin_t(3,2)); + EXPECT_EQ(11,b.find_empty().base_offset()); +} + +TEST(BinsTest,EmptyFilled) { + // 1112 3312 2111 .... 
+ binmap_t b; + + EXPECT_TRUE(b.is_empty(bin_t::ALL)); + + b.set(bin_t(1,0)); + b.set(bin_t(0,2)); + b.set(bin_t(0,6)); + b.set(bin_t(1,5)); + b.set(bin_t(0,9)); + + EXPECT_FALSE(b.is_empty(bin_t::ALL)); + + EXPECT_TRUE(b.is_empty(bin_t(2,3))); + EXPECT_FALSE(b.is_filled(bin_t(2,3))); + //EXPECT_TRUE(b.is_solid(bin_t(2,3),binmap_t::MIXED)); + EXPECT_TRUE(b.is_filled(bin_t(1,0))); + EXPECT_TRUE(b.is_filled(bin_t(1,5))); + EXPECT_FALSE(b.is_filled(bin_t(1,3))); + + b.set(bin_t(0,3)); + b.set(bin_t(0,7)); + b.set(bin_t(0,8)); + + EXPECT_TRUE(b.is_filled(bin_t(2,0))); + EXPECT_TRUE(b.is_filled(bin_t(2,2))); + EXPECT_FALSE(b.is_filled(bin_t(2,1))); + + b.set(bin_t(1,2)); + EXPECT_TRUE(b.is_filled(bin_t(2,1))); +} + + +/*TEST(BinsTest,RangeOpTest) { + binmap_t a, b; + a.set(bin_t(0,0)); + a.set(bin_t(0,2)); + b.set(bin_t(0,1)); + b.set(bin_t(0,3)); + a.range_or(b,bin_t(1,0)); + EXPECT_TRUE(a.is_filled(bin_t(1,0))); + EXPECT_FALSE(a.is_filled(bin_t(1,1))); + + binmap_t f, s; + f.set(bin_t(3,0)); + s.set(bin_t(0,1)); + s.set(bin_t(0,4)); + f.range_remove(s,bin_t(2,1)); + + EXPECT_TRUE(f.is_filled(bin_t(2,0))); + EXPECT_FALSE(f.is_filled(bin_t(0,4))); + EXPECT_TRUE(f.is_filled(bin_t(0,5))); + + binmap_t z, x; + z.set(bin_t(1,0)); + z.set(bin_t(1,2)); + x.set(bin_t(0,1)); + x.set(bin_t(0,1)); +} +*/ + +/* +TEST(BinsTest,CoarseBitmap) { + binmap_t b; + b.set(bin_t(4,0)); + union {uint16_t i16[2]; uint32_t i32;}; + b.to_coarse_bitmap(i16,bin_t(5,0),0); + EXPECT_EQ((1<<16)-1,i32); + + b.set(bin_t(14,0)); + i32=0; + b.to_coarse_bitmap(i16,bin_t(15,0),10); + EXPECT_EQ((1<<16)-1,i32); + + binmap_t rough; + rough.set(bin_t(1,0)); + rough.set(bin_t(0,2)); + i32=0; + rough.to_coarse_bitmap(i16,bin_t(6,0),1); + EXPECT_EQ(1,i32); + + binmap_t ladder; + ladder.set(bin_t(6,2)); + ladder.set(bin_t(5,2)); + ladder.set(bin_t(4,2)); + ladder.set(bin_t(3,2)); + ladder.set(bin_t(2,2)); + ladder.set(bin_t(1,2)); + ladder.set(bin_t(0,2)); + i32=0; + ladder.to_coarse_bitmap(i16,bin_t(8,0),3); + EXPECT_EQ(0x00ff0f34,i32); + + binmap_t bin; + bin.set(bin_t(3,0)); + bin.set(bin_t(0,8)); + i32 = 0; + bin.to_coarse_bitmap(i16,bin_t(4,0),0); + EXPECT_EQ((1<<9)-1,i32); + + i32 = 0; + bin.to_coarse_bitmap(i16,bin_t(7,0),3); + EXPECT_EQ(1,i32); + + i32 = 0; + bin.to_coarse_bitmap(i16,bin_t(4,0),3); + EXPECT_EQ(1,i32); + + i32 = 0; + bin.to_coarse_bitmap(i16,bin_t(2,0),1); + EXPECT_EQ(3,i32&3); + + uint64_t bigint; + bigint = 0; + binmap_t bm; + bm.set(bin_t(6,0)); + bm.to_coarse_bitmap((uint16_t*)&bigint,bin_t(6,0),0); + EXPECT_EQ( 0xffffffffffffffffULL, bigint ); + +} +*/ + +/*TEST(BinsTest,AddSub) { + binmap_t b; + b|=15; + b-=1; + ASSERT_TRUE(b.contains(2)); + ASSERT_TRUE(b.contains(14)); + ASSERT_FALSE(b.contains(3)); + ASSERT_FALSE(b.contains(22)); + ASSERT_TRUE(b.contains(12)); + b-=13; + ASSERT_FALSE(b.contains(12)); + ASSERT_FALSE(b.contains(14)); + ASSERT_FALSE(b.contains(11)); + ASSERT_TRUE(b.contains(10)); +} + + +TEST(BinsTest,Peaks) { + bin::vec peaks = bin::peaks(11); + ASSERT_EQ(3,peaks.size()); + ASSERT_EQ(15,peaks[0]); + ASSERT_EQ(18,peaks[1]); + ASSERT_EQ(19,peaks[2]); +} + +TEST(BinsTest,Performance) { + binmap_t b; + std::set s; + clock_t start, end; + double b_time, s_time; + int b_size, s_size; + + start = clock(); + for(int i=1; i<(1<<20); i++) + b |= bin(i); + //b_size = b.bits.size(); + end = clock(); + b_time = ((double) (end - start)) / CLOCKS_PER_SEC; + //ASSERT_EQ(1,b.bits.size()); + + start = clock(); + for(int i=1; i<(1<<20); i++) + s.insert(i); + s_size = s.size(); + end = clock(); + s_time = 
((double) (end - start)) / CLOCKS_PER_SEC; + + printf("bins: %f (%i), set: %f (%i)\n",b_time,b_size,s_time,s_size); +}*/ + +int main (int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + return RUN_ALL_TESTS(); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/binstest3.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/binstest3.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/binstest3.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/binstest3.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,696 @@ +/* + * binstest3.cpp + * serp++ + * + * Created by Victor Grishchenko on 3/22/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + * ========================== + * Extended by Arno and Riccardo to hunt a bug in find_complement() with + * a range parameter. + */ +#include "binmap.h" +#include "binheap.h" + +#include +#include +#include + + +using namespace swift; + + +TEST(BinsTest,FindFiltered) { + + binmap_t data, filter; + data.set(bin_t(2,0)); + data.set(bin_t(2,2)); + data.set(bin_t(1,7)); + filter.set(bin_t(4,0)); + filter.reset(bin_t(2,1)); + filter.reset(bin_t(1,4)); + filter.reset(bin_t(0,13)); + + bin_t x = binmap_t::find_complement(data, filter, bin_t(4,0), 0); + EXPECT_EQ(bin_t(0,12),x); + +} + +TEST(BinsTest,FindFiltered1b) { + + binmap_t data, filter; + data.set(bin_t(2,0)); + data.set(bin_t(2,2)); + data.set(bin_t(1,7)); + filter.set(bin_t(4,0)); + filter.reset(bin_t(2,1)); + filter.reset(bin_t(1,4)); + filter.reset(bin_t(0,13)); + + char binstr[32]; + + bin_t s = bin_t(3,1); + fprintf(stderr,"Searching 0,12 from %s ", s.base_left().str(binstr ) ); + fprintf(stderr,"to %s\n", s.base_right().str(binstr ) ); + + bin_t x = binmap_t::find_complement(data, filter, s, 0); + EXPECT_EQ(bin_t(0,12),x); + +} + + +TEST(BinsTest,FindFiltered1c) { + + binmap_t data, filter; + data.set(bin_t(2,0)); + data.set(bin_t(2,2)); + data.set(bin_t(1,7)); + + filter.set(bin_t(4,0)); + filter.reset(bin_t(2,1)); + filter.reset(bin_t(1,4)); + //filter.reset(bin_t(0,13)); + + char binstr[32]; + + bin_t s = bin_t(3,1); + fprintf(stderr,"Searching 0,12x from %s ", s.base_left().str(binstr ) ); + fprintf(stderr,"to %s\n", s.base_right().str(binstr ) ); + + bin_t x = binmap_t::find_complement(data, filter, s, 0); + EXPECT_EQ(bin_t(0,12),x); + +} + + +TEST(BinsTest,FindFiltered2) { + + binmap_t data, filter; + for(int i=0; i<1024; i+=2) + data.set(bin_t(0,i)); + for(int j=0; j<1024; j+=2) + filter.set(bin_t(0,j)); + + fprintf(stderr,"test: width %d\n", filter.cells_number() ); + fprintf(stderr,"test: empty %llu\n", filter.find_empty().toUInt() ); + + + data.reset(bin_t(0,500)); + EXPECT_EQ(bin_t(0,500),binmap_t::find_complement(data, filter, bin_t(10,0), 0).base_left()); + data.set(bin_t(0,500)); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(10,0), 0).base_left()); + +} + + +// Range is strict subtree +TEST(BinsTest,FindFiltered3) { + + binmap_t data, filter; + for(int i=0; i<1024; i+=2) + data.set(bin_t(0,i)); + for(int j=0; j<1024; j+=2) + filter.set(bin_t(0,j)); + data.reset(bin_t(0,500)); + EXPECT_EQ(bin_t(0,500),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + data.set(bin_t(0,500)); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + +} + +// 1036 leaf tree + +TEST(BinsTest,FindFiltered4) { + + binmap_t data, filter; + for(int i=0; i<1036; i+=2) + data.set(bin_t(0,i)); + for(int j=0; j<1036; j+=2) + filter.set(bin_t(0,j)); + 
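+    // Gloss (inferred from the assertions below, not from the original
+    // comments): find_complement(data, filter, range, twist) reports a bin
+    // that is filled in |filter| but still empty in |data|, restricted to
+    // |range|. Clearing chunk (0,500) makes it the only such bin; once it
+    // is set again, nothing differs and bin_t::NONE is expected.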
data.reset(bin_t(0,500)); + EXPECT_EQ(bin_t(0,500),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + data.set(bin_t(0,500)); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + +} + +// Make 8 bin hole in 1036 tree + +TEST(BinsTest,FindFiltered5) { + + binmap_t data, filter; + for(int i=0; i<1036; i++) //completely full + data.set(bin_t(0,i)); + for(int j=0; j<1036; j++) + filter.set(bin_t(0,j)); + + for (int j=496; j<=503; j++) + data.reset(bin_t(0,j)); + + EXPECT_EQ(bin_t(3,62),binmap_t::find_complement(data, filter, bin_t(9,0), 0) ); + EXPECT_EQ(bin_t(0,496),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); +} + + +// Use simple example tree from RFC +TEST(BinsTest,FindFiltered6) { + + binmap_t data, filter; + for(int i=0; i<14; i+=2) //completely full example tree + data.set(bin_t(i)); + for(int j=0; j<14; j+=2) + filter.set(bin_t(j)); + + for (int j=4; j<=6; j+=2) // reset leaves 4 and 6 (int) + data.reset(bin_t(j)); + + EXPECT_EQ(bin_t(1,1),binmap_t::find_complement(data, filter, bin_t(2,0), 0) ); + EXPECT_EQ(bin_t(0,2),binmap_t::find_complement(data, filter, bin_t(2,0), 0).base_left()); +} + + +// diff in right tree, range is left tree +TEST(BinsTest,FindFiltered7) { + + binmap_t data, filter; + for(int i=0; i<14; i+=2) //completely full example tree + data.set(bin_t(i)); + data.reset(bin_t(4)); // clear 4 + for(int j=0; j<14; j+=2) + filter.set(bin_t(j)); + filter.reset(bin_t(4)); + + for (int j=8; j<=10; j+=2) // make diff out of range + data.reset(bin_t(j)); + + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,0), 0) ); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,0), 0).base_left()); +} + + + +// diff in left tree, range is right tree +TEST(BinsTest,FindFiltered8) { + + binmap_t data, filter; + for(int i=0; i<14; i+=2) //completely full example tree + data.set(bin_t(i)); + data.reset(bin_t(4)); // clear 4 + for(int j=0; j<14; j+=2) + filter.set(bin_t(j)); + filter.reset(bin_t(4)); + + for (int j=4; j<=6; j+=2) // make diff out of range + data.reset(bin_t(j)); + + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,1), 0) ); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,1), 0).base_left()); +} + + +// reverse empty/full +TEST(BinsTest,FindFiltered9) { + + binmap_t data, filter; + for(int i=0; i<14; i+=2) //completely empty example tree + data.reset(bin_t(i)); + data.set(bin_t(4)); // clear 4 + for(int j=0; j<14; j+=2) + filter.reset(bin_t(j)); + filter.set(bin_t(4)); + + for (int j=4; j<=6; j+=2) // make diff out of range + data.set(bin_t(j)); + + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,1), 0) ); + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(2,1), 0).base_left()); +} + + +// Make 8 bin hole in 999 tree, left subtree + +TEST(BinsTest,FindFiltered10) { + + binmap_t data, filter; + for(int i=0; i<999; i++) //completely full + data.set(bin_t(0,i)); + for(int j=0; j<999; j++) + filter.set(bin_t(0,j)); + + for (int j=496; j<=503; j++) + data.reset(bin_t(0,j)); + + EXPECT_EQ(bin_t(3,62),binmap_t::find_complement(data, filter, bin_t(9,0), 0) ); + EXPECT_EQ(bin_t(0,496),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); +} + + +// Make 8 bin hole in 999 tree, right subtree, does not start a 8-bin substree +TEST(BinsTest,FindFiltered11) { + + binmap_t data, filter; + for(int i=0; i<999; i++) //completely full + data.set(bin_t(0,i)); + for(int j=0; 
j<999; j++) + filter.set(bin_t(0,j)); + + for (int j=514; j<=521; j++) + data.reset(bin_t(0,j)); + + EXPECT_EQ(bin_t(1,257),binmap_t::find_complement(data, filter, bin_t(9,1), 0) ); + EXPECT_EQ(bin_t(0,514),binmap_t::find_complement(data, filter, bin_t(9,1), 0).base_left()); +} + +// Make 8 bin hole in 999 tree, move hole +TEST(BinsTest,FindFiltered12) { + + binmap_t data, filter; + for(int i=0; i<999; i++) //completely full + data.set(bin_t(0,i)); + for(int j=0; j<999; j++) + filter.set(bin_t(0,j)); + + for (int x=0; x<999-8; x++) + { + fprintf(stderr,"x%u ", x); + for (int j=x; j<=x+7; j++) + data.reset(bin_t(0,j)); + + int subtree = (x <= 511) ? 0 : 1; + EXPECT_EQ(bin_t(0,x),binmap_t::find_complement(data, filter, bin_t(9,subtree), 0).base_left()); + + // Restore + for (int j=x; j<=x+7; j++) { + data.set(bin_t(0,j)); + } + } +} + + +// Make 8 bin hole in sparse 999 tree, move hole +TEST(BinsTest,FindFiltered13) { + + binmap_t data, filter; + for(int i=0; i<999; i+=2) // sparse + data.set(bin_t(0,i)); + for(int j=0; j<999; j+=2) + filter.set(bin_t(0,j)); + + for (int x=0; x<999-8; x++) + { + fprintf(stderr,"x%u ", x); + for (int j=x; j<=x+7; j++) + data.reset(bin_t(0,j)); + + int y = (x % 2) ? x+1 : x; + int subtree = (x <= 511) ? 0 : 1; + if (x < 511) + EXPECT_EQ(bin_t(0,y),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + else if (x == 511) // sparse bitmap 101010101..., so actual diff in next subtree + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + else + EXPECT_EQ(bin_t(0,y),binmap_t::find_complement(data, filter, bin_t(9,1), 0).base_left()); + + + for(int i=0; i<999; i+=2) // sparse + data.set(bin_t(0,i)); + } +} + + +// Make 8 bin hole in sparse 999 tree, move hole +TEST(BinsTest,FindFiltered14) { + + binmap_t data, filter; + for(int i=0; i<999; i+=2) // sparse + data.set(bin_t(0,i)); + for(int j=0; j<999; j+=2) + filter.set(bin_t(0,j)); + + // Add other diff + filter.set(bin_t(0,995)); + + for (int x=0; x<999-8; x++) + { + fprintf(stderr,"x%u ", x); + for (int j=x; j<=x+7; j++) + data.reset(bin_t(0,j)); + + int y = (x % 2) ? x+1 : x; + int subtree = (x <= 511) ? 0 : 1; + if (x < 511) + EXPECT_EQ(bin_t(0,y),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + else if (x == 511) // sparse bitmap 101010101..., so actual diff in next subtree + EXPECT_EQ(bin_t::NONE,binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); + else + EXPECT_EQ(bin_t(0,y),binmap_t::find_complement(data, filter, bin_t(9,1), 0).base_left()); + + + for(int i=0; i<999; i+=2) // sparse + data.set(bin_t(0,i)); + } +} + + + +// Make holes at 292, problematic in a specific experiment +TEST(BinsTest,FindFiltered15) { + + binmap_t data, filter; + for(int i=0; i<999; i++) // completely full + data.set(bin_t(0,i)); + for(int j=0; j<999; j++) + filter.set(bin_t(0,j)); + + data.reset(bin_t(292)); + data.reset(bin_t(296)); + data.reset(bin_t(514)); + data.reset(bin_t(998)); + + EXPECT_EQ(bin_t(292),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); +} + + + +// VOD like. Make first hole at 292. +TEST(BinsTest,FindFiltered16) { + + binmap_t data, filter; + for(int i=0; i<292/2; i++) // prefix full + data.set(bin_t(0,i)); + for(int i=147; i<999; i+=21) // postfix sparse + { + for (int x=0; x<8; x++) + data.set(bin_t(0,i+x)); + } + + for(int j=0; j<999; j++) + filter.set(bin_t(0,j)); + + EXPECT_EQ(bin_t(292),binmap_t::find_complement(data, filter, bin_t(9,0), 0).base_left()); +} + + +// VOD like. 
Make first hole at 292. +TEST(BinsTest,FindFiltered17) { + + binmap_t offer, ack_hint_out; + for(int i=0; i<999; i++) // offer completely full + offer.set(bin_t(0,i)); + + for(int i=0; i<292/2; i++) // request prefix full + ack_hint_out.set(bin_t(0,i)); + for(int i=147; i<999; i+=21) // request postfix sparse + { + for (int x=0; x<8; x++) + ack_hint_out.set(bin_t(0,i+x)); + } + + binmap_t binmap; + + // report the first bin we find + int layer = 0; + bin_t::uint_t twist = 0; + bin_t hint = bin_t::NONE; + while (hint.is_none() && layer <10) + { + char binstr[32]; + + bin_t curr = bin_t(layer++,0); + binmap.fill(offer); + binmap_t::copy(binmap, ack_hint_out, curr); + hint = binmap_t::find_complement(binmap, offer, twist); + binmap.clear(); + } + + EXPECT_EQ(bin_t(292),hint); +} + + +// VOD like. Make first hole at 292. Twisting + patching holes +TEST(BinsTest,FindFiltered19) { + + binmap_t offer, ack_hint_out; + for(int i=0; i<999; i++) // offer completely full + offer.set(bin_t(0,i)); + + for(int i=0; i<292/2; i++) // request prefix full + ack_hint_out.set(bin_t(0,i)); + for(int i=147; i<999; i+=21) // request postfix sparse + { + for (int x=0; x<8; x++) + ack_hint_out.set(bin_t(0,i+x)); + } + + binmap_t binmap; + + int layer = 0; + bin_t::uint_t twist = 0; + bin_t hint = bin_t::NONE; + while (!hint.contains(bin_t(292))) + { + char binstr[32]; + + twist = rand(); + + bin_t curr = bin_t(layer,0); + if (layer < 10) + layer++; + + binmap.fill(offer); + binmap_t::copy(binmap, ack_hint_out, curr); + hint = binmap_t::find_complement(binmap, offer, twist); + + if (!hint.is_none()) + fprintf(stderr,"Found alt "); + binmap.clear(); + + //patch hole + ack_hint_out.set(hint); + } + + char binstr[32],binstr2[32]; + EXPECT_EQ(bin_t(292),hint); +} + + +void create_ack_hint_out(binmap_t &ack_hint_out) +{ + ack_hint_out.clear(); + for(int i=0; i<292/2; i++) // request prefix full + ack_hint_out.set(bin_t(0,i)); + for(int i=147; i<999; i+=21) // request postfix sparse + { + for (int x=0; x<8; x++) + ack_hint_out.set(bin_t(0,i+x)); + } +} + + + +// VOD like. Make first hole at 292. Twisting + patching holes. Stalled +// at Playbackpos, looking increasingly higher layers. 
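+// A gloss of the pattern used in FindFiltered17/19 above, which this test
+// extends: fill a scratch binmap from the offer, copy the requester's
+// ack_hint_out over a search range, then ask find_complement() for the next
+// hint and patch it into ack_hint_out. Here the range is re-rooted at the
+// playback position and widened layer by layer while the position stalls.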
+TEST(BinsTest,FindFiltered20) { + + binmap_t offer, ack_hint_out; + for(int i=0; i<999; i++) // offer completely full + offer.set(bin_t(0,i)); + + create_ack_hint_out(ack_hint_out); + + binmap_t binmap; + + int layer = 0; + bin_t::uint_t twist = 0; + bin_t hint = bin_t::NONE; + + for (layer=0; layer<=9; layer++) + { + fprintf(stderr,"Layer %d\n", layer ); + while (!hint.contains(bin_t(292))) + { + char binstr[32]; + + twist = rand(); + + bin_t curr = bin_t(0,292/2); + for (int p=0; p +#include +#include + + +using namespace swift; + + +TEST(BinsTest,FindEmptyStart1){ + + binmap_t hole; + + for (int s=0; s<8; s++) + { + for (int i=s; i<8; i++) + { + hole.set(bin_t(3,0)); + hole.reset(bin_t(0,i)); + fprintf(stderr,"\ntest: from %llu want %llu\n", bin_t(0,s).toUInt(), bin_t(0,i).toUInt() ); + bin_t f = hole.find_empty(bin_t(0,s)); + EXPECT_EQ(bin_t(0,i),f); + } + } +} + + +uint64_t seqcomp(binmap_t *ack_out_,uint32_t chunk_size_,uint64_t size_, int64_t offset) +{ + bin_t binoff = bin_t(0,(offset - (offset % chunk_size_)) / chunk_size_); + + fprintf(stderr,"seqcomp: binoff is %llu\n", binoff.toUInt() ); + + bin_t nextempty = ack_out_->find_empty(binoff); + + fprintf(stderr,"seqcomp: nextempty is %llu\n", nextempty.toUInt() ); + + if (nextempty == bin_t::NONE || nextempty.base_offset() * chunk_size_ > size_) + return size_-offset; // All filled from offset + + bin_t::uint_t diffc = nextempty.layer_offset() - binoff.layer_offset(); + uint64_t diffb = diffc * chunk_size_; + if (diffb > 0) + diffb -= (offset % chunk_size_); + + return diffb; +} + + +TEST(BinsTest,FindEmptyStart2){ + + binmap_t hole; + + uint32_t chunk_size = 1024; + uint64_t size = 7*1024 + 15; + uint64_t incr = 237; + + //for (int64_t offset=0; offset +#include +#include +#include +#include "p2tp.h" + +using namespace std; +using namespace p2tp; + +class SimPeer; + +struct SimPacket { + SimPacket(int from, int to, const SimPacket* toack, bool data) ; + int peerfrom, peerto; + tint datatime; + tint acktime; + tint arrivaltime; +}; + +tint now = 0; + +/** very simplified; uplink is the bottleneck */ +class SimPeer { +public: + SimPeer (tint tt, tint lt, int qlen) : travtime(tt), latency(lt), queue_length(qlen) {} + int queue_length; + int travtime; + tint freetime; + tint latency; + int unackd; + int rcvd, sent; + queue packet_queue; + queue dropped_queue; + CongestionControl congc; + + void send(SimPacket pck) { + if (packet_queue.size()==queue_length) { + dropped_queue.push(pck); + return; + } + tint start = max(now,freetime); + tint done = pck.datatime ? start+travtime : start; + freetime = done; + pck.arrivaltime = done + latency; + packet_queue.push(pck); + } + + SimPacket recv () { + assert(!packet_queue.empty()); + SimPacket ret = packet_queue.front(); + packet_queue.pop(); + return ret; + } + + tint next_recv_time () const { + return packet_queue.empty() ? 
NEVER : packet_queue.front().arrivaltime; + } + + void turn () { + SimPacket rp = recv(); + SimPacket reply; + now = rp.arrivaltime; + if (rp.acktime) { + congc.RttSample(rp.arrivaltime-rp.acktime); + congc.OnCongestionEvent(CongestionControl::ACK_EV); + unackd--; + rcvd++; + } + if (rp.datatime) { + congc.OnCongestionEvent(CongestionControl::DATA_EV); + reply.acktime = reply.datatime; + } + if (!dropped_queue.empty() && dropped_queue.top().datatimeunackd) { + unackd++; + reply.datatime = now; + sent++; + } + rp.from->send(reply); + } +}; + +TEST(P2TP, TailDropTest) { + // two peers exchange packets over 100ms link with tail-drop discipline + // bw 1Mbits => travel time of 1KB is ~10ms + SimPeer a(10*MSEC,100*MSEC,20), b(10*MSEC,100*MSEC,20); + a.send(SimPacket(&b,now,0,0)); + while (now<60*60*SEC) + if (a.next_recv_time() +#include +#include +#include "swift.h" + + +using namespace swift; + +struct event evcompl; +int size, copy; + +void IsCompleteCallback(int fd, short event, void *arg) { + if (swift::SeqComplete(copy)!=size) + evtimer_add(&evcompl, tint2tv(TINT_SEC)); + else + event_base_loopexit(Channel::evbase, NULL); +} + +TEST(Connection,CwndTest) { + + Channel::evbase = event_base_new(); + + srand ( time(NULL) ); + + unlink("test_file0-copy.dat"); + struct stat st; + int ret = stat("test_file0.dat",&st); + + ASSERT_EQ(0,ret); + size = st.st_size;//, sizek = (st.st_size>>10) + (st.st_size%1024?1:0) ; + Channel::SELF_CONN_OK = true; + + int sock1 = swift::Listen(7001); + ASSERT_TRUE(sock1>=0); + + int file = swift::Open("test_file0.dat"); + FileTransfer* fileobj = FileTransfer::file(file); + //FileTransfer::instance++; + + swift::SetTracker(Address("127.0.0.1",7001)); + + copy = swift::Open("test_file0-copy.dat",fileobj->root_hash()); + + evtimer_assign(&evcompl, Channel::evbase, IsCompleteCallback, NULL); + evtimer_add(&evcompl, tint2tv(TINT_SEC)); + + //swift::Loop(TINT_SEC); + event_base_dispatch(Channel::evbase); + + //int count = 0; + //while (swift::SeqComplete(copy)!=size && count++<600) + // swift::Loop(TINT_SEC); + ASSERT_EQ(size,swift::SeqComplete(copy)); + + swift::Close(file); + swift::Close(copy); + + swift::Shutdown(sock1); + +} + + +int main (int argc, char** argv) { + + swift::LibraryInit(); + testing::InitGoogleTest(&argc, argv); + Channel::debug_file = stdout; + int ret = RUN_ALL_TESTS(); + return ret; + +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/dgramtest.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/dgramtest.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/dgramtest.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/dgramtest.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,115 @@ +/* + * dgramtest.cpp + * serp++ + * + * Created by Victor Grishchenko on 3/13/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ * + */ +#include +//#include +#include "swift.h" // Arno: for LibraryInit + +using namespace swift; + +struct event_base *evbase; +struct event evrecv; + +void ReceiveCallback(int fd, short event, void *arg) { +} + +TEST(Datagram, AddressTest) { + Address addr("127.0.0.1:1000"); + EXPECT_EQ(INADDR_LOOPBACK,addr.ipv4()); + EXPECT_EQ(1000,addr.port()); + Address das2("node300.das2.ewi.tudelft.nl:20000"); + Address das2b("130.161.211.200:20000"); + EXPECT_EQ(das2.ipv4(),das2b.ipv4()); + EXPECT_EQ(20000,das2.port()); +} + + +TEST(Datagram, BinaryTest) { + evutil_socket_t socket = Channel::Bind(7001); + ASSERT_TRUE(socket>0); + struct sockaddr_in addr; + addr.sin_family = AF_INET; + addr.sin_port = htons(7001); + addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + const char * text = "text"; + const uint8_t num8 = 0xab; + const uint16_t num16 = 0xabcd; + const uint32_t num32 = 0xabcdef01; + const uint64_t num64 = 0xabcdefabcdeffULL; + char buf[1024]; + int i; + struct evbuffer *snd = evbuffer_new(); + evbuffer_add(snd, text, strlen(text)); + evbuffer_add_8(snd, num8); + evbuffer_add_16be(snd, num16); + evbuffer_add_32be(snd, num32); + evbuffer_add_64be(snd, num64); + int datalen = evbuffer_get_length(snd); + unsigned char *data = evbuffer_pullup(snd, datalen); + for(i=0; i0); + ASSERT_TRUE(sock2>0); + /*struct sockaddr_in addr1, addr2; + addr1.sin_family = AF_INET; + addr1.sin_port = htons(10001); + addr1.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + addr2.sin_family = AF_INET; + addr2.sin_port = htons(10002); + addr2.sin_addr.s_addr = htonl(INADDR_LOOPBACK);*/ + struct evbuffer *snd = evbuffer_new(); + evbuffer_add_32be(snd, 1234); + Channel::SendTo(sock1,Address("127.0.0.1:10002"),snd); + evbuffer_free(snd); + event_assign(&evrecv, evbase, sock2, EV_READ, ReceiveCallback, NULL); + event_add(&evrecv, NULL); + event_base_dispatch(evbase); + struct evbuffer *rcv = evbuffer_new(); + Address address; + Channel::RecvFrom(sock2, address, rcv); + uint32_t test = evbuffer_remove_32be(rcv); + ASSERT_EQ(1234,test); + evbuffer_free(rcv); + Channel::CloseSocket(sock1); + Channel::CloseSocket(sock2); +} + +int main (int argc, char** argv) { + swift::LibraryInit(); + evbase = event_base_new(); + testing::InitGoogleTest(&argc, argv); + return RUN_ALL_TESTS(); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/freemap.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/freemap.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/freemap.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/freemap.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,87 @@ +/* + * freemap.cpp + * serp++ + * + * Created by Victor Grishchenko on 3/22/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ * + */ +#include +#include +#include +#include "binmap.h" + +using namespace swift; + +#ifdef _MSC_VER + #define RANDOM rand +#else + #define RANDOM random +#endif + +uint8_t rand_norm (uint8_t lim) { + long rnd = RANDOM() & ((1<>= 1; + } + return bits; +} + +TEST(FreemapTest,Freemap) { + binmap_t space; + const bin_t top(30,0); + space.reset(top); + typedef std::pair timebin_t; + typedef std::set ts_t; + ts_t to_free; + for (int t=0; t<1000000; t++) { + + if ((t % 1000) == 0) + printf("."); + + if (t<500000 || t>504000) { + uint8_t lr = rand_norm(28); + bin_t alloc = space.find_empty(); + while (alloc.layer()>lr) + alloc = alloc.left(); + ASSERT_NE(0ULL,~alloc.toUInt()); + EXPECT_TRUE(space.is_empty(alloc)); + space.set(alloc); + long dealloc_time = 1<first<=t) { + bin_t freebin = to_free.begin()->second; + to_free.erase(to_free.begin()); + space.reset(freebin); +#ifdef SHOWOUTPUT + printf("freed at %lli\n", + freebin.toUInt()); +#endif + } + // log: space taken, gaps, binmap cells, tree cells + int cells = space.cells_number(); + +#ifdef SHOWOUTPUT + printf("time %i cells used %i blocks %i\n", + t,cells,(int)to_free.size()); +#endif + //space.dump("space"); + } + for(ts_t::iterator i=to_free.begin(); i!=to_free.end(); i++) + space.reset(i->second); + EXPECT_TRUE(space.is_empty(top)); +} + +int main (int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + return RUN_ALL_TESTS(); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/hashtest.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/hashtest.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/hashtest.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/hashtest.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,127 @@ +/* + * hashtest.cpp + * serp++ + * + * Created by Victor Grishchenko on 3/12/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
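+ *
+ * Tree conventions checked below: a leaf hash is SHA1 over a 1024-byte
+ * chunk, an interior node is SHA1 over the concatenation of its two
+ * children, and a missing sibling is padded with the all-zero hash.
+ * For the two-chunk file "456" the root is therefore simply
+ * Sha1Hash(hash456a, hash456b), i.e. rooth456.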
+ * + */ +#include +#include "bin.h" +#include +#include "hashtree.h" +#include "swift.h" + +using namespace swift; + +char hash123[] = "a8fdc205a9f19cc1c7507a60c4f01b13d11d7fd0"; +char rooth123[] = "a8fdc205a9f19cc1c7507a60c4f01b13d11d7fd0"; + +char hash456a[] = "4d38c7459a659d769bb956c2d758d266008199a4"; +char hash456b[] = "a923e4b60d2a2a2a5ede87479e0314b028e3ae60"; +char rooth456[] = "5b53677d3a695f29f1b4e18ab6d705312ef7f8c3"; + + +TEST(Sha1HashTest,Trivial) { + Sha1Hash hash("123\n"); + EXPECT_STREQ(hash123,hash.hex().c_str()); +} + + +TEST(Sha1HashTest,OfferDataTest) { + Sha1Hash roothash123(true,hash123); + //for(bin_t pos(0,0); !pos.is_all(); pos=pos.parent()) + // roothash123 = Sha1Hash(roothash123,Sha1Hash::ZERO); + unlink("123"); + EXPECT_STREQ(rooth123,roothash123.hex().c_str()); + Storage storage("123"); + MmapHashTree tree(&storage,roothash123); + tree.OfferHash(bin_t(0,0),Sha1Hash(true,hash123)); + ASSERT_EQ(1,tree.size_in_chunks()); + ASSERT_TRUE(tree.OfferData(bin_t(0,0), "123\n", 4)); + unlink("123"); + ASSERT_EQ(4,tree.size()); +} + + +TEST(Sha1HashTest,SubmitTest) { + FILE* f123 = fopen("123","wb+"); + fprintf(f123, "123\n"); + fclose(f123); + Storage storage("123"); + MmapHashTree ht123(&storage); + EXPECT_STREQ(hash123,ht123.hash(bin_t(0,0)).hex().c_str()); + EXPECT_STREQ(rooth123,ht123.root_hash().hex().c_str()); + EXPECT_EQ(4,ht123.size()); +} + + +TEST(Sha1HashTest,OfferDataTest2) { + char data456a[1024]; // 2 chunks with cs 1024, 3 nodes in tree + for (int i=0; i<1024; i++) + data456a[i] = '$'; + char data456b[4]; + for (int i=0; i<4; i++) + data456b[i] = '$'; + + FILE* f456 = fopen("456","wb"); + fwrite(data456a,1,1024,f456); + fwrite(data456b,1,4,f456); + fclose(f456); + + Sha1Hash roothash456(Sha1Hash(true,hash456a),Sha1Hash(true,hash456b)); + unlink("456"); + EXPECT_STREQ(rooth456,roothash456.hex().c_str()); + Storage storage("456"); + MmapHashTree tree(&storage,roothash456); + tree.OfferHash(bin_t(1,0),roothash456); + tree.OfferHash(bin_t(0,0),Sha1Hash(true,hash456a)); + tree.OfferHash(bin_t(0,1),Sha1Hash(true,hash456b)); + ASSERT_EQ(2,tree.size_in_chunks()); + ASSERT_TRUE(tree.OfferData(bin_t(0,0), data456a, 1024)); + ASSERT_TRUE(tree.OfferData(bin_t(0,1), data456b, 4)); + unlink("456"); + ASSERT_EQ(1028,tree.size()); +} + + +/*TEST(Sha1HashTest,HashFileTest) { + uint8_t a [1024], b[1024], c[1024]; + memset(a,'a',1024); + memset(b,'b',1024); + memset(c,'c',1024); + Sha1Hash aaahash(a,1024), bbbhash(b,1024), ccchash(c,1024); + Sha1Hash abhash(aaahash,bbbhash), c0hash(ccchash,Sha1Hash::ZERO); + Sha1Hash aabbccroot(abhash,c0hash); + for(bin pos=bin(7); pos>sys.stderr,"SwiftProcess: __init__: Running",args,"workdir",self.workdir + + self.stdoutfile = tempfile.NamedTemporaryFile(delete=False) + + #self.popen = subprocess.Popen(args,stdout=subprocess.PIPE,cwd=self.workdir) + self.popen = subprocess.Popen(args,stdout=self.stdoutfile,cwd=self.workdir) + + self.setUpPostSession() + + def setUpPreSession(self): + self.binpath = os.path.join("..","swift") + self.listenport = random.randint(10001,10999) + # NSSA control socket + self.cmdport = random.randint(11001,11999) + # content web server + self.httpport = random.randint(12001,12999) + + self.workdir = '.' 
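+        # The listen/cmd/http ports above are randomized so that repeated
+        # test runs are unlikely to collide with sockets still held by an
+        # earlier swift process.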
+ self.destdir = None + self.filename = None + + def setUpPostSession(self): + pass + + def tearDown(self): + """ unittest test tear down code """ + if self.popen is not None: + self.popen.kill() + + + +MULTIFILE_PATHNAME = "META-INF-multifilespec.txt" + +def filelist2spec(filelist): + # TODO: verify that this gives same sort as C++ CreateMultiSpec + filelist.sort() + + specbody = "" + totalsize = 0L + for pathname,flen in filelist: + specbody += pathname+" "+str(flen)+"\n" + totalsize += flen + + specsize = len(MULTIFILE_PATHNAME)+1+0+1+len(specbody) + numstr = str(specsize) + numstr2 = str(specsize+len(str(numstr))) + if (len(numstr) == len(numstr2)): + specsize += len(numstr) + else: + specsize += len(numstr)+(len(numstr2)-len(numstr)) + + spec = MULTIFILE_PATHNAME+" "+str(specsize)+"\n" + spec += specbody + return spec + + + +def bytestr2int(b): + if b == "": + return None + else: + return int(b) + + +def rangestr2triple(rangestr,length): + # Handle RANGE query + bad = False + type, seek = string.split(rangestr,'=') + if seek.find(",") != -1: + # - Range header contains set, not supported at the moment + bad = True + else: + firstbytestr, lastbytestr = string.split(seek,'-') + firstbyte = bytestr2int(firstbytestr) + lastbyte = bytestr2int(lastbytestr) + + if length is None: + # - No length (live) + bad = True + elif firstbyte is None and lastbyte is None: + # - Invalid input + bad = True + elif firstbyte >= length: + bad = True + elif lastbyte >= length: + if firstbyte is None: + """ If the entity is shorter than the specified + suffix-length, the entire entity-body is used. + """ + lastbyte = length-1 + else: + bad = True + + if bad: + return (-1,-1,-1) + + if firstbyte is not None and lastbyte is None: + # "100-" : byte 100 and further + nbytes2send = length - firstbyte + lastbyte = length - 1 + elif firstbyte is None and lastbyte is not None: + # "-100" = last 100 bytes + nbytes2send = lastbyte + firstbyte = length - lastbyte + lastbyte = length - 1 + + else: + nbytes2send = lastbyte+1 - firstbyte + + return (firstbyte,lastbyte,nbytes2send) + + + + +class TestFrameMultiFileSeek(TestAsServer): + """ + Framework for multi-file tests. 
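+
+    The spec written by filelist2spec() is a text file whose first line
+    names the spec itself together with its own total byte size, followed
+    by one "pathname size" line per content file. Since that size counts
+    its own decimal digits, it is computed as a fixed point: take the size
+    without the number, then check whether appending the digit count
+    changes the number of digits. For example,
+
+        filelist2spec([("MyCollection/anita.ts", 123)])
+
+    yields "META-INF-multifilespec.txt 56\nMyCollection/anita.ts 123\n",
+    56 being the length of the whole spec including the two digits of "56".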
+ """ + + def setUpPreSession(self): + TestAsServer.setUpPreSession(self) + self.cmdport = None + self.destdir = tempfile.mkdtemp() + + print >>sys.stderr,"test: destdir is",self.destdir + + self.setUpFileList() + + idx = self.filelist[0][0].find("/") + specprefix = self.filelist[0][0][0:idx] + + prefixdir = os.path.join(self.destdir,specprefix) + os.mkdir(prefixdir) + + # Create content + for fn,s in self.filelist: + osfn = fn.replace("/",os.sep) + fullpath = os.path.join(self.destdir,osfn) + f = open(fullpath,"wb") + data = fn[len(specprefix)+1] * s + f.write(data) + f.close() + + # Create spec + self.spec = filelist2spec(self.filelist) + + fullpath = os.path.join(self.destdir,MULTIFILE_PATHNAME) + f = open(fullpath,"wb") + f.write(self.spec) + f.close() + + self.filename = fullpath + + def setUpFileList(self): + self.filelist = [] + # Minimum 1 entry + + def setUpPostSession(self): + TestAsServer.setUpPostSession(self) + + # Allow it to write root hash + time.sleep(2) + + f = open(self.stdoutfile.name,"rb") + output = f.read(1024) + f.close() + + prefix = "Root hash: " + idx = output.find(prefix) + if idx != -1: + self.roothashhex = output[idx+len(prefix):idx+len(prefix)+40] + else: + self.assert_(False,"Could not read roothash from swift output") + + print >>sys.stderr,"test: setUpPostSession: roothash is",self.roothashhex + + self.urlprefix = "http://127.0.0.1:"+str(self.httpport)+"/"+self.roothashhex + + def test_read_all(self): + + url = self.urlprefix + req = urllib2.Request(url) + resp = urllib2.urlopen(req) + data = resp.read() + + # Read and compare content + if data[0:len(self.spec)] != self.spec: + self.assert_(False,"returned content doesn't match spec") + offset = len(self.spec) + for fn,s in self.filelist: + osfn = fn.replace("/",os.sep) + fullpath = os.path.join(self.destdir,osfn) + f = open(fullpath,"rb") + content = f.read() + f.close() + + if data[offset:offset+s] != content: + self.assert_(False,"returned content doesn't match file "+fn ) + + offset += s + + self.assertEqual(offset, len(data), "returned less content than expected" ) + + + def test_read_file0(self): + wanttup = self.filelist[0] + self._test_read_file(wanttup) + + def test_read_file1(self): + if len(self.filelist) > 1: + wanttup = self.filelist[1] + self._test_read_file(wanttup) + + def test_read_file2(self): + if len(self.filelist) > 2: + wanttup = self.filelist[2] + self._test_read_file(wanttup) + + def _test_read_file(self,wanttup): + url = self.urlprefix+"/"+wanttup[0] + req = urllib2.Request(url) + resp = urllib2.urlopen(req) + data = resp.read() + resp.close() + + osfn = wanttup[0].replace("/",os.sep) + fullpath = os.path.join(self.destdir,osfn) + f = open(fullpath,"rb") + content = f.read() + f.close() + + if data != content: + self.assert_(False,"returned content doesn't match file "+osfn ) + + self.assertEqual(len(content), len(data), "returned less content than expected" ) + + def test_read_file0_range(self): + wanttup = self.filelist[0] + self._test_read_file_range(wanttup,"-2") + self._test_read_file_range(wanttup,"0-2") + self._test_read_file_range(wanttup,"2-") + self._test_read_file_range(wanttup,"4-10") + + def test_read_file1_range(self): + if len(self.filelist) > 1: + wanttup = self.filelist[1] + self._test_read_file_range(wanttup,"-2") + self._test_read_file_range(wanttup,"0-2") + self._test_read_file_range(wanttup,"2-") + self._test_read_file_range(wanttup,"4-10") + + def test_read_file2_range(self): + if len(self.filelist) > 2: + wanttup = self.filelist[2] + 
self._test_read_file_range(wanttup,"-2") + self._test_read_file_range(wanttup,"0-2") + self._test_read_file_range(wanttup,"2-") + self._test_read_file_range(wanttup,"4-10") + + + def _test_read_file_range(self,wanttup,rangestr): + url = self.urlprefix+"/"+wanttup[0] + req = urllib2.Request(url) + val = "bytes="+rangestr + req.add_header("Range", val) + (firstbyte,lastbyte,nbytes) = rangestr2triple(val,wanttup[1]) + + print >>sys.stderr,"test: Requesting",firstbyte,"to",lastbyte,"total",nbytes,"from",wanttup[0] + + resp = urllib2.urlopen(req) + data = resp.read() + resp.close() + + osfn = wanttup[0].replace("/",os.sep) + fullpath = os.path.join(self.destdir,osfn) + f = open(fullpath,"rb") + content = f.read() + f.close() + + #print >>sys.stderr,"test: got",`data` + #print >>sys.stderr,"test: want",`content[firstbyte:lastbyte+1]` + + if data != content[firstbyte:lastbyte+1]: + self.assert_(False,"returned content doesn't match file "+osfn ) + + self.assertEqual(nbytes, len(data), "returned less content than expected" ) + + +class TestMFSAllAbove1K(TestFrameMultiFileSeek): + """ + Concrete test of files all > 1024 bytes + """ + + def setUpFileList(self): + self.filelist = [] + self.filelist.append(("MyCollection/anita.ts",1234)) + self.filelist.append(("MyCollection/harry.ts",5000)) + self.filelist.append(("MyCollection/sjaak.ts",24567)) + + +class TestMFS1stSmall(TestFrameMultiFileSeek): + """ + Concrete test with 1st file fitting in 1st chunk (i.e. spec+file < 1024) + """ + def setUpFileList(self): + self.filelist = [] + self.filelist.append(("MyCollection/anita.ts",123)) + self.filelist.append(("MyCollection/harry.ts",5000)) + self.filelist.append(("MyCollection/sjaak.ts",24567)) + + +def test_suite(): + suite = unittest.TestSuite() + suite.addTest(unittest.makeSuite(TestMFSAllAbove1K)) + suite.addTest(unittest.makeSuite(TestMFS1stSmall)) + + return suite + + +def main(): + unittest.main(defaultTest='test_suite',argv=[sys.argv[0]]) + +if __name__ == "__main__": + main() + + \ No newline at end of file diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/ledbattest.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/ledbattest.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/ledbattest.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/ledbattest.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,184 @@ +/* + * ledbattest.cpp + * + * BROKEN: Arno: must be rewritten to libevent + * + * Created by Victor Grishchenko on 3/22/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
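+ *
+ * Control law exercised here (LEDBAT, cf. RFC 6817): every ACK carries
+ * the receiver's timestamp; the sender estimates
+ *
+ *     queueing_delay = current_delay - min_delay   (per 30 s base-delay bins)
+ *     off_target     = TARGET - queueing_delay     (TARGET = 25 ms)
+ *
+ * and nudges cwnd by roughly GAIN * off_target per ACK (GAIN = 1/TARGET),
+ * growing while the bottleneck queue is under the target and backing off
+ * once the sender's own traffic pushes the delay beyond it.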
+ * + */ +#include +#include +#include +#include +#include "swift.h" +#include + +using namespace swift; +using namespace std; + +/** + TODO + * losses + * smooth rate + * seq 12345 stop + * busy pipe => negative cwnd +*/ + +TEST(Datagram,LedbatTest) { + + int MAX_REORDERING = 3; + tint TARGET = 25*TINT_MSEC; + float GAIN = 1.0/TARGET; + int seq_off = 0; + float cwnd = 1; + tint DELAY_BIN = TINT_SEC*30; + tint min_delay = TINT_NEVER; + tint rtt_avg = TINT_NEVER>>4, dev_avg = TINT_NEVER>>4; + tint last_bin_time = 0; + tint last_drop_time = 0; + int delay_bin = 0; + deque history, delay_history; + tint min_delay_bins[4] = {TINT_NEVER,TINT_NEVER, + TINT_NEVER,TINT_NEVER}; + tint cur_delays[4] = {TINT_NEVER,TINT_NEVER, + TINT_NEVER,TINT_NEVER}; + tint last_sec = 0; + int sec_ackd = 0; + + evutil_socket_t send_sock = Datagram::Bind(10001); // bind sending socket + evutil_socket_t ack_sock = Datagram::Bind(10002); // bind receiving socket + struct sockaddr_in send_to, ack_to; + send_to.sin_family = AF_INET; + send_to.sin_port = htons(10002); + send_to.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + ack_to.sin_family = AF_INET; + ack_to.sin_port = htons(10001); + ack_to.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + uint8_t* garbage = (uint8_t*) malloc(1024); + evutil_socket_t socks[2] = {send_sock,ack_sock}; + evutil_socket_t sock2read; + tint wait_time = 100*TINT_MSEC; + + while (sock2read = Datagram::Wait(2,socks,wait_time)) { + tint now = Datagram::Time(); + if (sock2read==ack_sock) { + Datagram data(ack_sock); // send an acknowledgement + data.Recv(); + int seq = data.Pull32(); + Datagram ack(ack_sock,ack_to); + ack.Push32(seq); + ack.Push64(now); + if (4+8!=ack.Send()) + fprintf(stderr,"short write\n"); + fprintf(stderr,"%lli rcvd%i\n",now/TINT_SEC,seq); + //cc->OnDataRecv(bin64_t(0,seq)); + // TODO: peer cwnd !!! 
+ continue; + } + if (sock2read==send_sock) { // process an acknowledgement + Datagram ack(send_sock); + ack.Recv(); + int seq = ack.Pull32(); + tint arrival_time = ack.Pull64(); + seq -= seq_off; + if (seq<0) + continue; + if (seq>=history.size()) + continue; + if (history[seq]==0) + continue; + tint send_time = history[seq]; + history[seq] = 0; + if (seq>MAX_REORDERING*2) { //loss + if (last_drop_time delay) + min_delay_bins[delay_bin] = delay; + if (delay < min_delay) + min_delay = delay; + cur_delays[(seq_off+seq)%4] = delay; + tint current_delay = TINT_NEVER; + for(int i=0; i<4; i++) + if (current_delay > cur_delays[i]) + current_delay = cur_delays[i]; // FIXME avg + tint queueing_delay = current_delay - min_delay; + // adjust cwnd + tint off_target = TARGET - queueing_delay; + //cerr<<"\t"< +#include +#ifdef _MSC_VER + #include "compat/stdint.h" + #include +#else + #include + #include + #include +#endif +#include +#include +#include "datagram.h" +#include "swift.h" +#include + +using namespace swift; +using namespace std; + +/** + TODO + * losses + * smooth rate + * seq 12345 stop + * busy pipe => negative cwnd +*/ + +unsigned long dest_addr; +int send_port = 10001; +int ack_port = 10002; + +TEST(Datagram,LedbatTest) { + + int MAX_REORDERING = 3; + tint TARGET = 25*TINT_MSEC; + float GAIN = 1.0/TARGET; + int seq_off = 0; + float cwnd = 1; + tint DELAY_BIN = TINT_SEC*30; + tint min_delay = TINT_NEVER; + tint rtt_avg = TINT_NEVER>>4, dev_avg = TINT_NEVER>>4; + tint last_bin_time = 0; + tint last_drop_time = 0; + int delay_bin = 0; + deque history, delay_history; + tint min_delay_bins[4] = {TINT_NEVER,TINT_NEVER, + TINT_NEVER,TINT_NEVER}; + tint cur_delays[4] = {TINT_NEVER,TINT_NEVER, + TINT_NEVER,TINT_NEVER}; + tint last_sec = 0; + int sec_ackd = 0; + + // bind sending socket + evutil_socket_t send_sock = Datagram::Bind(Address(INADDR_ANY,send_port)); + // bind receiving socket + evutil_socket_t ack_sock = Datagram::Bind(Address(INADDR_ANY,ack_port)); + struct sockaddr_in send_to, ack_to; + memset(&send_to, 0, sizeof(struct sockaddr_in)); + memset(&ack_to, 0, sizeof(struct sockaddr_in)); + send_to.sin_family = AF_INET; + send_to.sin_port = htons(ack_port); + send_to.sin_addr.s_addr = dest_addr; + ack_to.sin_family = AF_INET; + ack_to.sin_port = htons(send_port); + ack_to.sin_addr.s_addr = dest_addr; + uint8_t* garbage = (uint8_t*) malloc(1024); + evutil_socket_t socks[2] = {send_sock,ack_sock}; + evutil_socket_t sock2read; + tint wait_time = 100*TINT_MSEC; + + while (sock2read = Datagram::Wait(2,socks,wait_time)) { + tint now = Datagram::Time(); + if (sock2read==ack_sock) { + Datagram data(ack_sock); // send an acknowledgement + data.Recv(); + int seq = data.Pull32(); + Datagram ack(ack_sock,ack_to); + ack.Push32(seq); + ack.Push64(now); + if (4+8!=ack.Send()) + fprintf(stderr,"short write\n"); + fprintf(stderr,"%lli rcvd%i\n",now/TINT_SEC,seq); + // TODO: peer cwnd !!! 
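+            // Note: ledbattest2 is apparently the multi-host variant of
+            // ledbattest above: the peer address (dest_addr) and both
+            // ports are variables rather than hardwired loopback, so the
+            // sender and the acknowledging side can run on two machines.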
+ continue; + } + if (sock2read==send_sock) { // process an acknowledgement + Datagram ack(send_sock); + ack.Recv(); + int seq = ack.Pull32(); + tint arrival_time = ack.Pull64(); + seq -= seq_off; + if (seq<0) + continue; + if (seq>=history.size()) + continue; + if (history[seq]==0) + continue; + tint send_time = history[seq]; + history[seq] = 0; + if (seq>MAX_REORDERING*2) { //loss + if (last_drop_time delay) + min_delay_bins[delay_bin] = delay; + if (delay < min_delay) + min_delay = delay; + cur_delays[(seq_off+seq)%4] = delay; + tint current_delay = TINT_NEVER; + for(int i=0; i<4; i++) + if (current_delay > cur_delays[i]) + current_delay = cur_delays[i]; // FIXME avg + tint queueing_delay = current_delay - min_delay; + // adjust cwnd + tint off_target = TARGET - queueing_delay; + //cerr<<"\t"< $STORE/leecher$i.log ) & + TOKILL="$TOKILL $!" + sleep 4; +done + +sleep 10 + +for p in $TOKILL; do + kill -9 $p +done + +for i in `seq 1 $PEERCOUNT`; do + cat $STORE/leecher$i.log | grep sent | awk '{print $5}' | \ + sort | uniq -c > $STORE/peers$i.txt + peers=`wc -l < $STORE/peers$i.txt` + if [ $peers -ne $PEERCOUNT ]; then + echo Peer $i has $peers peers + fi +done diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/test.bat tribler-6.2.0/Tribler/SwiftEngine/tests/test.bat --- tribler-6.2.0/Tribler/SwiftEngine/tests/test.bat 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/test.bat 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,7 @@ +bin64test.exe +binstest2.exe +connecttest.exe +dgramtest.exe +freemap.exe +hashtest.exe +transfertest.exe Binary files /tmp/VTBT8INcEc/tribler-6.2.0/Tribler/SwiftEngine/tests/test_file0.dat and /tmp/aqQsbkQX0D/tribler-6.2.0/Tribler/SwiftEngine/tests/test_file0.dat differ diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/test_tunnel.py tribler-6.2.0/Tribler/SwiftEngine/tests/test_tunnel.py --- tribler-6.2.0/Tribler/SwiftEngine/tests/test_tunnel.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/test_tunnel.py 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,221 @@ +# Written by Arno Bakker +# see LICENSE.txt for license information +# +# TODO: split up test_? LocalRepos create 2 + +import sys +import os +import unittest +from threading import Event, Thread, currentThread, Condition +from socket import error as socketerror +from time import sleep +from traceback import print_exc +import shutil +import random +import socket +import subprocess + +from M2Crypto import Rand + +DEBUG = False + + +NREPEATS = 10 + + +# Thread must come as first parent class! +class UDPListener(Thread): + def __init__(self,testcase,port): + Thread.__init__(self) + self.setDaemon(True) + self.testcase = testcase + self.port = port + + self.myss = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + self.myss.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + self.myss.bind(('', self.port)) + + print >>sys.stderr,"test: udp: Bound to port",self.port + + def run(self): + while True: + msg = self.myss.recv(5000) + print >>sys.stderr,"test: udp: Got",len(msg) + self.testcase.assertEqual(len(msg),4+self.testcase.randsize) + prefix = msg[0:4] + data = msg[4:] + self.testcase.assertEqual(prefix,"\xff\xff\xff\xff") + self.testcase.assertEqual(data,self.testcase.data) + self.testcase.notify() + + +class TestTunnel(unittest.TestCase): + """ + Test for swift ability to tunnel data from CMD TCP connections over UDP. 
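+
+    Framing exercised below: on the TCP command socket an outgoing tunnel
+    packet is sent as
+
+        TUNNELSEND <ip>:<port>/ffffffff <size>\r\n   followed by <size> raw bytes
+
+    which swift forwards over UDP with the 0xffffffff channel-id prefix;
+    in the reverse direction a UDP packet arriving on swift's listen port
+    is delivered on TCP as "TUNNELRECV <src> <size>\r\n" plus <size> bytes.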
+ """ + + def setUp(self): + + self.cond = Condition() + + self.peer1port = 1234 + self.peer1 = UDPListener(self,self.peer1port) + self.peer1.start() + + self.binpath = os.path.join("..","swift") + self.destdir = "." + + self.cmdport = random.randint(11001,11999) # NSSA control socket + self.httpport = random.randint(12001,12999) # content web server + self.swiftport = random.randint(13001,13999) # content web server + + # Security: only accept commands from localhost, enable HTTP gw, + # no stats/webUI web server + args=[] + args.append(str(self.binpath)) + args.append("-c") # command port + args.append("127.0.0.1:"+str(self.cmdport)) + args.append("-g") # HTTP gateway port + args.append("127.0.0.1:"+str(self.httpport)) + args.append("-l") + args.append("127.0.0.1:"+str(self.swiftport)) + args.append("-o") + args.append(str(self.destdir)) + args.append("-w") + args.append("-B") # DEBUG Hack + args.append("swiftout.log") + + print >>sys.stderr,"test: SwiftProcess: Running",args + + self.popen = subprocess.Popen(args,close_fds=True,cwd=self.destdir) + + self.udpsendport = random.randint(14001,14999) # + + sleep(2) # let server threads start + + def tearDown(self): + sleep(5) + self.popen.kill() + + def notify(self): + self.cond.acquire() + self.cond.notify() + self.cond.release() + + + def wait(self): + self.cond.acquire() + self.cond.wait() + self.cond.release() + + def test_tunnel_send(self): + self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + self.s.connect(("127.0.0.1", self.cmdport)) + + print >>sys.stderr,"test: Send over TCP, receive on UDP" + for i in range(0,NREPEATS): + self.randsize = random.randint(1,2048) + self.data = Rand.rand_bytes(self.randsize) + cmd = "TUNNELSEND 127.0.0.1:"+str(self.peer1port)+"/ffffffff "+str(self.randsize)+"\r\n"; + self.s.send(cmd+self.data) + # Read at UDPListener + self.wait() + + print >>sys.stderr,"test: Separate TUNNEL cmd from data on TCP" + for i in range(0,NREPEATS): + self.randsize = random.randint(1,2048) + self.data = Rand.rand_bytes(self.randsize) + cmd = "TUNNELSEND 127.0.0.1:"+str(self.peer1port)+"/ffffffff "+str(self.randsize)+"\r\n"; + self.s.send(cmd) + sleep(.1) + self.s.send(self.data) + # Read at UDPListener + self.wait() + + print >>sys.stderr,"test: Add command after TUNNEL" + for i in range(0,NREPEATS): + self.randsize = random.randint(1,2048) + self.data = Rand.rand_bytes(self.randsize) + cmd = "TUNNELSEND 127.0.0.1:"+str(self.peer1port)+"/ffffffff "+str(self.randsize)+"\r\n"; + cmd2 = "SETMOREINFO 979152e57a82d8781eb1f2cd0c4ab8777e431012 1\r\n" + self.s.send(cmd+self.data+cmd2) + # Read at UDPListener + self.wait() + + print >>sys.stderr,"test: Send data in parts" + for i in range(0,NREPEATS): + self.randsize = random.randint(1,2048) + self.data = Rand.rand_bytes(self.randsize) + cmd = "TUNNELSEND 127.0.0.1:"+str(self.peer1port)+"/ffffffff "+str(self.randsize)+"\r\n"; + self.s.send(cmd) + self.s.send(self.data[0:self.randsize/2]) + self.s.send(self.data[self.randsize/2:]) + # Read at UDPListener + self.wait() + + print >>sys.stderr,"test: Send UDP, receive TCP" + self.s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + totaldata = '' + for i in range(0,NREPEATS): + self.randsize = random.randint(1,2048) + self.data = Rand.rand_bytes(self.randsize) + + # Send data over UDP + print >>sys.stderr,"test: TCP: Sending swift UDP bytes",self.randsize + swiftmsg = "\xff\xff\xff\xff"+self.data + nsend = self.s2.sendto(swiftmsg,0,("127.0.0.1",self.swiftport)) + + # Receive data via TCP + print >>sys.stderr,"test: TCP: 
Recv" + crlfidx=-1 + while crlfidx == -1: + gotdata = self.s.recv(5000) + print >>sys.stderr,"test: TCP: Got cmd bytes",len(gotdata) + if len(gotdata) == 0: + break + totaldata += gotdata + crlfidx = totaldata.find('\r\n') + + cmd = totaldata[0:crlfidx] + print >>sys.stderr,"test: TCP: Got cmd",cmd + + totaldata = totaldata[crlfidx+2:] # strip cmd + + words = cmd.split() + if words[0] == "TUNNELRECV": + srcstr = words[1] + size = int(words[2]) + + while len(totaldata) < size: + gotdata = self.s.recv(5000) + print >>sys.stderr,"test: TCP: Got tunnel bytes",len(gotdata) + if len(gotdata) == 0: + break + totaldata += gotdata + + tunneldata=totaldata[0:size] + totaldata = totaldata[size:] + self.assertEqual(self.randsize,len(tunneldata)) + self.assertEqual(self.data,tunneldata) + print >>sys.stderr,"test: TCP: Done" + + else: + self.assertEqual(words[0],"TUNNELRECV") + + + + + + +def test_suite(): + suite = unittest.TestSuite() + suite.addTest(unittest.makeSuite(TestTunnel)) + + return suite + + +def main(): + unittest.main(defaultTest='test_suite',argv=[sys.argv[0]]) + +if __name__ == "__main__": + main() diff -Nru tribler-6.2.0/Tribler/SwiftEngine/tests/transfertest.cpp tribler-6.2.0/Tribler/SwiftEngine/tests/transfertest.cpp --- tribler-6.2.0/Tribler/SwiftEngine/tests/transfertest.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/tests/transfertest.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,155 @@ +/* + * transfertest.cpp + * swift + * + * Created by Victor Grishchenko on 10/7/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +//#include +//#include +#include "swift.h" +#include "compat.h" +#include + +using namespace swift; + +const char* BTF = "test_file"; + +Sha1Hash A,B,C,D,E,AB,CD,ABCD,E0,E000,ABCDE000,ROOT; + + +TEST(TransferTest,TBHeap) { + tbheap tbh; + ASSERT_TRUE(tbh.is_empty()); + tbh.push(tintbin(3,bin_t::NONE)); + tbh.push(tintbin(1,bin_t::NONE)); + ASSERT_EQ(2,tbh.size()); + tbh.push(tintbin(2,bin_t::ALL)); + ASSERT_EQ(1,tbh.pop().time); + ASSERT_EQ(bin_t::ALL,tbh.peek().bin); + ASSERT_EQ(2,tbh.pop().time); + ASSERT_EQ(3,tbh.pop().time); +} + + +TEST(TransferTest,TransferFile) { + + AB = Sha1Hash(A,B); + CD = Sha1Hash(C,D); + ABCD = Sha1Hash(AB,CD); + E0 = Sha1Hash(E,Sha1Hash::ZERO); + E000 = Sha1Hash(E0,Sha1Hash::ZERO); + ABCDE000 = Sha1Hash(ABCD,E000); + ROOT = ABCDE000; + //for (bin_t pos(3,0); !pos.is_all(); pos=pos.parent()) { + // ROOT = Sha1Hash(ROOT,Sha1Hash::ZERO); + //printf("m %lli %s\n",(uint64_t)pos.parent(),ROOT.hex().c_str()); + //} + + // now, submit a new file + + FileTransfer* seed_transfer = new FileTransfer(BTF); + MmapHashTree* seed = seed_transfer->hashtree(); + EXPECT_TRUE(A==seed->hash(bin_t(0,0))); + EXPECT_TRUE(E==seed->hash(bin_t(0,4))); + EXPECT_TRUE(ABCD==seed->hash(bin_t(2,0))); + EXPECT_TRUE(ROOT==seed->root_hash()); + EXPECT_TRUE(ABCD==seed->peak_hash(0)); + EXPECT_TRUE(E==seed->peak_hash(1)); + EXPECT_TRUE(ROOT==seed->root_hash()); + EXPECT_EQ(4100,seed->size()); + EXPECT_EQ(5,seed->size_in_chunks()); + EXPECT_EQ(4100,seed->complete()); + EXPECT_EQ(4100,seed->seq_complete()); + EXPECT_EQ(bin_t(2,0),seed->peak(0)); + + // retrieve it + unlink("copy"); + FileTransfer* leech_transfer = new FileTransfer("copy",seed->root_hash()); + MmapHashTree* leech = leech_transfer->hashtree(); + leech_transfer->picker().Randomize(0); + // transfer peak hashes + for(int i=0; ipeak_count(); i++) + leech->OfferHash(seed->peak(i),seed->peak_hash(i)); + ASSERT_EQ(5<<10,leech->size()); + 
ASSERT_EQ(5,leech->size_in_chunks()); + ASSERT_EQ(0,leech->complete()); + EXPECT_EQ(bin_t(2,0),leech->peak(0)); + // transfer data and hashes + // ABCD E000 + // AB CD E0 0 + // AAAA BBBB CCCC DDDD E 0 0 0 + // calculated leech->OfferHash(bin64_t(1,0), seed->hashes[bin64_t(1,0)]); + leech->OfferHash(bin_t(1,1), seed->hash(bin_t(1,1))); + for (int i=0; i<5; i++) { + if (i==2) { // now: stop, save, start + delete leech_transfer; + leech_transfer = new FileTransfer("copy",seed->root_hash(),false); + leech = leech_transfer->hashtree(); + leech_transfer->picker().Randomize(0); + EXPECT_EQ(2,leech->chunks_complete()); + EXPECT_EQ(bin_t(2,0),leech->peak(0)); + } + bin_t next = leech_transfer->picker().Pick(seed->ack_out(),1,TINT_NEVER); + ASSERT_NE(bin_t::NONE,next); + ASSERT_TRUE(next.base_offset()<5); + uint8_t buf[1024]; //size_t len = seed->storer->ReadData(next,&buf); + size_t len = seed->get_storage()->Read(buf,1024,next.base_offset()<<10); + bin_t sibling = next.sibling(); + if (sibling.base_offset()size_in_chunks()) + leech->OfferHash(sibling, seed->hash(sibling)); + uint8_t memo = *buf; + *buf = 'z'; + EXPECT_FALSE(leech->OfferData(next, (char*)buf, len)); + fprintf(stderr,"offer of bad data was refused, OK\n"); + *buf = memo; + EXPECT_TRUE(leech->OfferData(next, (char*)buf, len)); + } + EXPECT_EQ(4100,leech->size()); + EXPECT_EQ(5,leech->size_in_chunks()); + EXPECT_EQ(4100,leech->complete()); + EXPECT_EQ(4100,leech->seq_complete()); + +} +/* + FIXME + - always rehashes (even fresh files) + */ + +int main (int argc, char** argv) { + + unlink("test_file"); + unlink("copy"); + unlink("test_file.mhash"); + unlink("copy.mhash"); + + int f = open(BTF,O_RDWR|O_CREAT|O_TRUNC,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); + if (f < 0) + { + eprintf("Error opening %s\n",BTF); + return -1; + } + uint8_t buf[1024]; + memset(buf,'A',1024); + A = Sha1Hash(buf,1024); + write(f,buf,1024); + memset(buf,'B',1024); + B = Sha1Hash(buf,1024); + write(f,buf,1024); + memset(buf,'C',1024); + C = Sha1Hash(buf,1024); + write(f,buf,1024); + memset(buf,'D',1024); + D = Sha1Hash(buf,1024); + write(f,buf,1024); + memset(buf,'E',4); + E = Sha1Hash(buf,4); + write(f,buf,4); + close(f); + + testing::InitGoogleTest(&argc, argv); + int ret = RUN_ALL_TESTS(); + + return ret; +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/transfer.cpp tribler-6.2.0/Tribler/SwiftEngine/transfer.cpp --- tribler-6.2.0/Tribler/SwiftEngine/transfer.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/transfer.cpp 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,415 @@ +/* + * transfer.cpp + * some transfer-scope code + * + * Created by Victor Grishchenko on 10/6/09. + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
+ * + */ +#include +#include +#include +#include +#include "swift.h" + +#include "ext/seq_picker.cpp" // FIXME FIXME FIXME FIXME +#include "ext/vod_picker.cpp" + +using namespace swift; + +std::vector FileTransfer::files(20); + +#define BINHASHSIZE (sizeof(bin64_t)+sizeof(Sha1Hash)) + + +#define TRACKER_RETRY_INTERVAL_START (5*TINT_SEC) +#define TRACKER_RETRY_INTERVAL_EXP 1.1 // exponent used to increase INTERVAL_START +#define TRACKER_RETRY_INTERVAL_MAX (1800*TINT_SEC) // 30 minutes + +// FIXME: separate Bootstrap() and Download(), then Size(), Progress(), SeqProgress() + +FileTransfer::FileTransfer(std::string filename, const Sha1Hash& root_hash, bool force_check_diskvshash, bool check_netwvshash, uint32_t chunk_size, bool zerostate) : + Operational(), fd_(files.size()+1), cb_installed(0), mychannels_(), + speedzerocount_(0), tracker_(), tracker_retry_interval_(TRACKER_RETRY_INTERVAL_START), + tracker_retry_time_(NOW), zerostate_(zerostate) +{ + if (files.size()Randomize(rand()&63); + } + else + { + // ZEROHASH + hashtree_ = (HashTree *)new ZeroHashTree(storage_,root_hash,chunk_size,hash_filename,binmap_filename); + } + + init_time_ = Channel::Time(); + cur_speed_[DDIR_UPLOAD] = MovingAverageSpeed(); + cur_speed_[DDIR_DOWNLOAD] = MovingAverageSpeed(); + max_speed_[DDIR_UPLOAD] = DBL_MAX; + max_speed_[DDIR_DOWNLOAD] = DBL_MAX; + + // SAFECLOSE + evtimer_assign(&evclean_,Channel::evbase,&FileTransfer::LibeventCleanCallback,this); + evtimer_add(&evclean_,tint2tv(5*TINT_SEC)); + + UpdateOperational(); +} + + +// SAFECLOSE +void FileTransfer::LibeventCleanCallback(int fd, short event, void *arg) +{ + // Arno, 2012-02-24: Why-oh-why, update NOW + Channel::Time(); + + FileTransfer *ft = (FileTransfer *)arg; + if (ft == NULL) + return; + + // STL and MS and conditional delete from set not a happy place :-( + channels_t delset; + channels_t::iterator iter; + bool hasestablishedpeers=false; + for (iter=ft->mychannels_.begin(); iter!=ft->mychannels_.end(); iter++) + { + Channel *c = *iter; + if (c != NULL) { + if (c->IsScheduled4Close()) + delset.push_back(c); + + if (c->is_established ()) { + hasestablishedpeers = true; + //fprintf(stderr,"%s peer %s\n", ft->hashtree()->root_hash().hex().c_str(), c->peer().str() ); + } + } + } + for (iter=delset.begin(); iter!=delset.end(); iter++) + { + Channel *c = *iter; + dprintf("%s #%u clean cb close\n",tintstr(),c->id()); + c->Close(); + delete c; // Does erase from transfer() list of channels + } + + // Arno, 2012-02-24: Check for liveliness. + ft->ReConnectToTrackerIfAllowed(hasestablishedpeers); + + // Reschedule cleanup + evtimer_add(&(ft->evclean_),tint2tv(5*TINT_SEC)); +} + + +void FileTransfer::ReConnectToTrackerIfAllowed(bool hasestablishedpeers) +{ + // If I'm not connected to any + // peers, try to contact the tracker again. 
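+    // Backoff sketch: the retry interval starts at 5 s, is multiplied by
+    // 1.1 after every retry and capped at 30 min; as soon as a channel is
+    // established, the else-branch below resets it to the 5 s start value.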
+ if (!hasestablishedpeers) + { + if (NOW > tracker_retry_time_) + { + ConnectToTracker(); + + tracker_retry_interval_ *= TRACKER_RETRY_INTERVAL_EXP; + if (tracker_retry_interval_ > TRACKER_RETRY_INTERVAL_MAX) + tracker_retry_interval_ = TRACKER_RETRY_INTERVAL_MAX; + tracker_retry_time_ = NOW + tracker_retry_interval_; + } + } + else + { + tracker_retry_interval_ = TRACKER_RETRY_INTERVAL_START; + tracker_retry_time_ = NOW + tracker_retry_interval_; + } +} + + +void FileTransfer::ConnectToTracker() +{ + if (!IsOperational()) + return; + + Channel *c = NULL; + if (tracker_ != Address()) + c = new Channel(this,INVALID_SOCKET,tracker_); + else if (Channel::tracker!=Address()) + c = new Channel(this); +} + + +Channel * FileTransfer::FindChannel(const Address &addr, Channel *notc) +{ + channels_t::iterator iter; + for (iter=mychannels_.begin(); iter!=mychannels_.end(); iter++) + { + Channel *c = *iter; + if (c != NULL) { + if (c != notc && (c->peer() == addr || c->recv_peer() == addr)) { + return c; + } + } + } + return NULL; +} + + +void FileTransfer::UpdateOperational() +{ + if ((hashtree_ != NULL && !hashtree_->IsOperational()) || !storage_->IsOperational()) + SetBroken(); +} + + +void Channel::CloseTransfer (FileTransfer* trans) { + for(int i=0; itransfer_==trans) + { + //fprintf(stderr,"Channel::CloseTransfer: delete #%i\n", Channel::channels[i]->id()); + Channel::channels[i]->Close(); // ARNO + delete Channel::channels[i]; + } +} + + +void swift::AddProgressCallback (int transfer,ProgressCallback cb,uint8_t agg) { + + //fprintf(stderr,"swift::AddProgressCallback: transfer %i\n", transfer ); + + FileTransfer* trans = FileTransfer::file(transfer); + if (!trans) + return; + + //fprintf(stderr,"swift::AddProgressCallback: ft obj %p %p\n", trans, cb ); + + trans->cb_agg[trans->cb_installed] = agg; + trans->callbacks[trans->cb_installed] = cb; + trans->cb_installed++; +} + + +void swift::ExternallyRetrieved (int transfer,bin_t piece) { + FileTransfer* trans = FileTransfer::file(transfer); + if (!trans) + return; + trans->ack_out()->set(piece); // that easy +} + + +void swift::RemoveProgressCallback (int transfer, ProgressCallback cb) { + + //fprintf(stderr,"swift::RemoveProgressCallback: transfer %i\n", transfer ); + + FileTransfer* trans = FileTransfer::file(transfer); + if (!trans) + return; + + //fprintf(stderr,"swift::RemoveProgressCallback: transfer %i ft obj %p %p\n", transfer, trans, cb ); + + for(int i=0; icb_installed; i++) + if (trans->callbacks[i]==cb) + trans->callbacks[i]=trans->callbacks[--trans->cb_installed]; + + for(int i=0; icb_installed; i++) + { + fprintf(stderr,"swift::RemoveProgressCallback: transfer %i remain %p\n", transfer, trans->callbacks[i] ); + } +} + + +FileTransfer::~FileTransfer () +{ + Channel::CloseTransfer(this); + delete hashtree_; + delete storage_; + files[fd()] = NULL; + if (!IsZeroState()) + { + delete picker_; + delete availability_; + } + + // Arno, 2012-02-06: Cancel cleanup timer, otherwise chaos! + evtimer_del(&evclean_); +} + + +FileTransfer* FileTransfer::Find (const Sha1Hash& root_hash) { + for(int i=0; iroot_hash()==root_hash) + return files[i]; + return NULL; +} + + +int swift:: Find (Sha1Hash hash) { + FileTransfer* t = FileTransfer::Find(hash); + if (t) + return t->fd(); + return -1; +} + + + +bool FileTransfer::OnPexAddIn (const Address& addr) { + + //fprintf(stderr,"FileTransfer::OnPexAddIn: %s\n", addr.str() ); + // Arno: this brings safety, but prevents private swift installations. + // TODO: detect public internet. 
+ //if (addr.is_private()) + // return false; + // Gertjan fix: PEX redo + if (hs_in_.size()transfer().fd() != this->fd()) { + /* Channel was closed or is not associated with this FileTransfer (anymore). */ + hs_in_[i] = hs_in_[0]; + hs_in_.pop_front(); + i--; + continue; + } + if (!c->is_established()) + continue; + choose_from.push_back(hs_in_[i]); + } + if (choose_from.size() == 0) + return -1; + + return choose_from[rand() % choose_from.size()].toUInt(); +} + +void FileTransfer::OnRecvData(int n) +{ + // Got n ~ 32K + cur_speed_[DDIR_DOWNLOAD].AddPoint((uint64_t)n); +} + +void FileTransfer::OnSendData(int n) +{ + // Sent n ~ 1K + cur_speed_[DDIR_UPLOAD].AddPoint((uint64_t)n); +} + + +void FileTransfer::OnSendNoData() +{ + // AddPoint(0) everytime we don't AddData gives bad speed measurement + // batch 32 such events into 1. + speedzerocount_++; + if (speedzerocount_ >= 32) + { + cur_speed_[DDIR_UPLOAD].AddPoint((uint64_t)0); + speedzerocount_ = 0; + } +} + + +double FileTransfer::GetCurrentSpeed(data_direction_t ddir) +{ + return cur_speed_[ddir].GetSpeedNeutral(); +} + + +void FileTransfer::SetMaxSpeed(data_direction_t ddir, double m) +{ + max_speed_[ddir] = m; + // Arno, 2012-01-04: Be optimistic, forget history. + cur_speed_[ddir].Reset(); +} + + +double FileTransfer::GetMaxSpeed(data_direction_t ddir) +{ + return max_speed_[ddir]; +} + + +uint32_t FileTransfer::GetNumLeechers() +{ + uint32_t count = 0; + channels_t::iterator iter; + for (iter=mychannels_.begin(); iter!=mychannels_.end(); iter++) + { + Channel *c = *iter; + if (c != NULL) + if (!c->IsComplete()) // incomplete? + count++; + } + return count; +} + + +uint32_t FileTransfer::GetNumSeeders() +{ + uint32_t count = 0; + channels_t::iterator iter; + for (iter=mychannels_.begin(); iter!=mychannels_.end(); iter++) + { + Channel *c = *iter; + if (c != NULL) + if (c->IsComplete()) // complete? + count++; + } + return count; +} + + +void FileTransfer::AddPeer(Address &peer) +{ + Channel *c = new Channel(this,INVALID_SOCKET,peer); +} diff -Nru tribler-6.2.0/Tribler/SwiftEngine/win32-build.bat tribler-6.2.0/Tribler/SwiftEngine/win32-build.bat --- tribler-6.2.0/Tribler/SwiftEngine/win32-build.bat 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/win32-build.bat 2013-08-07 12:50:12.000000000 +0000 @@ -0,0 +1,2 @@ +CALL c:\python273\Scripts\scons +copy swift.exe ..\.. diff -Nru tribler-6.2.0/Tribler/SwiftEngine/zerohashtree.cpp tribler-6.2.0/Tribler/SwiftEngine/zerohashtree.cpp --- tribler-6.2.0/Tribler/SwiftEngine/zerohashtree.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/zerohashtree.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 +1,202 @@ +/* + * zerohashtree.cpp + * a hashtree interface implemented by reading hashes from a prepared .mhash + * file on disk directly, to save memory. + * + * Created by Victor Grishchenko, Arno Bakker + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. 
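+ *
+ * On-disk layout relied upon below: the .mhash file holds one 20-byte
+ * SHA1 hash per bin, at byte offset bin.toUInt() * sizeof(Sha1Hash), so
+ * hash(pos) is a plain seek+read. Only the peak hashes are verified
+ * against the root hash at startup (RecoverPeakHashes); all other hashes
+ * stay on disk, which is what keeps the in-memory state near zero.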
+ * + */ + +#include "hashtree.h" +#include "bin_utils.h" +//#include +#include "sha1.h" +#include +#include +#include +#include +#include "compat.h" +#include "swift.h" + +#include + + +using namespace swift; + + +/** 0 H a s h t r e e */ + + +ZeroHashTree::ZeroHashTree (Storage *storage, const Sha1Hash& root_hash, uint32_t chunk_size, std::string hash_filename, std::string binmap_filename) : +HashTree(), storage_(storage), root_hash_(root_hash), peak_count_(0), hash_fd_(0), + size_(0), sizec_(0), complete_(0), completec_(0), +chunk_size_(chunk_size) +{ + // MULTIFILE + storage_->SetHashTree(this); + + hash_fd_ = open_utf8(hash_filename.c_str(),ROOPENFLAGS,S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH); + if (hash_fd_<0) { + print_error("cannot open hash file"); + SetBroken(); + return; + } + + if (!RecoverPeakHashes()) + { + dprintf("%s zero hashtree could not recover peak hashes, fatal\n",tintstr() ); + SetBroken(); + } +} + +/** Precondition: root hash known */ +bool ZeroHashTree::RecoverPeakHashes() +{ + int64_t ret = storage_->GetReservedSize(); + if (ret < 0) + return false; + + uint64_t size = ret; + uint64_t sizek = (size + chunk_size_-1) / chunk_size_; + + // Arno: Calc location of peak hashes, read them from hash file and check if + // they match to root hash. If so, load hashes into memory. + bin_t peaks[64]; + int peak_count = gen_peaks(sizek,peaks); + for(int i=0; isize()) + return false; // if no valid peak hashes found + + // Arno, 2012-09-26: Reset by OfferPeakHash + complete_ = size_ = size; + + return true; +} + + +bool ZeroHashTree::OfferPeakHash (bin_t pos, const Sha1Hash& hash) { + char bin_name_buf[32]; + dprintf("%s zero hashtree offer peak %s\n",tintstr(),pos.str(bin_name_buf)); + + //assert(!size_); + if (peak_count_) { + bin_t last_peak = peaks_[peak_count_-1]; + if ( pos.layer()>=last_peak.layer() || + pos.base_offset()!=last_peak.base_offset()+last_peak.base_length() ) + peak_count_ = 0; + } + peaks_[peak_count_] = pos; + //peak_hashes_[peak_count_] = hash; + peak_count_++; + // check whether peak hash candidates add up to the root hash + Sha1Hash mustbe_root = DeriveRoot(); + if (mustbe_root!=root_hash_) + return false; + for(int i=0; i= 0) { + if (p.is_left()) { + p = p.parent(); + hash = Sha1Hash(hash,Sha1Hash::ZERO); + } else { + if (c<0 || peaks_[c]!=p.sibling()) + return Sha1Hash::ZERO; + hash = Sha1Hash(peak_hash(c),hash); + p = p.parent(); + c--; + } + } + // fprintf(stderr,"derive: root bin is %lli covers %lli\n", p.toUInt(), p.base_length() ); + return hash; +} + +const Sha1Hash& ZeroHashTree::peak_hash (int i) const { + // switch to peak_hashes_ when caching enabled + return hash(peak(i)); +} + + +const Sha1Hash& ZeroHashTree::hash (bin_t pos) const +{ + // RISKY BUSINESS + static Sha1Hash hash; + int ret = file_seek(hash_fd_,pos.toUInt()*sizeof(Sha1Hash)); + if (ret < 0) + { + print_error("reading zero hashtree"); + return Sha1Hash::ZERO; + } + ret = read(hash_fd_,&hash,sizeof(Sha1Hash)); + if (ret < 0 || ret !=sizeof(Sha1Hash)) + return Sha1Hash::ZERO; + else + { + //fprintf(stderr,"read hash %llu %s\n", pos.toUInt(), hash.hex().c_str() ); + return hash; + } +} + + +bin_t ZeroHashTree::peak_for (bin_t pos) const +{ + int pi=0; + while (pi= 0) + { + close(hash_fd_); + } +} + diff -Nru tribler-6.2.0/Tribler/SwiftEngine/zerostate.cpp tribler-6.2.0/Tribler/SwiftEngine/zerostate.cpp --- tribler-6.2.0/Tribler/SwiftEngine/zerostate.cpp 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/SwiftEngine/zerostate.cpp 2013-08-07 12:50:11.000000000 +0000 @@ -0,0 
+1,202 @@ +/* + * zerostate.cpp + * manager for starting on-demand transfers that serve content and hashes + * directly from disk (so little state in memory). Requires content (named + * as roothash-in-hex), hashes (roothash-in-hex.mhash file) and checkpoint + * (roothash-in-hex.mbinmap) to be present on disk. + * + * Created by Arno Bakker + * Copyright 2009-2012 TECHNISCHE UNIVERSITEIT DELFT. All rights reserved. + * + */ +#include "swift.h" +#include "compat.h" + +using namespace swift; + + +ZeroState * ZeroState::__singleton = NULL; + +#define CLEANUP_INTERVAL 30 // seconds + +ZeroState::ZeroState() : contentdir_("."), connect_timeout_(TINT_NEVER) +{ + if (__singleton == NULL) + { + __singleton = this; + } + + //fprintf(stderr,"ZeroState: registering clean up\n"); + evtimer_assign(&evclean_,Channel::evbase,&ZeroState::LibeventCleanCallback,this); + evtimer_add(&evclean_,tint2tv(CLEANUP_INTERVAL*TINT_SEC)); +} + + +ZeroState::~ZeroState() +{ + //fprintf(stderr,"ZeroState: deconstructor\n"); + + // Arno, 2012-02-06: Cancel cleanup timer, otherwise chaos! + evtimer_del(&evclean_); +} + + +void ZeroState::LibeventCleanCallback(int fd, short event, void *arg) +{ + //fprintf(stderr,"zero clean: enter\n"); + + // Arno, 2012-02-24: Why-oh-why, update NOW + Channel::Time(); + + ZeroState *zs = (ZeroState *)arg; + if (zs == NULL) + return; + + // See which zero state FileTransfers have no clients + std::set delset; + for(int i=0; iIsZeroState()) + continue; + + // Arno, 2012-07-20: Some weirdness on Win7 when we use GetChannels() + // all the time. Map/set iterators incompatible?! + channels_t channels = ft->GetChannels(); + if (channels.size() == 0) + { + // Ain't go no clients, cleanup transfer. + delset.insert(ft); + } + else if (zs->connect_timeout_ != TINT_NEVER) + { + // Garbage collect really slow connections, essential on Mac. + dprintf("%s zero clean %s has %d peers\n",tintstr(),ft->root_hash().hex().c_str(), ft->GetChannels().size() ); + channels_t::iterator iter2; + for (iter2=channels.begin(); iter2!=channels.end(); iter2++) { + Channel *c = *iter2; + if (c != NULL) + { + //fprintf(stderr,"%s F%u zero clean %s opentime %lld connect %lld\n",tintstr(),ft->fd(), c->peer().str(), (NOW-c->GetOpenTime()), zs->connect_timeout_ ); + // Garbage collect channels when open for long and slow upload + if ((NOW-c->GetOpenTime()) > zs->connect_timeout_) + { + //fprintf(stderr,"%s F%u zero clean %s opentime %lld ulspeed %lf\n",tintstr(),ft->fd(), c->peer().str(), (NOW-c->GetOpenTime())/TINT_SEC, ft->GetCurrentSpeed(DDIR_UPLOAD) ); + fprintf(stderr,"%s F%u zero clean %s close slow channel\n",tintstr(),ft->fd(), c->peer().str() ); + c->Close(); + delete c; + } + } + } + if (ft->GetChannels().size() == 0) + { + // Ain't go no clients left, cleanup transfer. 
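+                // Transfers are only collected into delset here; they are
+                // closed in a separate pass below, because swift::Close()
+                // mutates FileTransfer::files while this loop indexes it.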
+ delset.insert(ft); + } + } + } + + // Delete 0-state FileTransfers sans peers + std::set::iterator iter; + for (iter=delset.begin(); iter!=delset.end(); iter++) + { + FileTransfer *ft = *iter; + dprintf("%s F%u zero clean close\n",tintstr(),ft->fd() ); + //fprintf(stderr,"%s F%u zero clean close\n",tintstr(),ft->fd() ); + swift::Close(ft->fd()); + } + + // Reschedule cleanup + evtimer_add(&(zs->evclean_),tint2tv(CLEANUP_INTERVAL*TINT_SEC)); +} + + + +ZeroState * ZeroState::GetInstance() +{ + //fprintf(stderr,"ZeroState::GetInstance: %p\n", Channel::evbase ); + if (__singleton == NULL) + { + new ZeroState(); + } + return __singleton; +} + + +void ZeroState::SetContentDir(std::string contentdir) +{ + contentdir_ = contentdir; +} + +void ZeroState::SetConnectTimeout(tint timeout) +{ + //fprintf(stderr,"ZeroState: SetConnectTimeout: %lld\n", timeout/TINT_SEC ); + connect_timeout_ = timeout; +} + + +FileTransfer * ZeroState::Find(Sha1Hash &root_hash) +{ + //fprintf(stderr,"swift: zero: Got request for %s\n",root_hash.hex().c_str() ); + + //std::string file_name = "content.avi"; + std::string file_name = contentdir_+FILE_SEP+root_hash.hex(); + uint32_t chunk_size=SWIFT_DEFAULT_CHUNK_SIZE; + + dprintf("%s #0 zero find %s from %s\n",tintstr(),file_name.c_str(), getcwd_utf8().c_str() ); + + std::string reqfilename = file_name; + int ret = file_exists_utf8(reqfilename); + if (ret < 0 || ret == 0 || ret == 2) + return NULL; + reqfilename = file_name+".mbinmap"; + ret = file_exists_utf8(reqfilename); + if (ret < 0 || ret == 0 || ret == 2) + return NULL; + reqfilename = file_name+".mhash"; + ret = file_exists_utf8(reqfilename); + if (ret < 0 || ret == 0 || ret == 2) + return NULL; + + FileTransfer *ft = new FileTransfer(file_name,root_hash,false,true,chunk_size,true); + if (ft->hashtree() == NULL || !ft->hashtree()->is_complete()) + { + // Safety catch + return NULL; + } + else + return ft; +} + + +void Channel::OnDataZeroState(struct evbuffer *evb) +{ + dprintf("%s #%u zero -data, don't need it, am a seeder\n",tintstr(),id_); +} + +void Channel::OnHaveZeroState(struct evbuffer *evb) +{ + uint32_t binint = evbuffer_remove_32be(evb); + // Forget about it, i.e.. don't build peer binmap. +} + +void Channel::OnHashZeroState(struct evbuffer *evb) +{ + dprintf("%s #%u zero -hash, don't need it, am a seeder\n",tintstr(),id_); +} + +void Channel::OnPexAddZeroState(struct evbuffer *evb) +{ + uint32_t ipv4 = evbuffer_remove_32be(evb); + uint16_t port = evbuffer_remove_16be(evb); + // Forget about it +} + +void Channel::OnPexReqZeroState(struct evbuffer *evb) +{ + // Ignore it +} + diff -Nru tribler-6.2.0/Tribler/dispersy/authentication.py tribler-6.2.0/Tribler/dispersy/authentication.py --- tribler-6.2.0/Tribler/dispersy/authentication.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/authentication.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,356 @@ +""" +This module provides the Authentication policy. + +Each Dispersy message that is send has an Authentication policy associated to it. This policy +dictates how the message is authenticated, i.e. how the message is associated to the sender or +creator of this message. + +@author: Boudewijn Schoon +@organization: Technical University Delft +@contact: dispersy@frayja.com +""" + +from .meta import MetaObject + + +class Authentication(MetaObject): + + """ + The Authentication baseclass. + """ + + class Implementation(MetaObject.Implementation): + + """ + The implementation of an Authentication policy. 
+ """ + + @property + def is_signed(self): + """ + True when the message is (correctly) signed, False otherwise. + @rtype: bool + """ + raise NotImplementedError() + + def setup(self, message_impl): + if __debug__: + from .message import Message + assert isinstance(message_impl, Message.Implementation) + + def setup(self, message): + """ + Setup the Authentication meta part. + + Setup is called after the meta message is initially created. This allows us to initialize + the authentication meta part with, if required, information available to the meta message + itself. This gives us access to, among other, the community instance and the other meta + policies. + + @param message: The meta message. Note that self is message.authentication. + @type message: Message + """ + if __debug__: + from .message import Message + assert isinstance(message, Message) + + +class NoAuthentication(Authentication): + + """ + The NoAuthentication policy can be used when a message is not owned, i.e. signed, by anyone. + + A message that uses the no-authentication policy does not contain any identity information nor a + signature. This makes the message smaller --from a storage and bandwidth point of view-- and + cheaper --from a CPU point of view-- to generate. However, the message becomes less secure as + everyone can generate and modify it as they please. This makes this policy ill suited for + gossiping purposes. + """ + class Implementation(Authentication.Implementation): + + @property + def is_signed(self): + return True + + +class MemberAuthentication(Authentication): + + """ + The MemberAuthentication policy can be used when a message is owned, i.e. signed, bye one + member. + + A message that uses the member-authentication policy will add an identifier to the message that + indicates the creator of the message. This identifier can be either the public key or the sha1 + digest of the public key. The former is relatively large but uniquely identifies the member, + while the latter is relatively small but might not uniquely identify the member, although, this + will uniquely identify the member when combined with the signature. + + Furthermore, a signature over the entire message is appended to ensure that no one else can + modify the message or impersonate the creator. Using the default curve, NID-sect233k1, each + signature will be 58 bytes long. + + The member-authentication policy is used to sign a message, associating it to a specific member. + This lies at the foundation of Dispersy where specific members are permitted specific actions. + Furthermore, permissions can only be obtained by having another member, who is allowed to do so, + give you this permission in the form of a signed message. + """ + class Implementation(Authentication.Implementation): + + def __init__(self, meta, member, is_signed=False): + """ + Initialize a new MemberAuthentication.Implementation instance. + + This method should only be called through the MemberAuthentication.implement(member, + is_signed) method. + + @param meta: The MemberAuthentication instance + @type meta: MemberAuthentication + + @param member: The member that will own, i.e. sign, this message. + @type member: Member + + @param is_signed: Indicates if the message is signed or not. Should only be given when + decoding a message. 
+            @type is_signed: bool
+            """
+            if __debug__:
+                from .member import Member
+                assert isinstance(member, Member)
+                assert isinstance(is_signed, bool)
+            super(MemberAuthentication.Implementation, self).__init__(meta)
+            self._member = member
+            self._is_signed = is_signed
+
+        @property
+        def encoding(self):
+            """
+            How the member identifier is encoded (public key or sha1-digest over public key).
+            @rtype: string
+            @note: This property is obtained from the meta object.
+            """
+            return self._meta._encoding
+
+        @property
+        def member(self):
+            """
+            The owner of the message.
+            @rtype: Member
+            """
+            return self._member
+
+        @property
+        def is_signed(self):
+            return self._is_signed
+
+        def set_signature(self, signature):
+            self._is_signed = True
+
+    def __init__(self, encoding="sha1"):
+        """
+        Initialize a new MemberAuthentication instance.
+
+        Depending on the encoding parameter the member is identified in a different way. The
+        options below are available:
+
+        - sha1: where the public key of the member is made into a 20 byte sha1 digest and added to
+          the message.
+
+        - bin: where the public key of the member is added to the message, prefixed with its
+          length.
+
+        Obviously sha1 results in smaller messages, with the disadvantage that the same sha1 digest
+        could be mapped to multiple members. Retrieving the correct member from the sha1 digest is
+        handled by dispersy when an incoming message is decoded.
+
+        @param encoding: How the member identifier is encoded (bin or sha1)
+        @type encoding: string
+        """
+        assert isinstance(encoding, str)
+        assert encoding in ("bin", "sha1")
+        self._encoding = encoding
+
+    @property
+    def encoding(self):
+        """
+        How the member identifier is encoded (bin or sha1).
+        @rtype: string
+        """
+        return self._encoding
+
+
+class DoubleMemberAuthentication(Authentication):
+
+    """
+    The DoubleMemberAuthentication policy can be used when a message needs to be signed by two
+    members.
+
+    A message that uses the double-member-authentication policy is signed by two members. Similar to
+    the member-authentication policy, the message contains two identifiers where the first indicates
+    the creator and the second indicates the member that added the second signature.
+
+    Dispersy is responsible for obtaining the signatures of the different members and handles this
+    using the messages dispersy-signature-request and dispersy-signature-response, defined below.
+    Creating a double signed message is performed using the following steps: first Alice creates a
+    message (M) where M uses the double-member-authentication policy. At this point M consists of
+    the community identifier, the conversion identifier, the message identifier, the member
+    identifier for both Alice and Bob, optional resolution information, optional distribution
+    information, optional destination information, the message payload, and \0 bytes for the two
+    signatures.
+
+    Message M is then wrapped inside a dispersy-signature-request message (R) and sent to Bob. When
+    Bob receives this request he can optionally apply changes to M and add his signature. Assuming
+    that he does, the new message M2, which now includes Bob's signature while Alice's is still \0,
+    is wrapped in a dispersy-signature-response message (E) and sent back to Alice. If Alice agrees
+    with the (possible) changes in M2 she can add her own signature, and M2 is stored, updated, and
+    forwarded to other nodes in the community.
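+
+    Schematically (names illustrative):
+
+        M  = headers + payload + "\0"*L + "\0"*L     Alice creates
+        R  = dispersy-signature-request(M)           Alice -> Bob
+        M2 = M (possibly changed) + Bob's signature  Bob signs
+        E  = dispersy-signature-response(M2)         Bob -> Alice
+        M2 + Alice's signature                       stored and forwarded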
+ """ + class Implementation(Authentication.Implementation): + + def __init__(self, meta, members, signatures=[]): + """ + Initialize a new DoubleMemberAuthentication.Implementation instance. + + This method should only be called through the MemberAuthentication.implement(members, + signatures) method. + + @param members: The members that will need to sign this message, in this order. The + first member will considered the owner of the message. + @type members: list containing Member instances + + @param signatures: The available, and verified, signatures for each member. Should only + be given when decoding a message. + @type signatures: list containing strings + """ + if __debug__: + from .member import Member + assert isinstance(members, list), type(members) + assert len(members) == 2 + assert all(isinstance(member, Member) for member in members) + assert isinstance(signatures, list) + assert all(isinstance(signature, str) for signature in signatures) + assert len(signatures) == 0 or len(signatures) == 2 + super(DoubleMemberAuthentication.Implementation, self).__init__(meta) + self._members = members + self._regenerate_packet_func = None + + # will contain the list of signatures as they are received + # from dispersy-signature-response messages + if signatures: + self._signatures = signatures + else: + self._signatures = ["", ""] + + @property + def allow_signature_func(self): + """ + The function that is called whenever a dispersy-signature-request is received. + @rtype: callable function + @note: This property is obtained from the meta object. + """ + return self._meta._allow_signature_func + + @property + def encoding(self): + """ + How the member identifier is encoded (public key or sha1-digest over public key). + @rtype: string + @note: This property is obtained from the meta object. + """ + return self._meta._encoding + + @property + def member(self): + """ + The message owner, i.e. the first member in self.members. + @rtype: Member + @note: This property is obtained from the meta object. + """ + return self._members[0] + + @property + def members(self): + """ + The members that sign, of should sign, the message. + @rtype: list or tuple containing Member instances + """ + return self._members + + @property + def signed_members(self): + """ + The members and their signatures. + + The signed members can be used to see from what members we have a valid signature. A + list is given with (signature, Member) tuples, where the signature is either a verified + signature or an empty string. + + @rtype: list containing (string, Member) tules + """ + return zip(self._signatures, self._members) + + @property + def is_signed(self): + return all(self._signatures) + + def set_signature(self, member, signature): + """ + Set a verified signature for a specific member. + + This method adds a new signature. Note that the signature is assumed to be valid at + this point. When the message is encoded the new signature will be included. + + @param member: The Member that made the signature. + @type member: Member + + @param signature: The signature for this message. 
+ @type signature: string
+ """
+ # todo: verify the signature
+ assert member in self._members
+ assert member.signature_length == len(signature)
+ self._signatures[self._members.index(member)] = signature
+ self._regenerate_packet_func()
+
+ def setup(self, message_impl):
+ if __debug__:
+ from .message import Message
+ assert isinstance(message_impl, Message.Implementation)
+ self._regenerate_packet_func = message_impl.regenerate_packet
+
+ def __init__(self, allow_signature_func, encoding="sha1"):
+ """
+ Initialize a new DoubleMemberAuthentication instance.
+
+ When someone wants to create a double signed message, the Community.create_signature_request
+ method can be used. This will send dispersy-signature-request messages to all Members that
+ have not yet signed and will wait until replies are received, or a timeout occurs.
+
+ When a member receives a request to add her signature to a message, the allow_signature_func
+ function is called. When this function returns True a signature is generated and sent back
+ to the requester.
+
+ @param allow_signature_func: The function that is called when a signature request is
+ received. Must return True to add a signature, False not to.
+ @type allow_signature_func: callable function
+ """
+ assert hasattr(allow_signature_func, "__call__"), "ALLOW_SIGNATURE_FUNC must be callable"
+ assert isinstance(encoding, str)
+ assert encoding in ("bin", "sha1")
+ self._allow_signature_func = allow_signature_func
+ self._encoding = encoding
+
+ @property
+ def allow_signature_func(self):
+ """
+ The function that is called whenever a dispersy-signature-request is received.
+ @rtype: callable function
+ """
+ return self._allow_signature_func
+
+ @property
+ def encoding(self):
+ """
+ How the member identifier is encoded (bin or sha1).
+ @rtype: string
+ """
+ return self._encoding
diff -Nru tribler-6.2.0/Tribler/dispersy/bloomfilter.py tribler-6.2.0/Tribler/dispersy/bloomfilter.py
--- tribler-6.2.0/Tribler/dispersy/bloomfilter.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/bloomfilter.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,745 @@
+"""
+This module provides the bloom filter support.
+
+The Bloom filter, conceived by Burton Howard Bloom in 1970, is a space-efficient probabilistic data
+structure that is used to test whether an element is a member of a set. False positives are
+possible, but false negatives are not. Elements can be added to the set, but not removed (though
+this can be addressed with a counting filter). The more elements that are added to the set, the
+larger the probability of false positives.
+
+Initial Bloomfilter implementation based on pybloom by Jay Baird and Bob
+Ippolito. Simplified, and optimized to use just python code.
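+
+A short usage sketch (the values are illustrative; the (int, float) constructor below takes the
+filter size in bits and a target error rate):
+
+> b = BloomFilter(128 * 8, 0.01)
+> b.add("some key")
+> "some key" in b    # True: a Bloom filter has no false negatives
+> "other key" in b   # usually False: a small false positive chance remains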
+ +@author: Boudewijn Schoon +@organization: Technical University Delft +@contact: dispersy@frayja.com +""" + +import logging +logger = logging.getLogger(__name__) + +from hashlib import sha1, sha256, sha384, sha512, md5 +from math import ceil, log +from struct import Struct +from binascii import hexlify, unhexlify + +from .decorator import Constructor, constructor + +if __debug__: + from time import time + from .decorator import attach_profiler + + +class BloomFilter(Constructor): + + def _init_(self, m_size, k_functions, prefix, filter_): + assert isinstance(m_size, int) + assert 0 < m_size + assert m_size % 8 == 0, "size must be a multiple of eight (%d)" % m_size + assert isinstance(k_functions, int) + assert 0 < k_functions <= m_size + assert isinstance(prefix, str) + assert 0 <= len(prefix) < 256 + assert isinstance(filter_, (int, long)) + + self._m_size = m_size + self._k_functions = k_functions + self._prefix = prefix + self._filter = filter_ + + if __debug__: + hypothetical_error_rates = [0.4, 0.3, 0.2, 0.1, 0.01, 0.001, 0.0001] + logger.debug("m size: %d ~%d bytes", m_size, m_size / 8) + logger.debug("k functions: %d", k_functions) + logger.debug("prefix: %s", prefix.encode("HEX")) + logger.debug("filter: %s", filter_) + logger.debug("hypothetical error rate: %s", " | ".join("%.4f" % hypothetical_error_rate for hypothetical_error_rate in hypothetical_error_rates)) + logger.debug("hypothetical capacity: %s", " | ".join("%6d" % self.get_capacity(hypothetical_error_rate) for hypothetical_error_rate in hypothetical_error_rates)) + + # determine hash function + if m_size >= (1 << 31): + fmt_code, chunk_size = "Q", 8 + elif m_size >= (1 << 15): + fmt_code, chunk_size = "L", 4 + else: + fmt_code, chunk_size = "H", 2 + + # we need at most chunk_size * k bits from our hash function + bits_required = chunk_size * k_functions * 8 + assert bits_required <= 512, "Combining multiple hashfunctions is not implemented, cannot create a hash for %d bits" % bits_required + + if bits_required > 384: + hashfn = sha512 + elif bits_required > 256: + hashfn = sha384 + elif bits_required > 160: + hashfn = sha256 + elif bits_required > 128: + hashfn = sha1 + else: + hashfn = md5 + + self._fmt_unpack = Struct(">" + (fmt_code * k_functions) + ("x" * (hashfn().digest_size - bits_required / 8))).unpack + self._salt = hashfn(prefix) + + @constructor(str, int) + def _init_bytes_k_(self, bytes_, k_functions, prefix=""): + assert isinstance(bytes_, str) + assert 0 < len(bytes_) + logger.debug("constructing bloom filter based on %d bytes and k_functions %d", len(bytes_), k_functions) + + filter = long(hexlify(bytes_[::-1]), 16) + self._init_(len(bytes_) * 8, k_functions, prefix, filter) + + @constructor(int, float) + def _init_m_f(self, m_size, f_error_rate, prefix=""): + assert isinstance(m_size, int) + assert 0 < m_size + assert m_size % 8 == 0, "size must be a multiple of eight (%d)" % m_size + assert isinstance(f_error_rate, float) + assert 0 < f_error_rate < 1 + # calculate others + # self._n = int(m * ((log(2) ** 2) / abs(log(f)))) + # self._k = int(ceil(log(2) * (m / self._n))) + logger.debug("constructing bloom filter based on m_size %d bits and f_error_rate %f", m_size, f_error_rate) + self._init_(m_size, self._get_k_functions(m_size, self._get_n_capacity(m_size, f_error_rate)), prefix, 0) + + @constructor(float, int) + def _init_n_f(self, f_error_rate, n_capacity, prefix=""): + assert isinstance(f_error_rate, float) + assert 0 < f_error_rate < 1 + assert isinstance(n_capacity, int) + assert 0 < n_capacity 
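+ # standard Bloom filter sizing: for a capacity of n items and a target error rate f the
+ # optimal number of bits is m = -n * ln(f) / (ln 2)^2, with k = ln(2) * m / n hash
+ # functions (see _get_k_functions); below, m is rounded up to a whole number of bytes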
+ m_size = abs((n_capacity * log(f_error_rate)) / (log(2) ** 2)) + m_size = int(ceil(m_size / 8.0) * 8) + logger.debug("constructing bloom filter based on f_error_rate %d and %d capacity", f_error_rate, n_capacity) + self._init_(m_size, self._get_k_functions(m_size, n_capacity), prefix, 0) + + def add(self, key): + """ + Add KEY to the BloomFilter. + """ + filter_ = self._filter + h = self._salt.copy() + h.update(key) + for pos in self._fmt_unpack(h.digest()): + filter_ |= 1 << (pos % self._m_size) + self._filter = filter_ + + def add_keys(self, keys): + """ + Add a sequence of KEYS to the BloomFilter. + """ + filter_ = self._filter + salt_copy = self._salt.copy + m_size = self._m_size + fmt_unpack = self._fmt_unpack + + for key in keys: + assert isinstance(key, str) + h = salt_copy() + h.update(key) + + # 04/05/12 Boudewijn: using a list instead of a generator is significantly faster. + # while generators are more memory efficient, this list will be relatively short. + # 07/05/12 Niels: using no list at all is even more efficient/faster + for pos in fmt_unpack(h.digest()): + filter_ |= 1 << (pos % m_size) + + self._filter = filter_ + + def clear(self): + """ + Set all bits in the filter to zero. + """ + self._filter = 0 + + def __contains__(self, key): + filter_ = self._filter + m_size_ = self._m_size + + h = self._salt.copy() + h.update(key) + + for pos in self._fmt_unpack(h.digest()): + if not filter_ & (1 << (pos % m_size_)): + return False + return True + + def not_filter(self, iterator): + """ + Yields all tuples in iterator where the first element in the tuple is NOT in the bloom + filter. + """ + filter_ = self._filter + salt_copy = self._salt.copy + m_size = self._m_size + fmt_unpack = self._fmt_unpack + + for tup in iterator: + assert isinstance(tup, tuple) + assert len(tup) > 0 + assert isinstance(tup[0], str) + h = salt_copy() + h.update(tup[0]) + + # 04/05/12 Boudewijn: using a list instead of a generator is significantly faster. + # while generators are more memory efficient, this list will be relatively short. + # 07/05/12 Niels: using no list at all is even more efficient/faster + for pos in fmt_unpack(h.digest()): + if not filter_ & (1 << (pos % m_size)): + yield tup + break + + def _get_k_functions(self, m_size, n_capacity): + return int(ceil(log(2) * m_size / n_capacity)) + + def _get_n_capacity(self, m_size, f_error_rate): + return int(m_size * (log(2) ** 2 / abs(log(f_error_rate)))) + + def get_capacity(self, f_error_rate): + """ + Returns the capacity given a certain error rate. + @rtype: int + """ + assert isinstance(f_error_rate, float) + assert 0 < f_error_rate < 1 + return self._get_n_capacity(self._m_size, f_error_rate) + + def get_bits_checked(self): + return sum(1 if self._filter & (1 << i) else 0 for i in range(self._m_size)) + + @property + def size(self): + """ + The size of the bloom filter in bits (m). + @rtype: int + """ + return self._m_size + + @property + def functions(self): + """ + The number of functions used for each item (k). + """ + return self._k_functions + + @property + def prefix(self): + """ + The prefix. 
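+ The prefix is mixed into the hash salt (see _init_), so the same key maps to different
+ bits in filters that use different prefixes.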
+ @rtype: string + """ + return self._prefix + + @property + def bytes(self): + # hex should be m_size/4, hex is 16 instead of 8 -> hence half the number of "hexes" in m_size + hex = '%x' % self._filter + padding = '0' * (self._m_size /4 - len(hex)) + return unhexlify(padding + hex)[::-1] + +if __debug__: + def _test_behavior(): + length = 1024 + f_error_rate = 0.15 + m_size = length * 8 + + b = BloomFilter(m_size, f_error_rate) + assert len(b.bytes) == length, b.bytes + + for i in xrange(1000): + b.add(str(i)) + print b.size, b.get_capacity(f_error_rate), b.bytes.encode("HEX") + + d = BloomFilter(b.bytes, b.functions) + assert b.size == d.size + assert b.functions == d.functions + assert b.bytes == d.bytes + for i in xrange(1000): + assert str(i) in d + print d.size, d.get_capacity(f_error_rate), d.bytes.encode("HEX") + + def _performance_test(): + def test2(bits, count, constructor=BloomFilter): + generate_begin = time() + ok = 0 + data = [(i, sha1(str(i)).digest()) for i in xrange(count)] + create_begin = time() + bloom = constructor(0.0001, bits) + fill_begin = time() + for i, h in data: + if i % 2 == 0: + bloom.add(h) + check_begin = time() + for i, h in data: + if (h in bloom) == (i % 2 == 0): + ok += 1 + write_begin = time() + string = str(bloom) + write_end = time() + + print "generate: {generate:.1f}; create: {create:.1f}; fill: {fill:.1f}; check: {check:.1f}; write: {write:.1f}".format(generate=create_begin - generate_begin, create=fill_begin -create_begin, fill=check_begin-fill_begin, check=write_begin-check_begin, write=write_end-write_begin) + print string.encode("HEX")[:100], "{len} bytes; ({ok}/{total} ~{part:.0%})".format(len=len(string), ok=ok, total=count, part=1.0 * ok /count) + + def test(bits, count, constructor=BloomFilter): + ok = 0 + create_begin = time() + bloom = constructor(0.0001, bits) + fill_begin = time() + for i in xrange(count): + if i % 2 == 0: + bloom.add(str(i)) + check_begin = time() + for i in xrange(count): + if (str(i) in bloom) == (i % 2 == 0): + ok += 1 + write_begin = time() + string = str(bloom) + write_end = time() + + print "create: {create:.1f}; fill: {fill:.1f}; check: {check:.1f}; write: {write:.1f}".format(create=fill_begin - create_begin, fill=check_begin -fill_begin, check=write_begin-check_begin, write=write_end-write_begin) + print string.encode("HEX")[:100], "{len} bytes; ({ok}/{total} ~{part:.0%})".format(len=len(string), ok=ok, total=count, part=1.0 * ok /count) + + b = BloomFilter(100, 0.0001) + b.add("Hello") + data = str(b) + + # c = BloomFilter(data, 0) + # assert "Hello" in c + # assert not "Bye" in c + + test2(10, 10, FasterBloomFilter) + test2(10, 100, FasterBloomFilter) + test2(100, 100, FasterBloomFilter) + test2(100, 1000, FasterBloomFilter) + test2(1000, 1000, FasterBloomFilter) + test2(1000, 10000, FasterBloomFilter) + test2(10000, 10000, FasterBloomFilter) + test2(10000, 100000, FasterBloomFilter) + + test(10, 10, FasterBloomFilter) + test(10, 100, FasterBloomFilter) + test(100, 100, FasterBloomFilter) + test(100, 1000, FasterBloomFilter) + test(1000, 1000, FasterBloomFilter) + test(1000, 10000, FasterBloomFilter) + test(10000, 10000, FasterBloomFilter) + test(10000, 100000, FasterBloomFilter) + test(100000, 100000, FasterBloomFilter) + test(100000, 1000000, FasterBloomFilter) + + # test2(10, 10) + # test2(10, 100) +# generate: 0.0; create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000001d000000241400480001840684024080408012800008012424018008a0401001080280008500241000 45 bytes; (10/10 ~100%) +# generate: 0.0; create: 
0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000001d000000bfbedf7fbafff4bffff7fdb7efdffe8df74f9fff6dbffb7bed7fdaf9ae76dfefffebffdb03 45 bytes; (90/100 ~90%) + test2(100, 100) + test2(100, 1000) + +# generate: 0.0; create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000002001000002050100400001820008020388084422108050c0b41440804a003044204020082804000049820c880420 368 bytes; (100/100 ~100%) +# generate: 0.0; create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a000000200100009eedefcc77df2fff1feffe5fdeeefebffefe7fddffb77bf1cff574ddbedffafdbffffdf6fdef7f9ebf7f 368 bytes; (919/1000 ~92%) + + test2(1000, 1000) + test2(1000, 10000) + +# generate: 0.0; create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000003c0b0000a203040502001140c0000010840900420a06152400042000004222010090000022861000824010102001 3603 bytes; (1000/1000 ~100%) +# generate: 0.0; create: 0.0; fill: 0.1; check: 0.1; write: 0.0 +# 0a0000003c0b0000fad3ffeffffdfb7efb5efffcfefffceffb7fffb7df3ffff99f7bffd5fdd7f65d76e7ff2f9feffcda7fff 3603 bytes; (9279/10000 ~93%) + + test2(10000, 10000) + test2(10000, 100000) + +# generate: 0.0; create: 0.0; fill: 0.1; check: 0.1; write: 0.0 +# 0a00000054700000205286262400208041034085040005524802d8667048204220001214805020502002600408060080d009 35953 bytes; (10000/10000 ~100%) +# generate: 0.2; create: 0.0; fill: 0.7; check: 1.3; write: 0.0 +# 0a00000054700000fbfffffeffffffbbfffffff7edbfffffff7fdffff7dbffffffffffbf9efafffbfffff5dddbdfffffd7ff 35953 bytes; (92622/100000 ~93%) + + # test(10, 10) + # test(10, 100) + +# create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000001d00000081012001030240322100040400440c510024402060400100010410088c0005020a18020100 45 bytes; (10/10 ~100%) +# create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000001d000000ebfff7fbefdedfbbeffffdeee7ddbf7fb7fdff77ffff77f5d74dff9efdffffffef7f9e3f03 45 bytes; (92/100 ~92%) + + test(100, 100) + test(100, 1000) + +# create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000002001000000108007008010210218120a0802824800806a20911008424200a00a0000114000100009466002820916 368 bytes; (100/100 ~100%) +# create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a000000200100007ff7f777fabadfffd7fddfdf29dfdefe77fc7bedfffc7df37e7ff9ffbbfff57fb7feffcfdffd7ffffdbf 368 bytes; (915/1000 ~92%) + + test(1000, 1000) + test(1000, 10000) + +# create: 0.0; fill: 0.0; check: 0.0; write: 0.0 +# 0a0000003c0b00000146869100238482200450100090040002000010000006244000000c4a0141040402210802000c208010 3603 bytes; (1000/1000 ~100%) +# create: 0.0; fill: 0.1; check: 0.1; write: 0.0 +# 0a0000003c0b0000f7ffffbbdbfbefffeffff7ff5cffff27f6defffadff76ef5fbfbecffdfd7fdee77f7ffdffea07dfebbdf 3603 bytes; (9279/10000 ~93%) + + test(10000, 10000) + test(10000, 100000) + +# create: 0.0; fill: 0.1; check: 0.1; write: 0.0 +# 0a00000054700000130050403102c002410c410200a100700200cc0c0007620100142c408c4a82080082000a866d1818a211 35953 bytes; (10000/10000 ~100%) +# create: 0.0; fill: 0.8; check: 1.4; write: 0.0 +# 0a000000547000009ffefff7fdffecff7dffffbeeefffffefffdffeef9efffffebff7ffdffffbfffd7ffeeefff7ffdfbffff 35953 bytes; (92520/100000 ~93%) + + test(100000, 100000) + test(100000, 1000000) + + def _taste_test(): + def pri(f, m, invert=False): + set_bits = 0 + for c in f._bytes.tostring(): + s = "{0:08d}".format(int(bin(ord(c))[2:])) + for bit in s: + if invert: + if bit == "0": + bit = "1" + else: + bit = "0" + if bit == "1": + set_bits += 1 + print s, + percent = 100 * set_bits / f.bits + print "= {0:2d} bits or {1:2d}%:".format(set_bits, percent), m + + def gen(l, m): + if len(l) <= 10: + for 
e in l: + f = BloomFilter(NUM_SLICES, BITS_PER_SLICE) + f.add(e) + pri(f, e) + f = BloomFilter(NUM_SLICES, BITS_PER_SLICE) + map(f.add, l) + if len(l) <= 10: + pri(f, m + ": " + ", ".join(l)) + else: + pri(f, m + ": " + l[0] + "..." + l[-1]) + return f + + NUM_SLICES, BITS_PER_SLICE = 1, 25 + + # a = gen(["kittens", "puppies"], "User A") + # b = gen(["beer", "bars"], "User B") + # c = gen(["puppies", "beer"], "User C") + + # a = gen(map(str, xrange(0, 150)), "User A") + # b = gen(map(str, xrange(100, 250)), "User B") + # c = gen(map(str, xrange(200, 350)), "User C") + + a = gen(map(str, xrange(0, 10)), "User A") + b = gen(map(str, xrange(5, 15)), "User B") + c = gen(map(str, xrange(10, 20)), "User C") + + if True: + print + pri(a & b, "A AND B --> 50%") + pri(a & c, "A AND C --> 0%") + pri(b & c, "B AND C --> 50%") + if True: + print + pri(a ^ b, "A XOR B --> 50%", invert=True) + pri(a ^ c, "A XOR C --> 0%", invert=True) + pri(b ^ c, "B XOR C --> 50%", invert=True) + + def _test_documentation(): + alice = ["cake", "lemonade", "kittens", "puppies"] + for x in alice: + b = BloomFilter(1, 32) + b.add(x) + logger.debug(x) + logger.debug(b._bytes.tostring().encode("HEX")) + + bob = ["cake", "lemonade", "beer", "pubs"] + + carol = ["beer", "booze", "women", "pubs"] + for x in carol: + b = BloomFilter(1, 32) + b.add(x) + logger.debug(x) + logger.debug(b._bytes.tostring().encode("HEX")) + + a = BloomFilter(1, 32) + map(a.add, alice) + logger.debug(alice) + logger.debug(a._bytes.tostring().encode("HEX")) + + b = BloomFilter(1, 32) + map(b.add, bob) + logger.debug(bob) + logger.debug(b._bytes.tostring().encode("HEX")) + + c = BloomFilter(1, 32) + map(c.add, carol) + logger.debug(carol) + logger.debug(c._bytes.tostring().encode("HEX")) + + logger.debug("Alice bic Bob: %s", a.bic_occurrence(b)) + logger.debug("Alice bic Carol: %s", a.bic_occurrence(c)) + logger.debug("Bob bic Carol: %s", b.bic_occurrence(c)) + + def _test_occurrence(): + a = BloomFilter(1, 16) + b = BloomFilter(1, 16) + assert a.and_occurrence(b) == 0 + assert a.xor_occurrence(b) == 0 + assert a.and_occurrence(a) == 0 + assert a.xor_occurrence(a) == 0 + assert b.and_occurrence(a) == 0 + assert b.xor_occurrence(a) == 0 + assert b.and_occurrence(b) == 0 + assert b.xor_occurrence(b) == 0 + + a.add("a1") + a.add("a2") + a.add("a3") + b.add("b1") + b.add("b2") + + logger.debug(a._bytes.tostring().encode("HEX")) + logger.debug(b._bytes.tostring().encode("HEX")) + + assert a.and_occurrence(b) == 1 + assert a.xor_occurrence(b) == 3 + + def _test_save_load(): + a = BloomFilter(1000, 0.1) + data = ["%i" % i for i in xrange(1000)] + map(a.add, data) + + print a._num_slices, a._bits_per_slice + + binary = str(a) + open("bloomfilter-out.data", "w+").write(binary) + print "Write binary:", len(binary) + + try: + binary = open("bloomfilter-in.data", "r").read() + except IOError: + print "Input file unavailable" + else: + print "Read binary:", len(binary) + b = BloomFilter(binary, 0) + print b._num_slices, b._bits_per_slice + + for d in data: + assert d in b + for d in ["%i" % i for i in xrange(10000, 1100)]: + assert not d in b + + # def _test_false_positives(constructor = BloomFilter): + # for error_rate in [0.0001, 0.001, 0.01, 0.1, 0.4]: + # a = constructor(error_rate, 1024*8) + # p(a) + + # data = ["%i" % i for i in xrange(int(a.capacity))] + # map(a.add, data) + + # errors = 0 + # for i in xrange(100000): + # if "X%i" % i in a: + # errors += 1 + + # print "Errors:", errors, "/", i + 1, " ~ ", errors / (i + 1.0) + # print + + def 
_test_false_positives(constructor=BloomFilter): + for error_rate in [0.001, 0.01, 0.1, 0.5]: + begin = time() + # if constructor == BloomFilter: + # a = constructor(error_rate, 1024*8) + # capacity = a.capacity + # else: + a = constructor(1024 * 8, error_rate) + capacity = a.get_capacity(error_rate) + print "capacity:", capacity, " error-rate:", error_rate, "bits:", a.size, "bytes:", a.size / 8 + + data = ["%i" % i for i in xrange(capacity)] + map(a.add, data) + + errors = 0 + for i in xrange(200000): + if "X%i" % i in a: + errors += 1 + end = time() + + print "%.3f" % (end -begin), "Errors:", errors, "/", i + 1, " ~ ", errors / (i + 1.0) + print + + def _test_prefix_false_positives(constructor=BloomFilter): + for error_rate in [0.0001, 0.001, 0.01, 0.1, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]: + a = constructor(error_rate, 10374, prefix="A") + b = constructor(error_rate, 10374, prefix="B") + c = constructor(error_rate, 10374, prefix="C") + d = constructor(error_rate, 10374, prefix="D") + p(a) + print "Estimated errors:", a.error_rate, "->", a.error_rate * b.error_rate, "->", a.error_rate * b.error_rate * c.error_rate, "->", a.error_rate * b.error_rate * c.error_rate * d.error_rate + + # we fill each bloomfilter up to its capacity + data = ["%i" % i for i in xrange(a.capacity)] + map(a.add, data) + map(b.add, data) + map(c.add, data) + map(d.add, data) + + errors = 0 + two_errors = 0 + three_errors = 0 + four_errors = 0 + + # we check what happens if we check twice the capacity + for i in xrange(a.capacity * 2): + if "X%i" % i in a: + errors += 1 + if "X%i" % i in b: + two_errors += 1 + if "X%i" % i in c: + three_errors += 1 + if "X%i" % i in d: + four_errors += 1 + + print "Errors:", errors, "~", errors / (i + 1.0), "Two-Errors:", two_errors, "~", two_errors / (i + 1.0), "Three-Errors:", three_errors, "~", three_errors / (i + 1.0), four_errors, "~", four_errors / (i + 1.0) + print + + def _test_performance(): + from time import clock + from struct import pack + from random import random + + from .database import Database + + class TestDatabase(Database): + + def check_database(self, *args): + pass + + db = TestDatabase(u"test.db") + + DATA_COUNT = 1000 + RUN_COUNT = 1000 + + db.execute(u"CREATE TABLE data10 (id INTEGER PRIMARY KEY AUTOINCREMENT, public_key TEXT, global_time INTEGER)") + db.execute(u"CREATE TABLE data500 (id INTEGER PRIMARY KEY AUTOINCREMENT, packet TEXT)") + db.execute(u"CREATE TABLE data1500 (id INTEGER PRIMARY KEY AUTOINCREMENT, packet TEXT)") + db.executemany(u"INSERT INTO data10 (public_key, global_time) VALUES (?, ?)", ((buffer("".join(chr(int(random() * 256)) for _ in xrange(83))), int(random() * 2 ** 32)) for _ in xrange(DATA_COUNT))) + db.executemany(u"INSERT INTO data500 (packet) VALUES (?)", ((buffer("".join(chr(int(random() * 256)) for _ in xrange(500))),) for _ in xrange(DATA_COUNT))) + db.executemany(u"INSERT INTO data1500 (packet) VALUES (?)", ((buffer("".join(chr(int(random() * 256)) for _ in xrange(1500))),) for _ in xrange(DATA_COUNT))) + + b10 = BloomFilter(1000, 0.1) + for public_key, global_time in db.execute(u"SELECT public_key, global_time FROM data10"): + b10.add(str(public_key) + pack("!Q", global_time)) + + b500 = BloomFilter(1000, 0.1) + for packet, in db.execute(u"SELECT packet FROM data500"): + b500.add(str(packet)) + + b1500 = BloomFilter(1000, 0.1) + for packet, in db.execute(u"SELECT packet FROM data1500"): + b1500.add(str(packet)) + + check10 = [] + check500 = [] + check1500 = [] + + for _ in xrange(RUN_COUNT): + start = clock() + for public_key, 
global_time in db.execute(u"SELECT public_key, global_time FROM data10"): + if not str(public_key) + pack("!Q", global_time) in b10: + raise RuntimeError("err") + end = clock() + check10.append(end - start) + + start = clock() + for packet, in db.execute(u"SELECT packet FROM data500"): + if not str(packet) in b500: + raise RuntimeError("err") + end = clock() + check500.append(end - start) + + start = clock() + for packet, in db.execute(u"SELECT packet FROM data1500"): + if not str(packet) in b1500: + raise RuntimeError("err") + end = clock() + check1500.append(end - start) + + print DATA_COUNT, "*", RUN_COUNT, "=", DATA_COUNT * RUN_COUNT + print "check" + print "10 ", sum(check10) + print "500 ", sum(check500) + print "1500", sum(check1500) + + def _test_size(): + # 01/11/11 currently bloom filters get 10240 bits of space + b = BloomFilter(10240, 0.01) + b = BloomFilter(128 * 2, 0.01) + + @attach_profiler + def _test_performance(): + b = BloomFilter(1024 * 8, 0.01) + + data = [str(i) for i in xrange(b.get_capacity(0.01))] + testdata = [str(i) for i in xrange(len(data) * 2)] + b.add_keys(data) + + # for i in testdata: + # test = i in b + import sys + + t1 = time() + for i in range(1000): + b.bytes + + t2 = time() + bytes = b.bytes + for i in range(1000): + b2 = BloomFilter(bytes, b.functions) + + print >> sys.stderr, time() - t2, t2 - t1 + + def p(b, postfix=""): + # print "capacity:", b.capacity, "error-rate:", b.error_rate, "num-slices:", b.num_slices, "bits-per-slice:", b.bits_per_slice, "bits:", b.size, "bytes:", b.size / 8, "packet-bytes:", b.size / 8 + 51 + 60 + 16 + 8, postfix + print "error-rate", b.error_rate, "bits:", b.size, "bytes:", b.size / 8, "packet-bytes:", b.size / 8 + 51 + 60 + 16 + 8, postfix + + if __name__ == "__main__": + # _test_behavior() + # _performance_test() + # _taste_test() + # _test_occurrence() + # _test_documentation() + # _test_save_load() + # _test_performance() + # _test_false_positives() + # _test_prefix_false_positives() + # _test_prefix_false_positives(FasterBloomFilter) + # _test_behavior(FasterBloomFilter) + # _test_size() + _test_performance() + + # MTU = 1500 # typical MTU + # MTU = 576 # ADSL + # DISP = 51 + 60 + 16 + 8 + # BITS = 9583 # currently used bloom filter size + # BITS = (MTU - 20 - 8 - DISP) * 8 # size allowed by MTU (typical header) + # BITS = (MTU - 60 - 8 - DISP) * 8 # size allowed by MTU (max header) + + # b1 = BloomFilter(1000, 0.01) + # p(b1) + # b2 = BloomFilter(0.01, b1.size) + # p(b2) + # b3 = BloomFilter(0.001, BITS) + # p(b3) + # b3 = BloomFilter(0.01, BITS) + # p(b3) + # b3 = BloomFilter(0.1, BITS) + # p(b3) + # b4 = BloomFilter(0.5, BITS) + # p(b4) diff -Nru tribler-6.2.0/Tribler/dispersy/bootstrap.py tribler-6.2.0/Tribler/dispersy/bootstrap.py --- tribler-6.2.0/Tribler/dispersy/bootstrap.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/bootstrap.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,65 @@ +import os +from socket import gethostbyname + +from .candidate import BootstrapCandidate + +_trackers = [(u"dispersy1.tribler.org", 6421), + (u"dispersy2.tribler.org", 6422), + (u"dispersy3.tribler.org", 6423), + (u"dispersy4.tribler.org", 6424), + (u"dispersy5.tribler.org", 6425), + (u"dispersy6.tribler.org", 6426), + (u"dispersy7.tribler.org", 6427), + (u"dispersy8.tribler.org", 6428), + + (u"dispersy1b.tribler.org", 6421), + (u"dispersy2b.tribler.org", 6422), + (u"dispersy3b.tribler.org", 6423), + (u"dispersy4b.tribler.org", 6424), + (u"dispersy5b.tribler.org", 6425), + (u"dispersy6b.tribler.org", 
6426),
+ (u"dispersy7b.tribler.org", 6427),
+ (u"dispersy8b.tribler.org", 6428)]
+
+# _trackers = [(u"kayapo.tribler.org", 6431)]
+
+
+def get_bootstrap_hosts(working_directory):
+ """
+ Reads WORKING_DIRECTORY/bootstraptribler.txt and returns the hosts therein, otherwise it
+ returns _TRACKERS.
+ """
+ trackers = []
+ filename = os.path.join(working_directory, "bootstraptribler.txt")
+ try:
+ for line in open(filename, "r"):
+ line = line.strip()
+ if not line.startswith("#"):
+ host, port = line.split()
+ trackers.append((host.decode("UTF-8"), int(port)))
+ except:
+ pass
+
+ if trackers:
+ return trackers
+ else:
+ return _trackers
+
+
+def get_bootstrap_candidates(dispersy):
+ """
+ Returns a list with all known bootstrap peers.
+
+ Bootstrap peers are retrieved from WORKING_DIRECTORY/bootstraptribler.txt if it exists.
+ Otherwise the list is built from the trackers defined in _TRACKERS.
+
+ Each bootstrap peer gives either None or a Candidate. None values can be caused by
+ malfunctioning DNS.
+ """
+ def get_candidate(host, port):
+ try:
+ return BootstrapCandidate((gethostbyname(host), port), False)
+ except:
+ return None
+
+ return [get_candidate(host, port) for host, port in get_bootstrap_hosts(dispersy.working_directory)]
diff -Nru tribler-6.2.0/Tribler/dispersy/callback.py tribler-6.2.0/Tribler/dispersy/callback.py
--- tribler-6.2.0/Tribler/dispersy/callback.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/callback.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,746 @@
+"""
+A callback thread running Dispersy.
+"""
+import logging
+logger = logging.getLogger(__name__)
+
+from heapq import heappush, heappop
+from thread import get_ident
+from threading import Thread, Lock, Event
+from time import sleep, time
+from types import GeneratorType, TupleType
+from sys import exc_info
+
+from .decorator import attach_profiler
+
+if __debug__:
+ from atexit import register as atexit_register
+ from inspect import getsourcefile, getsourcelines
+
+
+class Callback(object):
+ if __debug__:
+ @staticmethod
+ def _debug_call_to_string(call):
+ if isinstance(call, TupleType):
+ call = call[0]
+
+ elif isinstance(call, GeneratorType):
+ pass
+
+ else:
+ assert call is None, type(call)
+ return str(call)
+
+ try:
+ source_file = getsourcefile(call)[-25:]
+ except (TypeError, IndexError):
+ source_file = ""
+
+ try:
+ line_number = getsourcelines(call)[1]
+ except (TypeError, IOError, IndexError):
+ line_number = -1
+
+ if source_file == "" and line_number == -1:
+ return call.__name__
+ else:
+ return "%s@%s:%d" % (call.__name__, source_file, line_number)
+
+ def __init__(self, name="Generic-Callback"):
+ assert isinstance(name, str), type(name)
+
+ # _name will be given to the thread when it is started
+ self._name = name
+
+ # _event is used to wakeup the thread when new actions arrive
+ self._event = Event()
+ self._event_set = self._event.set
+ self._event_is_set = self._event.isSet
+
+ # _lock is used to protect variables that are written to on multiple threads
+ self._lock = Lock()
+
+ # _thread_ident is used to detect when methods are called from the same thread
+ self._thread_ident = 0
+
+ # _state contains the current state of the thread.
it is protected by _lock and follows the + # following states: + # + # --> fatal-exception -> STATE_EXCEPTION + # / + # STATE_INIT -> start() -> PLEASE_RUN -> STATE_RUNNING + # \ \ + # --------------> stop() -> PLEASE_STOP -> STATE_FINISHED + # + self._state = "STATE_INIT" + logger.debug("STATE_INIT") + + # _exception is set to SystemExit, KeyboardInterrupt, GeneratorExit, or AssertionError when + # any of the registered callbacks raises any of these exceptions. in this case _state will + # be set to STATE_EXCEPTION. it is protected by _lock + self._exception = None + self._exception_traceback = None + + # _exception_handlers contains a list with callable functions of methods. all handlers are + # called whenever an exception occurs. first parameter is the exception, second parameter + # is a boolean indicating if the exception is fatal (i.e. True indicates SystemExit, + # KeyboardInterrupt, GeneratorExit, or AssertionError) + self._exception_handlers = [] + + # _id contains a running counter to ensure that every scheduled callback has its own unique + # identifier. it is protected by _lock. tasks will get u"dispersy-#" assigned + self._id = 0 + + # _requests are ordered by deadline and moved to -expired- when they need to be handled + # (deadline, priority, root_id, (call, args, kargs), callback) + self._requests = [] + + # expired requests are ordered and handled by priority + # (priority, root_id, None, (call, args, kargs), callback) + self._expired = [] + + # _requests_mirror and _expired_mirror contains the same list as _requests and _expired, + # respectively. when the callback closes _requests is set to a new empty list while + # _requests_mirror continues to point to the existing one. because all task 'deletes' are + # done on the _requests_mirror list, these actions will still be allowed while no new tasks + # will be accepted. + self._requests_mirror = self._requests + self._expired_mirror = self._expired + + if __debug__: + def must_close(callback): + assert callback.is_finished + atexit_register(must_close, self) + self._debug_call_name = None + + @property + def ident(self): + return self._thread_ident + + @property + def is_current_thread(self): + """ + Returns True when called on this Callback thread. + """ + return self._thread_ident == get_ident() + + @property + def is_running(self): + """ + Returns True when the state is STATE_RUNNING. + """ + return self._state == "STATE_RUNNING" + + @property + def is_finished(self): + """ + Returns True when the state is either STATE_FINISHED, STATE_EXCEPTION or STATE_INIT. In either case the + thread is no longer running. + """ + return self._state == "STATE_FINISHED" or self._state == "STATE_EXCEPTION" or self._state == "STATE_INIT" + + @property + def exception(self): + """ + Returns the exception that caused the thread to exit when when any of the registered callbacks + raises either SystemExit, KeyboardInterrupt, GeneratorExit, or AssertionError. + """ + return self._exception + + @property + def exception_traceback(self): + """ + Returns the traceback of the exception that caused the thread to exit when when any of the registered callbacks + """ + return self._exception_traceback + + def attach_exception_handler(self, func): + """ + Attach a new exception notifier. + + FUNC will be called whenever a registered call raises an exception. The first parameter will be the raised + exception, the second parameter will be a boolean indicating if the exception was fatal. 
FUNC should return a + boolean, if any of the attached exception handlers returns True the exception is considered fatal. + + Fatal exceptions are SystemExit, KeyboardInterrupt, GeneratorExit, or AssertionError. These exceptions will + cause the Callback thread to exit. The Callback thread will continue to function on all other exceptions. + """ + assert callable(func), "handler must be callable" + with self._lock: + assert not func in self._exception_handlers, "handler was already attached" + self._exception_handlers.append(func) + + def detach_exception_handler(self, func): + """ + Detach an existing exception notifier. + """ + assert callable(func), "handler must be callable" + with self._lock: + assert func in self._exception_handlers, "handler is not attached" + self._exception_handlers.remove(func) + + def _call_exception_handlers(self, exception, fatal): + with self._lock: + exception_handlers = self._exception_handlers[:] + for exception_handler in exception_handlers: + try: + if exception_handler(exception, fatal): + fatal = True + except Exception as exception: + logger.exception("%s", exception) + assert False, "the exception handler should not cause an exception" + return fatal + + def register(self, call, args=(), kargs=None, delay=0.0, priority=0, id_=u"", callback=None, callback_args=(), callback_kargs=None, include_id=False): + """ + Register CALL to be called. + + The call will be made with ARGS and KARGS as arguments and keyword arguments, respectively. + ARGS must be a tuple and KARGS must be a dictionary. + + CALL may return a generator object that will be repeatedly called until it raises the + StopIteration exception. The generator can yield floating point values to reschedule the + generator after that amount of seconds counted from the scheduled start of the call. It is + possible to yield other values, however, these are currently undocumented. + + The call will be made after DELAY seconds. DELAY must be a floating point value. + + When multiple calls should be, or should have been made, the PRIORITY will decide the order + at which the calls are made. Calls with a higher PRIORITY will be handled before calls with + a lower PRIORITY. PRIORITY must be an integer. The default PRIORITY is 0. The order will + be undefined for calls with the same PRIORITY. + + Each call is identified with an ID_. A unique unicode identifier, based on an auto + increment counter, will be assigned when no ID_ is specified. Specified id's must be + unicode strings. Registering multiple calls with the same ID_ is allowed, all calls will be + handled normally, however, all these calls will be removed if the associated ID_ is + unregistered. + + Once the call is performed the optional CALLBACK is registered to be called immediately. + The first parameter of the CALLBACK will always be either the returned value or the raised + exception. If CALLBACK_ARGS is given it will be appended to the first argument. If + CALLBACK_KARGS is given it is added to the callback as keyword arguments. + + When INCLUDE_ID is True then the assigned identifier is given as the first argument to CALL. + + Returns the assigned identifier. 
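+
+ Example (an illustrative sketch; print_result is an assumed name):
+ > def print_result(result):
+ >     print "my_func returned", result
+ > callback.register(my_func, callback=print_result)
+ > -> my_func() is called, after which print_result() is called with either the returned
+ value or the raised exception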
+ + Example: + > callback.register(my_func, delay=10.0) + > -> my_func() will be called after 10.0 seconds + + Example: + > def my_generator(): + > while True: + > print "foo" + > yield 1.0 + > callback.register(my_generator) + > -> my_generator will be called immediately printing "foo", subsequently "foo" will be + printed at 1.0 second intervals + """ + assert callable(call), "CALL must be callable" + assert isinstance(args, tuple), "ARGS has invalid type: %s" % type(args) + assert kargs is None or isinstance(kargs, dict), "KARGS has invalid type: %s" % type(kargs) + assert isinstance(delay, float), "DELAY has invalid type: %s" % type(delay) + assert isinstance(priority, int), "PRIORITY has invalid type: %s" % type(priority) + assert isinstance(id_, unicode), "ID_ has invalid type: %s" % type(id_) + assert callback is None or callable(callback), "CALLBACK must be None or callable" + assert isinstance(callback_args, tuple), "CALLBACK_ARGS has invalid type: %s" % type(callback_args) + assert callback_kargs is None or isinstance(callback_kargs, dict), "CALLBACK_KARGS has invalid type: %s" % type(callback_kargs) + assert isinstance(include_id, bool), "INCLUDE_ID has invalid type: %d" % type(include_id) + logger.debug("register %s after %.2f seconds", call, delay) + + with self._lock: + if not id_: + self._id += 1 + id_ = u"dispersy-#%d" % self._id + + if delay <= 0.0: + heappush(self._expired, + (-priority, + time(), + id_, + None, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + else: + heappush(self._requests, + (delay + time(), + -priority, + id_, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + + # wakeup if sleeping + if not self._event_is_set(): + self._event_set() + return id_ + + def persistent_register(self, id_, call, args=(), kargs=None, delay=0.0, priority=0, callback=None, callback_args=(), callback_kargs=None, include_id=False): + """ + Register CALL to be called only if ID_ has not already been registered. + + Aside from the different behavior of ID_, all parameters behave as in register(...). 
+ + Example: + > callback.persistent_register(u"my-id", my_func, ("first",), delay=60.0) + > callback.persistent_register(u"my-id", my_func, ("second",)) + > -> my_func("first") will be called after 60 seconds, my_func("second") will not be called at all + + Example: + > callback.register(my_func, ("first",), delay=60.0, id_=u"my-id") + > callback.persistent_register(u"my-id", my_func, ("second",)) + > -> my_func("first") will be called after 60 seconds, my_func("second") will not be called at all + """ + assert isinstance(id_, unicode), "ID_ has invalid type: %s" % type(id_) + assert id_, "ID_ may not be empty" + assert callable(call), "CALL must be callable" + assert isinstance(args, tuple), "ARGS has invalid type: %s" % type(args) + assert kargs is None or isinstance(kargs, dict), "KARGS has invalid type: %s" % type(kargs) + assert isinstance(delay, float), "DELAY has invalid type: %s" % type(delay) + assert isinstance(priority, int), "PRIORITY has invalid type: %s" % type(priority) + assert callback is None or callable(callback), "CALLBACK must be None or callable" + assert isinstance(callback_args, tuple), "CALLBACK_ARGS has invalid type: %s" % type(callback_args) + assert callback_kargs is None or isinstance(callback_kargs, dict), "CALLBACK_KARGS has invalid type: %s" % type(callback_kargs) + assert isinstance(include_id, bool), "INCLUDE_ID has invalid type: %d" % type(include_id) + logger.debug("persistent register %s after %.2f seconds", call, delay) + + with self._lock: + for tup in self._requests: + if tup[2] == id_: + break + + else: + # not found in requests + for tup in self._expired: + if tup[2] == id_: + break + + else: + # not found in expired + if delay <= 0.0: + heappush(self._expired, + (-priority, + time(), + id_, + None, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + + else: + heappush(self._requests, + (delay + time(), + -priority, + id_, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + + # wakeup if sleeping + if not self._event_is_set(): + self._event_set() + + return id_ + + def replace_register(self, id_, call, args=(), kargs=None, delay=0.0, priority=0, callback=None, callback_args=(), callback_kargs=None, include_id=False): + """ + Replace (if present) the currently registered call ID_ with CALL. + + This is a faster way to handle an unregister and register call. All parameters behave as in + register(...). 
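+
+ Example (an illustrative sketch, in the style of the persistent_register examples above):
+ > callback.register(my_func, ("first",), delay=60.0, id_=u"my-id")
+ > callback.replace_register(u"my-id", my_func, ("second",))
+ > -> my_func("first") is unregistered, my_func("second") will be called (almost) immediately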
+ """ + assert isinstance(id_, unicode), "ID_ has invalid type: %s" % type(id_) + assert id_, "ID_ may not be empty" + assert callable(call), "CALL must be callable" + assert isinstance(args, tuple), "ARGS has invalid type: %s" % type(args) + assert kargs is None or isinstance(kargs, dict), "KARGS has invalid type: %s" % type(kargs) + assert isinstance(delay, float), "DELAY has invalid type: %s" % type(delay) + assert isinstance(priority, int), "PRIORITY has invalid type: %s" % type(priority) + assert callback is None or callable(callback), "CALLBACK must be None or callable" + assert isinstance(callback_args, tuple), "CALLBACK_ARGS has invalid type: %s" % type(callback_args) + assert callback_kargs is None or isinstance(callback_kargs, dict), "CALLBACK_KARGS has invalid type: %s" % type(callback_kargs) + assert isinstance(include_id, bool), "INCLUDE_ID has invalid type: %d" % type(include_id) + logger.debug("replace register %s after %.2f seconds", call, delay) + + with self._lock: + # un-register + for index, tup in enumerate(self._requests_mirror): + if tup[2] == id_: + self._requests_mirror[index] = (tup[0], tup[1], id_, None, None) + logger.debug("in _requests: %s", id_) + + for index, tup in enumerate(self._expired_mirror): + if tup[2] == id_: + self._expired_mirror[index] = (tup[0], tup[1], id_, tup[3], None, None) + logger.debug("in _expired: %s", id_) + + # register + if delay <= 0.0: + heappush(self._expired, + (-priority, + time(), + id_, + None, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + + else: + heappush(self._requests, + (delay + time(), + -priority, + id_, + (call, args + (id_,) if include_id else args, {} if kargs is None else kargs), + None if callback is None else (callback, callback_args, {} if callback_kargs is None else callback_kargs))) + + # wakeup if sleeping + if not self._event_is_set(): + self._event_set() + return id_ + + def unregister(self, id_): + """ + Unregister a callback using the ID_ obtained from the register(...) method + """ + assert isinstance(id_, unicode), "ROOT_ID has invalid type: %s" % type(id_) + assert id_, "ID_ may not be empty" + logger.debug("unregister %s", id_) + + with self._lock: + # un-register + for index, tup in enumerate(self._requests_mirror): + if tup[2] == id_: + self._requests_mirror[index] = (tup[0], tup[1], id_, None, None) + logger.debug("in _requests: %s", id_) + + for index, tup in enumerate(self._expired_mirror): + if tup[2] == id_: + self._expired_mirror[index] = (tup[0], tup[1], id_, tup[2], None, None) + logger.debug("in _expired: %s", id_) + + def call(self, call, args=(), kargs=None, delay=0.0, priority=0, id_=u"", include_id=False, timeout=0.0, default=None): + """ + Register a blocking CALL to be made, waits for the call to finish, and returns or raises the + result. + + TIMEOUT gives the maximum amount of time to wait before un-registering CALL. No timeout + will occur when TIMEOUT is 0.0. When a timeout occurs the DEFAULT value is returned. + TIMEOUT is unused when called from the same thread. + + DEFAULT can be anything. The DEFAULT value is returned when a TIMEOUT occurs. Note: as of 24/05/13 when + DEFAULT is an Exception instance it will no longer be raised. + + For the arguments CALL, ARGS, KARGS, DELAY, PRIORITY, ID_, and INCLUDE_ID: see the register(...) method. 
+ """ + assert isinstance(timeout, float) + assert 0.0 <= timeout + assert self._thread_ident + + def callback(result): + if isinstance(result, Exception): + container[1] = exc_info() + + else: + container[0] = result + + event.set() + + # result container with [RETURN-VALUE, EXC_INFO-TUPLE] + container = [default, None] + + if self._thread_ident == get_ident(): + if kargs: + container[0] = call(*args, **kargs) + else: + container[0] = call(*args) + + if isinstance(container[0], GeneratorType): + logger.warning("using callback.call from the same thread on a generator can cause deadlocks") + for delay in container[0]: + sleep(delay) + + container[0] = default + + else: + event = Event() + + # register the call + self.register(call, args, kargs, delay, priority, id_, callback, include_id=include_id) + + # wait for call to finish + event.wait(None if timeout == 0.0 else timeout) + + if container[1]: + type_, value, traceback = container[1] + raise type_, value, traceback + + else: + return container[0] + + def start(self, wait=True): + """ + Start the asynchronous thread. + + Creates a new thread and calls the _loop() method. + """ + assert self._state == "STATE_INIT", "Already (done) running" + assert isinstance(wait, bool), "WAIT has invalid type: %s" % type(wait) + with self._lock: + self._state = "STATE_PLEASE_RUN" + logger.debug("STATE_PLEASE_RUN") + + thread = Thread(target=self._loop, name=self._name) + thread.daemon = True + thread.start() + + if wait: + # Wait until the thread has started + while self._state == "STATE_PLEASE_RUN": + sleep(0.01) + + return self.is_running + + def stop(self, timeout=10.0, exception=None): + """ + Stop the asynchronous thread. + + When called from the same thread this method will return immediately. When called from a + different thread the method will wait at most TIMEOUT seconds before returning. + + Returns True when the callback thread is finished, otherwise returns False. + """ + assert isinstance(timeout, float) + if self._state == "STATE_RUNNING": + with self._lock: + if exception: + self._exception = exception + self._exception_traceback = exc_info()[2] + self._state = "STATE_PLEASE_STOP" + logger.debug("STATE_PLEASE_STOP") + + # wakeup if sleeping + self._event.set() + + # 05/04/13 Boudewijn: we must also wait when self._state != RUNNING. This can occur when + # stop() has already been called from SELF._THREAD_IDENT, changing the state to PLEASE_STOP. + if not self._thread_ident == get_ident(): + while self._state == "STATE_PLEASE_STOP" and timeout > 0.0: + sleep(0.01) + timeout -= 0.01 + + if not self.is_finished: + logger.warning("unable to stop the callback within the allowed time") + + return self.is_finished + + def loop(self): + """ + Use the calling thread for this Callback instance. 
+ """ + + with self._lock: + self._state = "STATE_PLEASE_RUN" + logger.debug("STATE_PLEASE_RUN") + + self._loop() + + @attach_profiler + def _loop(self): + if __debug__: + time_since_expired = 0 + + # put some often used methods and object in the local namespace + actual_time = 0 + event_clear = self._event.clear + event_wait = self._event.wait + event_is_set = self._event.isSet + expired = self._expired + get_timestamp = time + lock = self._lock + requests = self._requests + + self._thread_ident = get_ident() + + with lock: + if self._state == "STATE_PLEASE_RUN": + self._state = "STATE_RUNNING" + logger.debug("STATE_RUNNING") + + while True: + actual_time = get_timestamp() + + with lock: + # check if we should continue to run + if self._state != "STATE_RUNNING": + break + + # move expired requests from REQUESTS to EXPIRED + while requests and requests[0][0] <= actual_time: + # notice that the deadline and priority entries are switched, hence, the entries in + # the EXPIRED list are ordered by priority instead of deadline + deadline, priority, root_id, call, callback = heappop(requests) + heappush(expired, (priority, deadline, root_id, None, call, callback)) + + if expired: + if __debug__ and len(expired) > 10: + if not time_since_expired: + time_since_expired = actual_time + + # we need to handle the next call in line + priority, deadline, root_id, _, call, callback = heappop(expired) + wait = 0.0 + + if __debug__: + self._debug_call_name = self._debug_call_to_string(call) + + # ignore removed tasks + if call is None: + continue + + else: + # there is nothing to handle + wait = requests[0][0] - actual_time if requests else 300.0 + if __debug__: + logger.debug("nothing to handle, wait %.2f seconds", wait) + if time_since_expired: + diff = actual_time - time_since_expired + if diff > 1.0: + logger.warning("took %.2f to process expired queue", diff) + time_since_expired = 0 + + if event_is_set(): + event_clear() + + if wait: + logger.debug("wait at most %.3fs before next call, still have %d calls in queue", wait, len(requests)) + event_wait(wait) + + else: + if __debug__: + logger.debug("---- call %s (priority:%d, id:%s)", self._debug_call_name, priority, root_id) + debug_call_start = time() + + # call can be either: + # 1. a generator + # 2. 
a (callable, args, kargs) tuple + + try: + if isinstance(call, TupleType): + # callback + result = call[0](*call[1], **call[2]) + if isinstance(result, GeneratorType): + # we only received the generator, no actual call has been made to the + # function yet, therefore we call it again immediately + call = result + + elif callback: + with lock: + heappush(expired, (priority, actual_time, root_id, None, (callback[0], (result,) + callback[1], callback[2]), None)) + + if isinstance(call, GeneratorType): + # start next generator iteration + result = call.next() + assert isinstance(result, float), [type(result), call] + assert result >= 0.0, [result, call] + with lock: + heappush(requests, (get_timestamp() + result, priority, root_id, call, callback)) + + except StopIteration: + if callback: + with lock: + heappush(expired, (priority, actual_time, root_id, None, (callback[0], (result,) + callback[1], callback[2]), None)) + + except (SystemExit, KeyboardInterrupt, GeneratorExit) as exception: + with lock: + self._state = "STATE_EXCEPTION" + self._exception = exception + self._exception_traceback = exc_info()[2] + self._call_exception_handlers(exception, True) + logger.exception("attempting proper shutdown") + + except Exception as exception: + if callback: + with lock: + heappush(expired, (priority, actual_time, root_id, None, (callback[0], (exception,) + callback[1], callback[2]), None)) + + if self._call_exception_handlers(exception, False): + # one or more of the exception handlers returned True, we will consider this + # exception to be fatal and quit + logger.error("reassessing as fatal exception, attempting proper shutdown") + with lock: + self._state = "STATE_EXCEPTION" + self._exception = exception + self._exception_traceback = exc_info()[2] + else: + logger.exception("keep running regardless of exception") + + if __debug__: + debug_call_duration = time() - debug_call_start + if debug_call_duration > 1.0: + logger.warning("%.2f call %s (priority:%d, id:%s)", debug_call_duration, self._debug_call_name, priority, root_id) + else: + logger.debug("%.2f call %s (priority:%d, id:%s)", debug_call_duration, self._debug_call_name, priority, root_id) + + with lock: + # allowing us to refuse any new tasks. 
_requests_mirror and _expired_mirror will still
+ # allow tasks to be removed
+ self._requests = []
+ self._expired = []
+
+ # call all expired tasks and send GeneratorExit exceptions to expired generators, note that
+ # new tasks will not be accepted
+ logger.debug("there are %d expired tasks", len(expired))
+ while expired:
+ _, _, _, _, call, callback = heappop(expired)
+ if isinstance(call, TupleType):
+ try:
+ result = call[0](*call[1], **call[2])
+ except Exception as exception:
+ logger.exception("%s", exception)
+ else:
+ if isinstance(result, GeneratorType):
+ # we only received the generator, no actual call has been made to the
+ # function yet, therefore we call it again immediately
+ call = result
+
+ elif callback:
+ try:
+ callback[0](result, *callback[1], **callback[2])
+ except Exception as exception:
+ logger.exception("%s", exception)
+
+ if isinstance(call, GeneratorType):
+ logger.debug("raise Shutdown in %s", call)
+ try:
+ call.close()
+ except Exception as exception:
+ logger.exception("%s", exception)
+
+ if callback:
+ logger.debug("inform callback for %s", call)
+ try:
+ callback[0](RuntimeError("Early shutdown"), *callback[1], **callback[2])
+ except Exception as exception:
+ logger.exception("%s", exception)
+
+ # send GeneratorExit exceptions to scheduled generators
+ logger.debug("there are %d scheduled tasks", len(requests))
+ while requests:
+ _, _, _, call, callback = heappop(requests)
+ if isinstance(call, GeneratorType):
+ logger.debug("raise Shutdown in %s", call)
+ try:
+ call.close()
+ except Exception as exception:
+ logger.exception("%s", exception)
+
+ if callback:
+ logger.debug("inform callback for %s", call)
+ try:
+ callback[0](RuntimeError("Early shutdown"), *callback[1], **callback[2])
+ except Exception as exception:
+ logger.exception("%s", exception)
+
+ # set state to finished
+ with lock:
+ logger.debug("STATE_FINISHED")
+ self._state = "STATE_FINISHED"
diff -Nru tribler-6.2.0/Tribler/dispersy/candidate.py tribler-6.2.0/Tribler/dispersy/candidate.py
--- tribler-6.2.0/Tribler/dispersy/candidate.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/candidate.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,320 @@
+import logging
+logger = logging.getLogger(__name__)
+
+if __debug__:
+ from .member import Member
+
+ def is_address(address):
+ assert isinstance(address, tuple), type(address)
+ assert len(address) == 2, len(address)
+ assert isinstance(address[0], str), type(address[0])
+ assert address[0], address[0]
+ assert not address[0] == "0.0.0.0", address
+ assert isinstance(address[1], int), type(address[1])
+ assert address[1] >= 0, address[1]
+ return True
+
+
+# delay and lifetime values are chosen to ensure that a candidate will not exceed 60.0 or 30.0
+# seconds. However, taking into account round trip time and processing delay we want to use
+# smaller values without conflicting with the next 5.0 second walk cycle. Hence, we pick 2.5
+# seconds below the actual cutoff point.
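+# For example: values tied to the 60.0 second cutoff become 60.0 - 2.5 = 57.5 seconds
+# (CANDIDATE_ELIGIBLE_BOOTSTRAP_DELAY, CANDIDATE_WALK_LIFETIME, and CANDIDATE_STUMBLE_LIFETIME),
+# while values tied to the 30.0 second cutoff become 30.0 - 2.5 = 27.5 seconds
+# (CANDIDATE_ELIGIBLE_DELAY and CANDIDATE_INTRO_LIFETIME).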
+CANDIDATE_ELIGIBLE_DELAY = 27.5
+CANDIDATE_ELIGIBLE_BOOTSTRAP_DELAY = 57.5
+CANDIDATE_WALK_LIFETIME = 57.5
+CANDIDATE_STUMBLE_LIFETIME = 57.5
+CANDIDATE_INTRO_LIFETIME = 27.5
+CANDIDATE_LIFETIME = 180.0
+assert isinstance(CANDIDATE_ELIGIBLE_DELAY, float)
+assert isinstance(CANDIDATE_ELIGIBLE_BOOTSTRAP_DELAY, float)
+assert isinstance(CANDIDATE_WALK_LIFETIME, float)
+assert isinstance(CANDIDATE_STUMBLE_LIFETIME, float)
+assert isinstance(CANDIDATE_INTRO_LIFETIME, float)
+assert isinstance(CANDIDATE_LIFETIME, float)
+
+
+class Candidate(object):
+
+    def __init__(self, sock_addr, tunnel):
+        assert is_address(sock_addr), sock_addr
+        assert isinstance(tunnel, bool), type(tunnel)
+        self._sock_addr = sock_addr
+        self._tunnel = tunnel
+
+    @property
+    def sock_addr(self):
+        return self._sock_addr
+
+    @sock_addr.setter
+    def sock_addr(self, sock_addr):
+        self._sock_addr = sock_addr
+
+    @property
+    def tunnel(self):
+        return self._tunnel
+
+    def get_destination_address(self, wan_address):
+        assert is_address(wan_address), wan_address
+        return self._sock_addr
+
+    def __str__(self):
+        return "{%s:%d}" % self._sock_addr
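A Candidate is deliberately minimal: a socket address plus a tunnel flag. A short usage sketch, editorial rather than part of the patch (the addresses are hypothetical):

    candidate = Candidate(("130.161.211.245", 6421), tunnel=False)
    print candidate                                    # -> {130.161.211.245:6421}
    # the base class ignores our WAN address and always returns its own sock_addr;
    # WalkCandidate below refines this with a LAN/WAN decision
    print candidate.get_destination_address(("84.32.14.1", 6421))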
+ """ + + def __init__(self, sock_addr, tunnel, lan_address, wan_address, connection_type): + assert is_address(sock_addr), sock_addr + assert isinstance(tunnel, bool), type(tunnel) + assert is_address(lan_address) + assert is_address(wan_address) + assert isinstance(connection_type, unicode) and connection_type in (u"unknown", u"public", u"symmetric-NAT") + + super(WalkCandidate, self).__init__(sock_addr, tunnel) + self._lan_address = lan_address + self._wan_address = wan_address + self._connection_type = connection_type + + # Member instances that this Candidate is associated with + self._associations = set() + + # properties to determine the category + self._timeout_adjustment = 0.0 + self._last_walk = 0.0 + self._last_stumble = 0.0 + self._last_intro = 0.0 + + # the highest global time that one of the walks reported from this Candidate + self._global_time = 0 + + if __debug__: + if not (self.sock_addr == self._lan_address or self.sock_addr == self._wan_address): + logger.error("Either LAN %s or the WAN %s should be SOCK_ADDR %s", self._lan_address, self._wan_address, self.sock_addr) + assert False + + @property + def lan_address(self): + return self._lan_address + + @property + def wan_address(self): + return self._wan_address + + @property + def connection_type(self): + return self._connection_type + + def get_destination_address(self, wan_address): + assert is_address(wan_address), wan_address + return self._lan_address if wan_address[0] == self._wan_address[0] else self._wan_address + + def merge(self, other): + assert isinstance(other, WalkCandidate), type(other) + self._associations.update(other._associations) + self._timeout_adjustment = max(self._timeout_adjustment, other._timeout_adjustment) + self._last_walk = max(self._last_walk, other._last_walk) + self._last_stumble = max(self._last_stumble, other._last_stumble) + self._last_intro = max(self._last_intro, other._last_intro) + self._global_time = max(self._global_time, other._global_time) + + @property + def global_time(self): + return self._global_time + + @global_time.setter + def global_time(self, global_time): + self._global_time = max(self._global_time, global_time) + + def associate(self, member): + """ + Once it is confirmed that the candidate is represented by a member, i.e. though a 3-way + handshake, the member can be associated with the candidate. + """ + assert isinstance(member, Member) + self._associations.add(member) + + def is_associated(self, member): + """ + Check if the member is associated with this candidate. + """ + assert isinstance(member, Member) + return member in self._associations + + def disassociate(self, member): + """ + Remove the association with a member. + """ + assert isinstance(member, Member) + self._associations.remove(member) + + def get_members(self): + """ + Returns all unique Member instances associated to this candidate. + """ + return self._associations + + def is_obsolete(self, now): + """ + Returns True if this candidate exceeded the CANDIDATE_LIFETIME. + """ + return max(self._last_walk, self._last_stumble, self._last_intro) + CANDIDATE_LIFETIME < now + + def age(self, now): + """ + Returns the time between NOW and the most recent walk or stumble. + """ + return now - max(self._last_walk, self._last_stumble) + + def inactive(self, now): + """ + Called to set this candidate to inactive. 
+ """ + self._last_walk = now - CANDIDATE_WALK_LIFETIME + self._last_stumble = now - CANDIDATE_STUMBLE_LIFETIME + self._last_intro = now - CANDIDATE_INTRO_LIFETIME + + def obsolete(self, now): + """ + Called to set this candidate to obsolete. + """ + self._last_walk = now - CANDIDATE_LIFETIME + self._last_stumble = now - CANDIDATE_LIFETIME + self._last_intro = now - CANDIDATE_LIFETIME + + def is_eligible_for_walk(self, now): + """ + Returns True when this candidate is eligible for taking a step. + + A candidate is eligible when: + - SELF is either walk, stumble, or intro; and + - the previous step is more than CANDIDATE_ELIGIBLE_DELAY ago. + """ + return (self._last_walk + CANDIDATE_ELIGIBLE_DELAY <= now and + (self._last_walk + self._timeout_adjustment <= now < self._last_walk + CANDIDATE_WALK_LIFETIME or + now < self._last_stumble + CANDIDATE_STUMBLE_LIFETIME or + now < self._last_intro + CANDIDATE_INTRO_LIFETIME)) + + @property + def last_walk(self): + return self._last_walk + + @property + def last_stumble(self): + return self._last_stumble + + @property + def last_intro(self): + return self._last_intro + + def get_category(self, now): + """ + Returns the category (u"walk", u"stumble", u"intro", or u"none") depending on the current + time NOW. + """ + if self._last_walk + self._timeout_adjustment <= now < self._last_walk + CANDIDATE_WALK_LIFETIME: + return u"walk" + + if now < self._last_stumble + CANDIDATE_STUMBLE_LIFETIME: + return u"stumble" + + if now < self._last_intro + CANDIDATE_INTRO_LIFETIME: + return u"intro" + + return u"none" + + def walk(self, now, timeout_adjustment): + """ + Called when we are about to send an introduction-request to this candidate. + """ + self._last_walk = now + self._timeout_adjustment = timeout_adjustment + + def walk_response(self): + """ + Called when we received an introduction-response to this candidate. + """ + self._timeout_adjustment = 0.0 + + def stumble(self, now): + """ + Called when we receive an introduction-request from this candidate. + """ + self._last_stumble = now + + def intro(self, now): + """ + Called when we receive an introduction-response introducing this candidate. + """ + self._last_intro = now + + def update(self, tunnel, lan_address, wan_address, connection_type): + assert isinstance(tunnel, bool) + assert lan_address == ("0.0.0.0", 0) or is_address(lan_address), lan_address + assert wan_address == ("0.0.0.0", 0) or is_address(wan_address), wan_address + assert isinstance(connection_type, unicode), type(connection_type) + assert connection_type in (u"unknown", u"public", "symmetric-NAT"), connection_type + self._tunnel = tunnel + if lan_address != ("0.0.0.0", 0): + self._lan_address = lan_address + if wan_address != ("0.0.0.0", 0): + self._wan_address = wan_address + # someone can also reset from a known connection_type to unknown (i.e. 
+        self._connection_type = u"public" if connection_type == u"unknown" and lan_address == wan_address else connection_type
+
+        if __debug__:
+            if not (self.sock_addr == self._lan_address or self.sock_addr == self._wan_address):
+                logger.error("Either LAN %s or the WAN %s should be SOCK_ADDR %s", self._lan_address, self._wan_address, self.sock_addr)
+
+    def __str__(self):
+        if self._sock_addr == self._lan_address == self._wan_address:
+            return "{%s:%d}" % self._lan_address
+        elif self._sock_addr in (self._lan_address, self._wan_address):
+            return "{%s:%d %s:%d}" % (self._lan_address[0], self._lan_address[1], self._wan_address[0], self._wan_address[1])
+        else:
+            # should not occur
+            return "{%s:%d %s:%d %s:%d}" % (self._sock_addr[0], self._sock_addr[1], self._lan_address[0], self._lan_address[1], self._wan_address[0], self._wan_address[1])
+
+
+class BootstrapCandidate(WalkCandidate):
+
+    def __init__(self, sock_addr, tunnel):
+        super(BootstrapCandidate, self).__init__(sock_addr, tunnel, sock_addr, sock_addr, connection_type=u"public")
+
+    def is_eligible_for_walk(self, now):
+        """
+        Bootstrap nodes are, by definition, always online, hence the timeouts do not apply.
+        """
+        return self._last_walk + CANDIDATE_ELIGIBLE_DELAY <= now
+
+    def is_associated(self, member):
+        """
+        Bootstrap nodes are, by definition, always associated, hence we return True.
+        """
+        return True
+
+    def __str__(self):
+        return "B!" + super(BootstrapCandidate, self).__str__()
+
+
+class LoopbackCandidate(Candidate):
+
+    def __init__(self):
+        super(LoopbackCandidate, self).__init__(("localhost", 0), False)
diff -Nru tribler-6.2.0/Tribler/dispersy/community.py tribler-6.2.0/Tribler/dispersy/community.py
--- tribler-6.2.0/Tribler/dispersy/community.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/community.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,1921 @@
+"""
+The community module provides the Community base class that should be used when a new Community is
+implemented.  It provides a simplified interface between the Dispersy instance and a running
+Community instance.
+
+@author: Boudewijn Schoon
+@organization: Technical University Delft
+@contact: dispersy@frayja.com
+"""
+
+import logging
+logger = logging.getLogger(__name__)
+
+from hashlib import sha1
+from itertools import islice
+from math import ceil
+from random import random, Random, randint, shuffle
+from time import time
+
+try:
+    # python 2.7 only...
+    from collections import OrderedDict
+except ImportError:
+    from .python27_ordereddict import OrderedDict
+
+from .bloomfilter import BloomFilter
+from .candidate import WalkCandidate, BootstrapCandidate
+from .conversion import BinaryConversion, DefaultConversion
+from .decorator import documentation, runtime_duration_warning
+from .dispersy import Dispersy
+from .distribution import SyncDistribution, GlobalTimePruning
+from .member import DummyMember, Member
+from .resolution import PublicResolution, LinearResolution, DynamicResolution
+from .statistics import CommunityStatistics
+from .timeline import Timeline
+
+
+class SyncCache(object):
+
+    def __init__(self, time_low, time_high, modulo, offset, bloom_filter):
+        self.time_low = time_low
+        self.time_high = time_high
+        self.modulo = modulo
+        self.offset = offset
+        self.bloom_filter = bloom_filter
+        self.times_used = 0
+        self.responses_received = 0
+        self.candidate = None
+
+
+class Community(object):
+    # Probability steps to get a sync skipped if the previous one was empty
+    _SKIP_CURVE_STEPS = [0, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
+    _SKIP_STEPS = len(_SKIP_CURVE_STEPS)
+
+    @classmethod
+    def get_classification(cls):
+        """
+        Describes the community type.  Should be the same across compatible versions.
+        @rtype: unicode
+        """
+        return cls.__name__.decode("UTF-8")
+
+    @classmethod
+    def create_community(cls, dispersy, my_member, *args, **kargs):
+        """
+        Create a new community owned by my_member.
+
+        Each unique community that exists out in the world is identified by a public/private key
+        pair.  When the create_community method is called such a key pair is generated.
+
+        Furthermore, my_member will be granted permission to use all the messages that the community
+        provides.
+
+        @param dispersy: The Dispersy instance where this community will attach itself to.
+        @type dispersy: Dispersy
+
+        @param my_member: The Member that will be granted Permit, Authorize, and Revoke for all
+         messages.
+        @type my_member: Member
+
+        @param args: optional arguments that are passed to the community constructor.
+        @type args: tuple
+
+        @param kargs: optional keyword arguments that are passed to the community constructor.
+        @type kargs: dictionary
+
+        @return: The created community instance.
+        @rtype: Community
+        """
+        assert isinstance(dispersy, Dispersy), type(dispersy)
+        assert isinstance(my_member, Member), type(my_member)
+        assert my_member.public_key, my_member.database_id
+        assert my_member.private_key, my_member.database_id
+        assert dispersy.callback.is_current_thread
+        master = dispersy.get_new_member(u"high")
+
+        dispersy.database.execute(u"INSERT INTO community (master, member, classification) VALUES(?, ?, ?)", (master.database_id, my_member.database_id, cls.get_classification()))
+        community_database_id = dispersy.database.last_insert_rowid
+
+        try:
+            # new community instance
+            community = cls.load_community(dispersy, master, *args, **kargs)
+            assert community.database_id == community_database_id
+
+            # create the dispersy-identity for the master member
+            message = community.create_dispersy_identity(sign_with_master=True)
+
+            # create my dispersy-identity
+            message = community.create_dispersy_identity()
+
+            # authorize MY_MEMBER
+            permission_triplets = []
+            for message in community.get_meta_messages():
+                # grant all permissions for messages that use LinearResolution or DynamicResolution
+                if isinstance(message.resolution, (LinearResolution, DynamicResolution)):
+                    for allowed in (u"authorize", u"revoke", u"permit"):
+                        permission_triplets.append((my_member, message, allowed))
+
+                    # ensure that undo_callback is available
+                    if message.undo_callback:
+                        # we do not support undo permissions for authorize, revoke, undo-own, and
+                        # undo-other (yet)
+                        if not message.name in (u"dispersy-authorize", u"dispersy-revoke", u"dispersy-undo-own", u"dispersy-undo-other"):
+                            permission_triplets.append((my_member, message, u"undo"))
+
+                # grant authorize, revoke, and undo permission for messages that use PublicResolution
+                # and SyncDistribution.  Why?  The undo permission allows nodes to revoke a specific
+                # message that was gossiped around.  The authorize permission is required to grant
+                # other nodes the undo permission.  The revoke permission is required to remove the
+                # undo permission.  The permit permission is not required as the message uses
+                # PublicResolution and is hence permitted regardless.
+                elif isinstance(message.distribution, SyncDistribution) and isinstance(message.resolution, PublicResolution):
+                    # ensure that undo_callback is available
+                    if message.undo_callback:
+                        # we do not support undo permissions for authorize, revoke, undo-own, and
+                        # undo-other (yet)
+                        if not message.name in (u"dispersy-authorize", u"dispersy-revoke", u"dispersy-undo-own", u"dispersy-undo-other"):
+                            for allowed in (u"authorize", u"revoke", u"undo"):
+                                permission_triplets.append((my_member, message, allowed))
+
+            if permission_triplets:
+                community.create_dispersy_authorize(permission_triplets, sign_with_master=True, forward=False)
+
+        except:
+            # undo the insert into the database
+            # TODO it might still leave unused database entries referring to the community id
+            dispersy.database.execute(u"DELETE FROM community WHERE id = ?", (community_database_id,))
+
+            # raise the exception because this shouldn't happen
+            raise
+
+        else:
+            return community
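A hypothetical invocation, for illustration only (MyCommunity and the surrounding wiring are assumptions; the curve name u"high" mirrors the master-member call above):

    def start():
        # create_community must run on the callback thread (see the assert above)
        my_member = dispersy.get_new_member(u"high")
        community = MyCommunity.create_community(dispersy, my_member)
        print community.cid.encode("HEX")

    dispersy.callback.register(start)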
However, it will let you + receive, send, and disseminate messages that do not require any permission to use. + + @param dispersy: The Dispersy instance where this community will attach itself to. + @type dispersy: Dispersy + + @param master: The master member that identified the community that we want to join. + @type master: DummyMember or Member + + @param my_member: The member that will be granted Permit, Authorize, and Revoke for all + messages. + @type my_member: Member + + @param args: optional argumets that are passed to the community constructor. + @type args: tuple + + @param kargs: optional keyword arguments that are passed to the community constructor. + @type args: dictionary + + @return: The created community instance. + @rtype: Community + """ + assert isinstance(dispersy, Dispersy), type(dispersy) + assert isinstance(master, DummyMember), type(master) + assert isinstance(my_member, Member), type(my_member) + assert my_member.public_key, my_member.database_id + assert my_member.private_key, my_member.database_id + assert dispersy.callback.is_current_thread + logger.debug("joining %s %s", cls.get_classification(), master.mid.encode("HEX")) + + dispersy.database.execute(u"INSERT INTO community(master, member, classification) VALUES(?, ?, ?)", + (master.database_id, my_member.database_id, cls.get_classification())) + community_database_id = dispersy.database.last_insert_rowid + + try: + # new community instance + community = cls.load_community(dispersy, master, *args, **kargs) + assert community.database_id == community_database_id + + # create my dispersy-identity + community.create_dispersy_identity() + + except: + # undo the insert info the database + # TODO it might still leave unused database entries referring to the community id + dispersy.database.execute(u"DELETE FROM community WHERE id = ?", (community_database_id,)) + + # raise the exception because this shouldn't happen + raise + + else: + return community + + @classmethod + def get_master_members(cls, dispersy): + assert isinstance(dispersy, Dispersy), type(dispersy) + assert dispersy.callback.is_current_thread + logger.debug("retrieving all master members owning %s communities", cls.get_classification()) + execute = dispersy.database.execute + return [dispersy.get_member(str(public_key)) if public_key else dispersy.get_temporary_member_from_id(str(mid)) + for mid, public_key, + in list(execute(u"SELECT m.mid, m.public_key FROM community AS c JOIN member AS m ON m.id = c.master WHERE c.classification = ?", + (cls.get_classification(),)))] + + @classmethod + def load_community(cls, dispersy, master, *args, **kargs): + """ + Load a single community. + + Will raise a ValueError exception when cid is unavailable. + + @param master: The master member that identifies the community. + @type master: DummyMember or Member + + @return: The community identified by master. + @rtype: Community + """ + assert isinstance(dispersy, Dispersy), type(dispersy) + assert isinstance(master, DummyMember), type(master) + assert dispersy.callback.is_current_thread + logger.debug("loading %s %s", cls.get_classification(), master.mid.encode("HEX")) + community = cls(dispersy, master, *args, **kargs) + + # tell dispersy that there is a new community + dispersy.attach_community(community) + + return community + + def __init__(self, dispersy, master): + """ + Initialize a community. + + Generally a new community is created using create_community. Or an existing community is + loaded using load_community. 
These two methods prepare and call this __init__ method. + + @param dispersy: The Dispersy instance where this community will attach itself to. + @type dispersy: Dispersy + + @param master: The master member that identifies the community. + @type master: DummyMember or Member + """ + assert isinstance(dispersy, Dispersy), type(dispersy) + assert isinstance(master, DummyMember), type(master) + assert dispersy.callback.is_current_thread + logger.debug("initializing: %s", self.get_classification()) + logger.debug("master member: %s %s", master.mid.encode("HEX"), "" if master.public_key else " (no public key available)") + + # Dispersy + self._dispersy = dispersy + + # _pending_callbacks contains all id's for registered calls that should be removed when the + # community is unloaded. most of the time this contains all the generators that are being + # used by the community + self._pending_callbacks = [] + + try: + self._database_id, member_public_key, self._database_version = self._dispersy.database.execute(u"SELECT community.id, member.public_key, database_version FROM community JOIN member ON member.id = community.member WHERE master = ?", (master.database_id,)).next() + except StopIteration: + raise ValueError(u"Community not found in database [" + master.mid.encode("HEX") + "]") + logger.debug("database id: %d", self._database_id) + + self._cid = master.mid + self._master_member = master + self._my_member = self._dispersy.get_member(str(member_public_key)) + logger.debug("my member: %s", self._my_member.mid.encode("HEX")) + assert self._my_member.public_key, [self._database_id, self._my_member.database_id, self._my_member.public_key] + assert self._my_member.private_key, [self._database_id, self._my_member.database_id, self._my_member.private_key] + if not self._master_member.public_key and self.dispersy_enable_candidate_walker and self.dispersy_auto_download_master_member: + self._pending_callbacks.append(self._dispersy.callback.register(self._download_master_member_identity)) + + # pre-fetch some values from the database, this allows us to only query the database once + self.meta_message_cache = {} + for database_id, name, cluster, priority, direction in self._dispersy.database.execute(u"SELECT id, name, cluster, priority, direction FROM meta_message WHERE community = ?", (self._database_id,)): + self.meta_message_cache[name] = {"id": database_id, "cluster": cluster, "priority": priority, "direction": direction} + # define all available messages + self._meta_messages = {} + self._initialize_meta_messages() + # cleanup pre-fetched values + self.meta_message_cache = None + + # define all available conversions + self._conversions = self.initiate_conversions() + if __debug__: + from .conversion import Conversion + assert len(self._conversions) > 0, len(self._conversions) + assert all(isinstance(conversion, Conversion) for conversion in self._conversions), [type(conversion) for conversion in self._conversions] + + # the global time. zero indicates no messages are available, messages must have global + # times that are higher than zero. 
+ self._global_time, = self._dispersy.database.execute(u"SELECT MAX(global_time) FROM sync WHERE community = ?", (self._database_id,)).next() + if self._global_time is None: + self._global_time = 0 + assert isinstance(self._global_time, (int, long)) + self._acceptable_global_time_cache = self._global_time + self._acceptable_global_time_deadline = 0.0 + logger.debug("global time: %d", self._global_time) + + # sync range bloom filters + self._sync_cache = None + self._dispersy_sync_skip_enable = True + self._sync_cache_skip_count = 0 + if __debug__: + b = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate) + logger.debug("sync bloom: size: %d; capacity: %d; error-rate: %f", int(ceil(b.size // 8)), b.get_capacity(self.dispersy_sync_bloom_filter_error_rate), self.dispersy_sync_bloom_filter_error_rate) + + # initial timeline. the timeline will keep track of member permissions + self._timeline = Timeline(self) + self._initialize_timeline() + + # random seed, used for sync range + self._random = Random(self._cid) + self._nrsyncpackets = 0 + + # Initialize all the candidate iterators + self._candidates = OrderedDict() + self._walked_candidates = self._iter_category(u'walk') + self._stumbled_candidates = self._iter_category(u'stumble') + self._introduced_candidates = self._iter_category(u'intro') + self._walk_candidates = self._iter_categories([u'walk', u'stumble', u'intro']) + self._bootstrap_candidates = self._iter_bootstrap() + self._pending_callbacks.append(self._dispersy.callback.register(self._periodically_cleanup_candidates)) + + # statistics... + self._statistics = CommunityStatistics(self) + + @property + def candidates(self): + """ + Dictionary containing sock_addr:Candidate pairs. + """ + return self._candidates + + @property + def statistics(self): + """ + The Statistics instance. 
+ """ + return self._statistics + + def _download_master_member_identity(self): + assert not self._master_member.public_key + logger.debug("using dummy master member") + + def on_dispersy_identity(message): + if message and not self._master_member: + logger.debug("%s received master member", self._cid.encode("HEX")) + assert message.authentication.member.mid == self._master_member.mid + self._master_member = message.authentication.member + assert self._master_member.public_key + + delay = 2.0 + while not self._master_member.public_key: + try: + public_key, = self._dispersy.database.execute(u"SELECT public_key FROM member WHERE id = ?", (self._master_member.database_id,)).next() + except StopIteration: + pass + else: + if public_key: + logger.debug("%s found master member", self._cid.encode("HEX")) + self._master_member = self._dispersy.get_member(str(public_key)) + assert self._master_member.public_key + break + + for candidate in islice(self.dispersy_yield_verified_candidates(), 1): + if candidate: + logger.debug("%s asking for master member from %s", self._cid.encode("HEX"), candidate) + self._dispersy.create_missing_identity(self, candidate, self._master_member, on_dispersy_identity) + + yield delay + delay = min(300.0, delay * 1.1) + + def _initialize_meta_messages(self): + assert isinstance(self._meta_messages, dict) + assert len(self._meta_messages) == 0 + + # obtain dispersy meta messages + for meta_message in self._dispersy.initiate_meta_messages(self): + assert meta_message.name not in self._meta_messages + self._meta_messages[meta_message.name] = meta_message + + # obtain community meta messages + for meta_message in self.initiate_meta_messages(): + assert meta_message.name not in self._meta_messages + self._meta_messages[meta_message.name] = meta_message + + if __debug__: + sync_interval = 5.0 + for meta_message in self._meta_messages.itervalues(): + if isinstance(meta_message.distribution, SyncDistribution) and meta_message.batch.max_window >= sync_interval: + logger.warning("when sync is enabled the interval should be greater than the walking frequency. otherwise you are likely to receive duplicate packets [%s]", meta_message.name) + + def _initialize_timeline(self): + mapping = {} + for name in [u"dispersy-authorize", u"dispersy-revoke", u"dispersy-dynamic-settings"]: + try: + meta = self.get_meta_message(name) + except KeyError: + logger.warning("unable to load permissions from database [could not obtain %s]", name) + else: + mapping[meta.database_id] = meta.handle_callback + + if mapping: + for packet, in list(self._dispersy.database.execute(u"SELECT packet FROM sync WHERE meta_message IN (" + ", ".join("?" for _ in mapping) + ") ORDER BY global_time, packet", + mapping.keys())): + message = self._dispersy.convert_packet_to_message(str(packet), self, verify=False) + if message: + logger.debug("processing %s", message.name) + mapping[message.database_id]([message], initializing=True) + else: + # TODO: when a packet conversion fails we must drop something, and preferably check + # all messages in the database again... + logger.error("invalid message in database [%s; %s]\n%s", self.get_classification(), self.cid.encode("HEX"), str(packet).encode("HEX")) + + @property + def dispersy_auto_load(self): + """ + When True, this community will automatically be loaded when a packet is received. 
+ """ + # currently we grab it directly from the database, should become a property for efficiency + return bool(self._dispersy.database.execute(u"SELECT auto_load FROM community WHERE master = ?", + (self._master_member.database_id,)).next()[0]) + + @dispersy_auto_load.setter + def dispersy_auto_load(self, auto_load): + """ + Sets the auto_load flag for this community. + """ + assert isinstance(auto_load, bool) + self._dispersy.database.execute(u"UPDATE community SET auto_load = ? WHERE master = ?", + (1 if auto_load else 0, self._master_member.database_id)) + + @property + def dispersy_auto_download_master_member(self): + """ + Enable or disable automatic downloading of the dispersy-identity for the master member. + """ + return True + + @property + def dispersy_enable_candidate_walker(self): + """ + Enable the candidate walker. + + When True is returned, the dispersy_take_step method will be called periodically. Otherwise + it will be ignored. The candidate walker is enabled by default. + """ + return True + + @property + def dispersy_enable_candidate_walker_responses(self): + """ + Enable the candidate walker responses. + + When True is returned, the community will be able to respond to incoming + dispersy-introduction-request and dispersy-puncture-request messages. Otherwise these + messages are left undefined and will be ignored. + + When dispersy_enable_candidate_walker returns True, this property must also return True. + The default value is to mirror self.dispersy_enable_candidate_walker. + """ + return self.dispersy_enable_candidate_walker + + @property + def dispersy_enable_bloom_filter_sync(self): + """ + Enable the bloom filter synchronisation during the neighbourhood walking. + + When True is returned, outgoing dispersy-introduction-request messages will get the chance to include a sync + bloom filter by calling Community.dispersy_claim_sync_bloom_filter(...). + + When False is returned, outgoing dispersy-introduction-request messages will never include sync bloom filters + and Community.acceptable_global_time will return 2 ** 63 - 1, ensuring that all messages that are delivered + on-demand or incidentally, will be accepted. + """ + return True + + @property + def dispersy_sync_bloom_filter_error_rate(self): + """ + The error rate that is allowed within the sync bloom filter. + + Having a higher error rate will allow for more items to be stored in the bloom filter, + allowing more items to be syced with each sync interval. Although this has the disadvantage + that more false positives will occur. + + A false positive will mean that if A sends a dispersy-sync message to B, B will incorrectly + believe that A already has certain messages. Each message has -error rate- chance of being + a false positive, and hence B will not be able to receive -error rate- percent of the + messages in the system. + + This problem can be aleviated by having multiple bloom filters for each sync range with + different prefixes. Because bloom filters with different prefixes are extremely likely (the + hash functions md5, sha1, shaxxx ensure this) to have false positives for different packets. + Hence, having two of three different bloom filters will ensure you will get all messages, + though it will take more rounds. + + @rtype: float + """ + return 0.01 + + # @property + # def dispersy_sync_bloom_filter_redundancy(self): + # """ + # The number of bloom filters, each with a unique prefix, that are used to represent one sync + # range. 
+
+    #     The effective error rate for a sync range then becomes redundancy * error_rate.
+
+    #     @rtype: int
+    #     """
+    #     return 3
+
+    @property
+    def dispersy_sync_bloom_filter_bits(self):
+        """
+        The size in bits of this bloom filter.
+
+        Note that the amount must be a multiple of eight.
+
+        The sync bloom filter is part of the dispersy-introduction-request message and hence must
+        fit within a single MTU.  There are several numbers that need to be taken into account.
+
+        - A typical MTU is 1500 bytes
+
+        - A typical IP header is 20 bytes.  However, the maximum IP header is 60 bytes (this
+          includes information for VPN, tunnels, etc.)
+
+        - The UDP header is 8 bytes
+
+        - The dispersy header is 2 + 20 + 1 + 20 + 8 = 51 bytes (version, cid, type, member,
+          global-time)
+
+        - The signature is usually 60 bytes.  This depends on what public/private key was chosen.
+          The current value is: self._my_member.signature_length
+
+        - The other payload is 6 + 6 + 6 + 1 + 2 = 21 (destination-address, source-lan-address,
+          source-wan-address, advice+connection-type+sync flags, identifier)
+
+        - The sync payload uses 8 + 8 + 4 + 4 + 1 + 4 + 1 = 30 (time low, time high, modulo, offset,
+          function, bits, prefix)
+        """
+        return (1500 - 60 - 8 - 51 - self._my_member.signature_length - 21 - 30) * 8
+
+    @property
+    def dispersy_sync_bloom_filter_strategy(self):
+        return self._dispersy_claim_sync_bloom_filter_largest
+
+    @property
+    def dispersy_sync_skip_enable(self):
+        return self._dispersy_sync_skip_enable
+
+    def dispersy_store(self, messages):
+        """
+        Called after new MESSAGES have been stored in the database.
+        """
+        if __debug__:
+            cached = 0
+
+        if self._sync_cache:
+            cache = self._sync_cache
+            for message in messages:
+                if (message.distribution.priority > 32 and
+                    cache.time_low <= message.distribution.global_time <= cache.time_high and
+                        (message.distribution.global_time + cache.offset) % cache.modulo == 0):
+
+                    if __debug__:
+                        cached += 1
+
+                    # update cached bloomfilter to avoid duplicates
+                    cache.bloom_filter.add(message.packet)
+
+                    # if this message was received from the candidate we sent the bloom filter to,
+                    # increment responses
+                    if (cache.candidate and message.candidate and cache.candidate.sock_addr == message.candidate.sock_addr):
+                        cache.responses_received += 1
+
+        if __debug__:
+            if cached:
+                logger.debug("%s %d out of %d were part of the cached bloomfilter", self._cid.encode("HEX"), cached, len(messages))
+
+    def dispersy_claim_sync_bloom_filter(self, request_cache):
+        """
+        Returns a (time_low, time_high, modulo, offset, bloom_filter) tuple or None.
+ """ + if self._sync_cache: + if self._sync_cache.responses_received > 0: + if self._dispersy_sync_skip_enable: + # We have received data, reset skip counter + self._sync_cache_skip_count = 0 + + if self._sync_cache.times_used < 100: + self._statistics.sync_bloom_reuse += 1 + self._statistics.sync_bloom_send += 1 + cache = self._sync_cache + cache.times_used += 1 + cache.responses_received = 0 + cache.candidate = request_cache.helper_candidate + + logger.debug("%s reuse #%d (packets received: %d; %s)", self._cid.encode("HEX"), cache.times_used, cache.responses_received, hex(cache.bloom_filter._filter)) + return cache.time_low, cache.time_high, cache.modulo, cache.offset, cache.bloom_filter + + elif self._sync_cache.times_used == 0: + # Still no updates, gradually increment the skipping probability one notch + logger.debug("skip:%d -> %d received:%d", self._sync_cache_skip_count, min(self._sync_cache_skip_count + 1, self._SKIP_STEPS), self._sync_cache.responses_received) + self._sync_cache_skip_count = min(self._sync_cache_skip_count + 1, self._SKIP_STEPS) + + if (self.dispersy_sync_skip_enable and + self._sync_cache_skip_count and + random() < self._SKIP_CURVE_STEPS[self._sync_cache_skip_count - 1]): + # Lets skip this one + logger.debug("skip: random() was <%f", self._SKIP_CURVE_STEPS[self._sync_cache_skip_count - 1]) + self._statistics.sync_bloom_skip += 1 + self._sync_cache = None + return None + + sync = self.dispersy_sync_bloom_filter_strategy() + if sync: + self._sync_cache = SyncCache(*sync) + self._sync_cache.candidate = request_cache.helper_candidate + self._statistics.sync_bloom_new += 1 + self._statistics.sync_bloom_send += 1 + logger.debug("%s new sync bloom (%d/%d~%.2f)", self._cid.encode("HEX"), self._statistics.sync_bloom_reuse, self._statistics.sync_bloom_new, round(1.0 * self._statistics.sync_bloom_reuse / self._statistics.sync_bloom_new, 2)) + + return sync + + @runtime_duration_warning(0.5) + def dispersy_claim_sync_bloom_filter_simple(self): + bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256))) + capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate) + global_time = self.global_time + + desired_mean = global_time / 2.0 + lambd = 1.0 / desired_mean + time_point = global_time - int(self._random.expovariate(lambd)) + if time_point < 1: + time_point = int(self._random.random() * global_time) + + time_low = time_point - capacity / 2 + time_high = time_low + capacity + + if time_low < 1: + time_low = 1 + time_high = capacity + db_high = capacity + + elif time_high > global_time - capacity: + time_low = max(1, global_time - capacity) + time_high = self.acceptable_global_time + db_high = global_time + + else: + db_high = time_high + + bloom.add_keys(str(packet) for packet, in self._dispersy.database.execute(u"SELECT sync.packet FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32 AND NOT sync.undone AND global_time BETWEEN ? AND ?", (self._database_id, time_low, db_high))) + + if __debug__: + import sys + print >> sys.stderr, "Syncing %d-%d, capacity = %d, pivot = %d" % (time_low, time_high, capacity, time_low) + return (time_low, time_high, 1, 0, bloom) + + # choose a pivot, add all items capacity to the right. 
+
+    # choose a pivot, add all items capacity to the right.  If too small, add items left of pivot
+    @runtime_duration_warning(0.5)
+    def dispersy_claim_sync_bloom_filter_right(self):
+        bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256)))
+        capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate)
+
+        desired_mean = self.global_time / 2.0
+        lambd = 1.0 / desired_mean
+        from_gbtime = self.global_time - int(self._random.expovariate(lambd))
+        if from_gbtime < 1:
+            from_gbtime = int(self._random.random() * self.global_time)
+
+        # import sys
+        # print >> sys.stderr, "Pivot", from_gbtime
+
+        mostRecent = False
+        if from_gbtime > 1:
+            # use from_gbtime - 1 to include from_gbtime
+            right, _ = self._select_and_fix(from_gbtime - 1, capacity, True)
+
+            # we did not select enough items from right side, increase nr of items for left
+            if len(right) < capacity:
+                to_select = capacity - len(right)
+                mostRecent = True
+
+                left, _ = self._select_and_fix(from_gbtime, to_select, False)
+                data = left + right
+            else:
+                data = right
+        else:
+            data, _ = self._select_and_fix(0, capacity, True)
+
+        if len(data) > 0:
+            if len(data) >= capacity:
+                time_low = min(from_gbtime, data[0][0])
+
+                if mostRecent:
+                    time_high = self.acceptable_global_time
+                else:
+                    time_high = max(from_gbtime, data[-1][0])
+
+            # we did not fill complete bloomfilter, assume we selected all items
+            else:
+                time_low = 1
+                time_high = self.acceptable_global_time
+
+            bloom.add_keys(str(packet) for _, packet in data)
+
+            # print >> sys.stderr, "Syncing %d-%d, nr_packets = %d, capacity = %d, packets %d-%d"%(time_low, time_high, len(data), capacity, data[0][0], data[-1][0])
+
+            return (time_low, time_high, 1, 0, bloom)
+        return (1, self.acceptable_global_time, 1, 0, BloomFilter(8, 0.1, prefix='\x00'))
+
+    # instead of pivot + capacity, divide capacity to have a 50/50 division around the pivot
+    @runtime_duration_warning(0.5)
+    def dispersy_claim_sync_bloom_filter_50_50(self):
+        bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256)))
+        capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate)
+
+        desired_mean = self.global_time / 2.0
+        lambd = 1.0 / desired_mean
+        from_gbtime = self.global_time - int(self._random.expovariate(lambd))
+        if from_gbtime < 1:
+            from_gbtime = int(self._random.random() * self.global_time)
+
+        # import sys
+        # print >> sys.stderr, "Pivot", from_gbtime
+
+        mostRecent = False
+        leastRecent = False
+
+        if from_gbtime > 1:
+            to_select = capacity / 2
+
+            # use from_gbtime - 1 to include from_gbtime
+            right, _ = self._select_and_fix(from_gbtime - 1, to_select, True)
+
+            # we did not select enough items from right side, increase nr of items for left
+            if len(right) < to_select:
+                to_select = capacity - len(right)
+                mostRecent = True
+
+            left, _ = self._select_and_fix(from_gbtime, to_select, False)
+
+            # we did not select enough items from left side
+            if len(left) < to_select:
+                leastRecent = True
+
+                # increase nr of items for right if we did select enough items on right side
+                if len(right) >= to_select:
+                    to_select = capacity - len(right) - len(left)
+                    right = right + self._select_and_fix(right[-1][0], to_select, True)[0]
+            data = left + right
+
+        else:
+            data, _ = self._select_and_fix(0, capacity, True)
+
+        if len(data) > 0:
+            if len(data) >= capacity:
+                if leastRecent:
+                    time_low = 1
+                else:
+                    time_low = min(from_gbtime, data[0][0])
+
+                if mostRecent:
+                    time_high = self.acceptable_global_time
+                else:
+                    time_high = max(from_gbtime, data[-1][0])
+
+            # we did not fill complete bloomfilter, assume we selected all items
+            else:
+                time_low = 1
+                time_high = self.acceptable_global_time
+
+            bloom.add_keys(str(packet) for _, packet in data)
+
+            # print >> sys.stderr, "Syncing %d-%d, nr_packets = %d, capacity = %d, packets %d-%d"%(time_low, time_high, len(data), capacity, data[0][0], data[-1][0])
+
+            return (time_low, time_high, 1, 0, bloom)
+        return (1, self.acceptable_global_time, 1, 0, BloomFilter(8, 0.1, prefix='\x00'))
+
+    # instead of pivot + capacity, compare pivot - capacity and pivot + capacity to see which
+    # globaltime range is largest
+    @runtime_duration_warning(0.5)
+    def _dispersy_claim_sync_bloom_filter_largest(self):
+        if __debug__:
+            t1 = time()
+
+        syncable_messages = u", ".join(unicode(meta.database_id) for meta in self._meta_messages.itervalues() if isinstance(meta.distribution, SyncDistribution) and meta.distribution.priority > 32)
+        if syncable_messages:
+            if __debug__:
+                t2 = time()
+
+            acceptable_global_time = self.acceptable_global_time
+            bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256)))
+            capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate)
+
+            desired_mean = self.global_time / 2.0
+            lambd = 1.0 / desired_mean
+            from_gbtime = self.global_time - int(self._random.expovariate(lambd))
+            if from_gbtime < 1:
+                from_gbtime = int(self._random.random() * self.global_time)
+
+            if from_gbtime > 1 and self._nrsyncpackets >= capacity:
+                # use from_gbtime -1/+1 to include from_gbtime
+                right, rightdata = self._select_bloomfilter_range(syncable_messages, from_gbtime - 1, capacity, True)
+
+                # if right did not get to capacity, then we have less than capacity items in the
+                # database, skip left
+                if right[2] == capacity:
+                    left, leftdata = self._select_bloomfilter_range(syncable_messages, from_gbtime + 1, capacity, False)
+                    left_range = (left[1] or self.global_time) - left[0]
+                    right_range = (right[1] or self.global_time) - right[0]
+
+                    if left_range > right_range:
+                        bloomfilter_range = left
+                        data = leftdata
+                    else:
+                        bloomfilter_range = right
+                        data = rightdata
+
+                else:
+                    bloomfilter_range = right
+                    data = rightdata
+
+                if __debug__:
+                    t3 = time()
+            else:
+                if __debug__:
+                    t3 = time()
+
+                bloomfilter_range = [1, acceptable_global_time]
+
+                data, fixed = self._select_and_fix(syncable_messages, 0, capacity, True)
+                if len(data) > 0 and fixed:
+                    bloomfilter_range[1] = data[-1][0]
+                    self._nrsyncpackets = capacity + 1
+
+            if __debug__:
+                t4 = time()
+
+            if len(data) > 0:
+                bloom.add_keys(str(packet) for _, packet in data)
+
+                if __debug__:
+                    logger.debug("%s syncing %d-%d, nr_packets = %d, capacity = %d, packets %d-%d, pivot = %d",
+                                 self.cid.encode("HEX"), bloomfilter_range[0], bloomfilter_range[1], len(data), capacity, data[0][0], data[-1][0], from_gbtime)
+                    logger.debug("%s took %f (fakejoin %f, rangeselect %f, dataselect %f, bloomfill %f)",
+                                 self.cid.encode("HEX"), time() - t1, t2 - t1, t3 - t2, t4 - t3, time() - t4)
+
+                return (min(bloomfilter_range[0], acceptable_global_time), min(bloomfilter_range[1], acceptable_global_time), 1, 0, bloom)
+
+            if __debug__:
+                logger.debug("%s no messages to sync", self.cid.encode("HEX"))
+
+        elif __debug__:
+            logger.debug("%s NOT syncing no syncable messages", self.cid.encode("HEX"))
+        return (1, self.acceptable_global_time, 1, 0, BloomFilter(8, 0.1, prefix='\x00'))
+
+    # instead of a pivot-based range, advertise every packet whose global time falls in a randomly
+    # chosen residue class (modulo/offset)
+    @runtime_duration_warning(0.5)
+    def _dispersy_claim_sync_bloom_filter_modulo(self):
+        syncable_messages = u", ".join(unicode(meta.database_id) for meta in self._meta_messages.itervalues() if isinstance(meta.distribution, SyncDistribution) and meta.distribution.priority > 32)
+        if syncable_messages:
+            bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256)))
+            capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate)
+
+            self._nrsyncpackets = list(self._dispersy.database.execute(u"SELECT count(*) FROM sync WHERE meta_message IN (%s) AND undone = 0 LIMIT 1" % (syncable_messages)))[0][0]
+            modulo = int(ceil(self._nrsyncpackets / float(capacity)))
+            if modulo > 1:
+                offset = randint(0, modulo - 1)
+                packets = list(str(packet) for packet, in self._dispersy.database.execute(u"SELECT sync.packet FROM sync WHERE meta_message IN (%s) AND sync.undone = 0 AND (sync.global_time + ?) %% ? = 0" % syncable_messages, (offset, modulo)))
+            else:
+                offset = 0
+                modulo = 1
+                packets = list(str(packet) for packet, in self._dispersy.database.execute(u"SELECT sync.packet FROM sync WHERE meta_message IN (%s) AND sync.undone = 0" % syncable_messages))
+
+            bloom.add_keys(packets)
+
+            logger.debug("%s syncing %d-%d, nr_packets = %d, capacity = %d, totalnr = %d",
+                         self.cid.encode("HEX"), modulo, offset, self._nrsyncpackets, capacity, self._nrsyncpackets)
+
+            return (1, self.acceptable_global_time, modulo, offset, bloom)
+
+        else:
+            logger.debug("%s NOT syncing no syncable messages", self.cid.encode("HEX"))
+        return (1, self.acceptable_global_time, 1, 0, BloomFilter(8, 0.1, prefix='\x00'))
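The modulo strategy partitions the whole database rather than picking a contiguous range: with N syncable packets and a bloom filter capacity C, it advertises one residue class of global times. A worked example with hypothetical numbers:

    from math import ceil
    from random import randint

    nrsyncpackets = 2500                                   # syncable packets in the database
    capacity = 1000                                        # bloom filter capacity
    modulo = int(ceil(nrsyncpackets / float(capacity)))    # -> 3
    offset = randint(0, modulo - 1)                        # e.g. 1
    # a packet with global time g is advertised when (g + offset) % modulo == 0,
    # i.e. roughly a third of the packets, which fits the filter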
+
+    def _select_and_fix(self, syncable_messages, global_time, to_select, higher=True):
+        assert isinstance(syncable_messages, unicode)
+        if higher:
+            data = list(self._dispersy.database.execute(u"SELECT global_time, packet FROM sync WHERE meta_message IN (%s) AND undone = 0 AND global_time > ? ORDER BY global_time ASC LIMIT ?" % (syncable_messages),
+                                                        (global_time, to_select + 1)))
+        else:
+            data = list(self._dispersy.database.execute(u"SELECT global_time, packet FROM sync WHERE meta_message IN (%s) AND undone = 0 AND global_time < ? ORDER BY global_time DESC LIMIT ?" % (syncable_messages),
+                                                        (global_time, to_select + 1)))
+
+        fixed = False
+        if len(data) > to_select:
+            fixed = True
+
+            # if the last 2 packets are equal, then we need to drop those
+            global_time = data[-1][0]
+            del data[-1]
+            while data and data[-1][0] == global_time:
+                del data[-1]
+
+        if not higher:
+            data.reverse()
+
+        return data, fixed
+
+    def _select_bloomfilter_range(self, syncable_messages, global_time, to_select, higher=True):
+        data, fixed = self._select_and_fix(syncable_messages, global_time, to_select, higher)
+
+        lowerfixed = True
+        higherfixed = True
+
+        # if we selected less than to_select
+        if len(data) < to_select:
+            # calculate how many still remain
+            to_select = to_select - len(data)
+            if to_select > 25:
+                if higher:
+                    lowerdata, lowerfixed = self._select_and_fix(syncable_messages, global_time + 1, to_select, False)
+                    data = lowerdata + data
+                else:
+                    higherdata, higherfixed = self._select_and_fix(syncable_messages, global_time - 1, to_select, True)
+                    data = data + higherdata
+
+        bloomfilter_range = [data[0][0], data[-1][0], len(data)]
+        # we can use the global_time as a min or max value for the lower and upper bound
+        if higher:
+            # we selected items higher than global_time, make sure bloomfilter_range[0] is at least
+            # as low as global_time + 1; we select all items higher than global_time, thus all items
+            # with global_time + 1 are included
+            bloomfilter_range[0] = min(bloomfilter_range[0], global_time + 1)
+
+            # if not fixed and higher, then we have selected up to all known packets
+            if not fixed:
+                bloomfilter_range[1] = self.acceptable_global_time
+            if not lowerfixed:
+                bloomfilter_range[0] = 1
+        else:
+            # we selected items lower than global_time, make sure bloomfilter_range[1] is at least
+            # as high as global_time - 1; we select all items lower than global_time, thus all items
+            # with global_time - 1 are included
+            bloomfilter_range[1] = max(bloomfilter_range[1], global_time - 1)
+
+            if not fixed:
+                bloomfilter_range[0] = 1
+            if not higherfixed:
+                bloomfilter_range[1] = self.acceptable_global_time
+
+        return bloomfilter_range, data
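_select_and_fix deliberately over-selects one row (LIMIT to_select + 1) so that truncation can be detected; when it is, every trailing row sharing the last global time is dropped, because a bloom filter boundary must never split messages that carry an equal global time. A toy illustration of the dropping rule (hypothetical rows):

    rows = [(1, 'a'), (2, 'b'), (3, 'c'), (3, 'd')]   # (global_time, packet), to_select = 3
    # len(rows) > to_select, so fixed = True; the trailing global time 3 is removed
    # entirely, leaving [(1, 'a'), (2, 'b')] and a clean upper bound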
+
+    # def dispersy_claim_sync_bloom_filter(self, identifier):
+    #     """
+    #     Returns a (time_low, time_high, bloom_filter) tuple or None.
+    #     """
+    #     count, = self._dispersy.database.execute(u"SELECT COUNT(1) FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32", (self._database_id,)).next()
+    #     if count:
+    #         bloom = BloomFilter(self.dispersy_sync_bloom_filter_bits, self.dispersy_sync_bloom_filter_error_rate, prefix=chr(int(random() * 256)))
+    #         capacity = bloom.get_capacity(self.dispersy_sync_bloom_filter_error_rate)
+    #         ranges = int(ceil(1.0 * count / capacity))
+
+    #         desired_mean = ranges / 2.0
+    #         lambd = 1.0 / desired_mean
+    #         range_ = ranges - int(ceil(expovariate(lambd)))
+    #         # RANGE_ < 0 is possible when the exponential function returns a very large number (least likely)
+    #         # RANGE_ = 0 is the oldest time bloomfilter_range (less likely)
+    #         # RANGE_ = RANGES - 1 is the highest time bloomfilter_range (more likely)
+
+    #         if range_ < 0:
+    #             # pick uniform randomly
+    #             range_ = int(random() * ranges)
+
+    #         if range_ == ranges - 1:
+    #             # the chosen bloomfilter_range is too small to fill an entire bloom filter.
+    #             # adjust the offset accordingly
+    #             offset = max(0, count - capacity + 1)
+
+    #         else:
+    #             offset = range_ * capacity
+
+    #         # get the time bloomfilter_range associated to the offset
+    #         try:
+    #             time_low, time_high = self._dispersy.database.execute(u"SELECT MIN(sync.global_time), MAX(sync.global_time) FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32 ORDER BY sync.global_time LIMIT ? OFFSET ?",
+    #                                                                   (self._database_id, capacity, offset)).next()
+    #         except:
+    #             dprint("count: ", count, " capacity: ", capacity, " bloomfilter_range: ", range_, " ranges: ", ranges, " offset: ", offset, force=True)
+    #             assert False
+
+    #         if __debug__ and self.get_classification() == u"ChannelCommunity":
+    #             low, high = self._dispersy.database.execute(u"SELECT MIN(sync.global_time), MAX(sync.global_time) FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32",
+    #                                                         (self._database_id,)).next()
+    #             dprint("bloomfilter_range: ", range_, " ranges: ", ranges, " offset: ", offset, " time: [", time_low, ":", time_high, "] in-db: [", low, ":", high, "]", force=True)
+
+    #         assert isinstance(time_low, (int, long))
+    #         assert isinstance(time_high, (int, long))
+
+    #         assert 0 < ranges
+    #         assert 0 <= range_ < ranges
+    #         assert ranges == 1 and range_ == 0 or ranges > 1
+    #         assert 0 <= offset
+
+    #         # get all the data associated to the time bloomfilter_range
+    #         counter = 0
+    #         for packet, in self._dispersy.database.execute(u"SELECT sync.packet FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32 AND sync.global_time BETWEEN ? AND ?",
+    #                                                        (self._database_id, time_low, time_high)):
+    #             bloom.add(str(packet))
+    #             counter += 1
+
+    #         if range_ == 0:
+    #             time_low = 1
+
+    #         if range_ == ranges - 1:
+    #             time_high = 0
+
+    #         if __debug__ and self.get_classification() == u"ChannelCommunity":
+    #             dprint("off: ", offset, " cap: ", capacity, " count: ", counter, "/", count, " time: [", time_low, ":", time_high, "]", force=True)
+
+    #         if __debug__:
+    #             if len(data) > 1:
+    #                 low, high = self._dispersy.database.execute(u"SELECT MIN(sync.global_time), MAX(sync.global_time) FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND meta_message.priority > 32",
+    #                                                             (self._database_id,)).next()
+    #                 dprint(self.cid.encode("HEX"), " syncing <<", data[0][0], " <", data[1][0], "-", data[-2][0], "> ", data[-1][0], ">> sync:[", time_low, ":", time_high, "] db:[", low, ":", high, "] len:", len(data), " cap:", capacity)
+
+    #         return (time_low, time_high, bloom)
+
+    #     return (1, 0, BloomFilter(8, 0.1, prefix='\x00'))
+
+    @property
+    def dispersy_sync_response_limit(self):
+        """
+        The maximum number of bytes to send back per received dispersy-sync message.
+        @rtype: int
+        """
+        return 5 * 1025
+
+    @property
+    def dispersy_missing_sequence_response_limit(self):
+        """
+        The maximum number of bytes to send back per received dispersy-missing-sequence message.
+        @rtype: int
+        """
+        return 10 * 1025
+
+    @property
+    def dispersy_acceptable_global_time_range(self):
+        return 10000
+
+    @property
+    def cid(self):
+        """
+        The 20 byte sha1 digest of the public master key, in other words: the community identifier.
+        @rtype: string
+        """
+        return self._cid
+
+    @property
+    def database_id(self):
+        """
+        The number used to identify this community in the local Dispersy database.
+        @rtype: int or long
+        """
+        return self._database_id
+
+    @property
+    def database_version(self):
+        return self._database_version
+
+    @property
+    def master_member(self):
+        """
+        The community Member instance.
+        @rtype: Member
+        """
+        return self._master_member
+
+    @property
+    def my_member(self):
+        """
+        Our own Member instance that is used to sign the messages that we create.
+        @rtype: Member
+        """
+        return self._my_member
+
+    @property
+    def dispersy(self):
+        """
+        The Dispersy instance.
+        @rtype: Dispersy
+        """
+        return self._dispersy
+
+    @property
+    def timeline(self):
+        """
+        The Timeline instance.
+        @rtype: Timeline
+        """
+        return self._timeline
+
+    @property
+    def global_time(self):
+        """
+        The highest global time that we have stored in the database.
+        @rtype: int or long
+        """
+        return max(1, self._global_time)
+
+    @property
+    def acceptable_global_time(self):
+        """
+        The highest global time that we will accept for incoming messages that need to be stored in
+        the database.
+
+        The acceptable global time is determined as follows:
+
+        1. when self.dispersy_enable_bloom_filter_sync == False, returns 2 ** 63 - 1, or
+
+        2. when we have more than 5 candidates (i.e. we have more than 5 opinions about what the global_time should be)
+           we will use their median + self.dispersy_acceptable_global_time_range, or
+
+        3. otherwise we will not trust the candidates' opinions and use our own global time (obtained from the highest
+           global time in the database) + self.dispersy_acceptable_global_time_range.
+
+        @rtype: int or long
+        """
+        now = time()
+
+        def acceptable_global_time_helper():
+            options = sorted(global_time for global_time in (candidate.global_time for candidate in self.dispersy_yield_verified_candidates()) if global_time > 0)
+
+            if len(options) > 5:
+                # note: officially when the number of options is even, the median is the average between the
+                # two 'middle' options.  in our case we simply round down to the 'middle' option
+                median_global_time = options[len(options) / 2]
+
+            else:
+                median_global_time = 0
+
+            # 07/05/12 Boudewijn: for an unknown reason values larger than 2^63-1 cause overflow
+            # exceptions in the sqlite3 wrapper
+            return min(max(self._global_time, median_global_time) + self.dispersy_acceptable_global_time_range, 2 ** 63 - 1)
+
+        if self.dispersy_enable_bloom_filter_sync:
+            # get opinions from all active candidates
+            if self._acceptable_global_time_deadline < now:
+                self._acceptable_global_time_cache = acceptable_global_time_helper()
+                self._acceptable_global_time_deadline = now + 5.0
+            return self._acceptable_global_time_cache
+
+        else:
+            return 2 ** 63 - 1
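Note that the helper rounds down when the number of opinions is even. A small worked example with hypothetical candidate opinions, a hypothetical local global time of 100, and the default range of 10000:

    options = sorted([95, 99, 100, 102, 104, 110])    # six opinions, more than 5
    median_global_time = options[len(options) / 2]    # options[3] -> 102
    acceptable = min(max(100, median_global_time) + 10000, 2 ** 63 - 1)   # -> 10102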
+ """ + if global_time > self._global_time: + logger.debug("updating global time %d -> %d", self._global_time, global_time) + self._global_time = global_time + self._check_for_pruning() + + def _check_for_pruning(self): + """ + Check for messages that need to be pruned because the global time changed. Should be called + whenever self._global_time is increased. + """ + for meta in self._meta_messages.itervalues(): + if isinstance(meta.distribution, SyncDistribution) and isinstance(meta.distribution.pruning, GlobalTimePruning): + # TODO: some messages should support a notifier when a message is pruned + # if __debug__: dprint("checking pruning for ", meta.name, " @", self._global_time, force=1) + # packets = [str(packet) + # for packet, + # in self._dispersy.database.execute(u"SELECT packet FROM sync WHERE meta_message = ? AND global_time <= ?", + # (meta.database_id, self._global_time - meta.distribution.pruning.prune_threshold))] + # if packets: + + self._dispersy.database.execute(u"DELETE FROM sync WHERE meta_message = ? AND global_time <= ?", + (meta.database_id, self._global_time - meta.distribution.pruning.prune_threshold)) + logger.debug("%d %s messages have been pruned", self._dispersy.database.changes, meta.name) + + def dispersy_check_database(self): + """ + Called each time after the community is loaded and attached to Dispersy. + """ + self._database_version = self._dispersy.database.check_community_database(self, self._database_version) + + def get_member(self, public_key): + """ + Returns a Member instance associated with public_key. + + since we have the public_key, we can create this user when it didn't already exist. Hence, + this method always succeeds. + + @param public_key: The public key of the member we want to obtain. + @type public_key: string + + @return: The Member instance associated with public_key. + @rtype: Member + + @note: This returns -any- Member, it may not be a member that is part of this community. + + @todo: Since this method returns Members that are not specifically bound to any community, + this method should be moved to Dispersy + """ + logger.warning("deprecated. please use Dispersy.get_member") + return self._dispersy.get_member(public_key) + + def get_members_from_id(self, mid): + """ + Returns zero or more Member instances associated with mid, where mid is the sha1 digest of a + member public key. + + As we are using only 20 bytes to represent the actual member public key, this method may + return multiple possible Member instances. In this case, other ways must be used to figure + out the correct Member instance. For instance: if a signature or encryption is available, + all Member instances could be used, but only one can succeed in verifying or decrypting. + + Since we may not have the public key associated to MID, this method may return an empty + list. In such a case it is sometimes possible to DelayPacketByMissingMember to obtain the + public key. + + @param mid: The 20 byte sha1 digest indicating a member. + @type mid: string + + @return: A list containing zero or more Member instances. + @rtype: [Member] + + @note: This returns -any- Member, it may not be a member that is part of this community. + + @todo: Since this method returns Members that are not specifically bound to any community, + this method should be moved to Dispersy + """ + logger.warning("deprecated. 
please use Dispersy.get_members_from_id") + return self._dispersy.get_members_from_id(mid) + + def get_default_conversion(self): + """ + Returns the default conversion (defined as the last conversion). + + Raises KeyError() when no conversions are available. + """ + if self._conversions: + return self._conversions[-1] + + # for backwards compatibility we will raise a KeyError when conversion isn't found (previously self._conversions + # was a dictionary) + logger.warning("Unable to find default conversion (there are no conversions available)") + raise KeyError() + + def get_conversion_for_packet(self, packet): + """ + Returns the conversion associated with PACKET. + + This method returns the first available conversion that can *decode* PACKET, this is tested in reversed order + using conversion.can_decode_message(PACKET). Typically a conversion can decode a string when it matches: the + community version, the Dispersy version, and the community identifier, and the conversion knows how to decode + messages types described in PACKET. + + Note that only the bytes needed to determine conversion.can_decode_message(PACKET) must be given, therefore + PACKET is not necessarily an entire packet but can also be a the first N bytes of a packet. + + Raises KeyError(packet) when no conversion is available. + """ + assert isinstance(packet, str), type(packet) + for conversion in reversed(self._conversions): + if conversion.can_decode_message(packet): + return conversion + + # for backwards compatibility we will raise a KeyError when no conversion for PACKET is found (previously + # self._conversions was a dictionary) + logger.warning("Unable to find conversion to decode %s in %s", packet.encode("HEX"), self._conversions) + raise KeyError(packet) + + def get_conversion_for_message(self, message): + """ + Returns the conversion associated with MESSAGE. + + This method returns the first available conversion that can *encode* MESSAGE, this is tested in reversed order + using conversion.can_encode_message(MESSAGE). Typically a conversion can encode a message when: the conversion + knows how to encode messages with MESSAGE.name. + + Raises KeyError(message) when no conversion is available. + """ + if __debug__: + from .message import Message + assert isinstance(message, (Message, Message.Implementation)), type(message) + + for conversion in reversed(self._conversions): + if conversion.can_encode_message(message): + return conversion + + # for backwards compatibility we will raise a KeyError when no conversion for MESSAGE is found (previously + # self._conversions was a dictionary) + logger.warning("Unable to find conversion to encode %s in %s", message, self._conversions) + raise KeyError(message) + + def add_conversion(self, conversion): + """ + Add a Conversion to the Community. + + A conversion instance converts between the internal Message structure and the on-the-wire + message. + + @param conversion: The new conversion instance. 
+ @type conversion: Conversion + """ + if __debug__: + from .conversion import Conversion + assert isinstance(conversion, Conversion) + self._conversions.append(conversion) + + @documentation(Dispersy.take_step) + def dispersy_take_step(self, allow_sync): + return self._dispersy.take_step(self, allow_sync) + + @documentation(Dispersy.get_message) + def get_dispersy_message(self, member, global_time): + return self._dispersy.get_message(self, member, global_time) + + @documentation(Dispersy.create_authorize) + def create_dispersy_authorize(self, permission_triplets, sign_with_master=False, store=True, update=True, forward=True): + return self._dispersy.create_authorize(self, permission_triplets, sign_with_master, store, update, forward) + + @documentation(Dispersy.create_revoke) + def create_dispersy_revoke(self, permission_triplets, sign_with_master=False, store=True, update=True, forward=True): + return self._dispersy.create_revoke(self, permission_triplets, sign_with_master, store, update, forward) + + @documentation(Dispersy.create_undo) + def create_dispersy_undo(self, message, sign_with_master=False, store=True, update=True, forward=True): + return self._dispersy.create_undo(self, message, sign_with_master, store, update, forward) + + @documentation(Dispersy.create_identity) + def create_dispersy_identity(self, sign_with_master=False, store=True, update=True): + return self._dispersy.create_identity(self, sign_with_master, store, update) + + @documentation(Dispersy.create_signature_request) + def create_dispersy_signature_request(self, candidate, message, response_func, response_args=(), timeout=10.0, forward=True): + return self._dispersy.create_signature_request(self, candidate, message, response_func, response_args, timeout, forward) + + @documentation(Dispersy.create_destroy_community) + def create_dispersy_destroy_community(self, degree, sign_with_master=False, store=True, update=True, forward=True): + return self._dispersy.create_destroy_community(self, degree, sign_with_master, store, update, forward) + + @documentation(Dispersy.create_dynamic_settings) + def create_dispersy_dynamic_settings(self, policies, sign_with_master=False, store=True, update=True, forward=True): + return self._dispersy.create_dynamic_settings(self, policies, sign_with_master, store, update, forward) + + @documentation(Dispersy.create_introduction_request) + def create_introduction_request(self, candidate, allow_sync): + return self._dispersy.create_introduction_request(self, candidate, allow_sync) + + def dispersy_on_dynamic_settings(self, messages, initializing=False): + return self._dispersy.on_dynamic_settings(self, messages, initializing) + + def _iter_category(self, category): + while True: + index = 0 + has_result = False + keys = self._candidates.keys() + + while index < len(keys): + now = time() + key = keys[index] + candidate = self._candidates.get(key) + + if (candidate and + candidate.get_category(now) == category): + + yield candidate + has_result = True + + keys = self._candidates.keys() + try: + if keys[index] != key: + # a key has been removed from self._candidates + index = keys.index(key) + except (IndexError, ValueError): + index -= 1 + + index += 1 + + if not has_result: + yield None + + def _iter_categories(self, categories, once=False): + while True: + index = 0 + has_result = False + keys = self._candidates.keys() + + while index < len(keys): + now = time() + key = keys[index] + candidate = self._candidates.get(key) + + if (candidate and + candidate.get_category(now) in 
categories): + + yield candidate + has_result = True + + keys = self._candidates.keys() + try: + if keys[index] != key: + # a key has been removed from self._candidates + index = keys.index(key) + except (IndexError, ValueError): + index -= 1 + + index += 1 + + if once: + break + elif not has_result: + yield None + + def _iter_bootstrap(self, once=False): + while True: + no_result = True + + bootstrap_candidates = list(self._dispersy.bootstrap_candidates) + for candidate in bootstrap_candidates: + if candidate.is_eligible_for_walk(time()): + no_result = False + yield candidate + + if no_result: + yield None + + if once: + break + + def dispersy_yield_candidates(self): + """ + Yields all candidates that are part of this community. + + The returned 'walk', 'stumble', and 'intro' candidates are randomised on every call and + returned only once each. + """ + assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates" + + now = time() + candidates = [candidate for candidate in self._candidates.itervalues() if candidate.get_category(now) in (u"walk", u"stumble", u"intro")] + shuffle(candidates) + return iter(candidates) + + def dispersy_yield_verified_candidates(self): + """ + Yields unique active candidates. + + The returned 'walk' and 'stumble' candidates are randomised on every call and returned only + once each. + """ + assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates" + + now = time() + candidates = [candidate for candidate in self._candidates.itervalues() if candidate.get_category(now) in (u"walk", u"stumble")] + shuffle(candidates) + return iter(candidates) + + def dispersy_get_introduce_candidate(self, exclude_candidate=None): + """ + Return one candidate or None in round robin fashion from the walked or stumbled categories. + This method is used by the walker to choose the candidates to introduce when an introduction + request is received. 
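+
+        A rough sketch of the selection logic (illustrative only; simplified from the
+        implementation below, which round-robins over internal iterators):
+
+            r = random()
+            result = get_walked() if r <= .5 else get_stumbled()
+            if not result:
+                result = get_stumbled() if r <= .5 else get_walked()
+            # exclude_candidate itself, non-tunnelled results for a tunnelled
+            # requester, and two nodes behind different symmetric NATs are skipped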
+ """ + assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates" + + first_candidates = [None, None] + while True: + def get_walked(): + result = self._walked_candidates.next() + if result == first_candidates[0]: + result = None + + if not first_candidates[0]: + first_candidates[0] = result + + return result + + def get_stumbled(): + result = self._stumbled_candidates.next() + if result == first_candidates[1]: + result = None + + if not first_candidates[1]: + first_candidates[1] = result + + return result + + r = random() + result = get_walked() if r <= .5 else get_stumbled() + if not result: + result = get_stumbled() if r <= .5 else get_walked() + + if result and exclude_candidate: + # same candidate as requesting the introduction + if result == exclude_candidate: + continue + + # cannot introduce a non-tunnelled candidate to a tunneled candidate (it's swift instance will not + # get it) + if not exclude_candidate.tunnel and result.tunnel: + continue + + # cannot introduce two nodes that are behind a different symmetric NAT + if (exclude_candidate.connection_type == u"symmetric-NAT" and + result.connection_type == u"symmetric-NAT" and + not exclude_candidate.wan_address[0] == result.wan_address[0]): + continue + + return result + + def dispersy_get_walk_candidate(self): + """ + Returns a candidate from either the walk, stumble or intro category which is eligible for walking. + Selects a category based on predifined probabilities. + """ + # 13/02/12 Boudewijn: normal peers can not be visited multiple times within 30 seconds, + # bootstrap peers can not be visited multiple times within 55 seconds. this is handled by + # the Candidate.is_eligible_for_walk(...) 
method + + assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates" + + from sys import maxint + + now = time() + categories = [(maxint, None), (maxint, None), (maxint, None)] + category_sizes = [0, 0, 0] + + for candidate in self._candidates.itervalues(): + if candidate.is_eligible_for_walk(now): + category = candidate.get_category(now) + if category == u"walk": + categories[0] = min(categories[0], (candidate.last_walk, candidate)) + category_sizes[0] += 1 + elif category == u"stumble": + categories[1] = min(categories[1], (candidate.last_stumble, candidate)) + category_sizes[1] += 1 + elif category == u"intro": + categories[2] = min(categories[2], (candidate.last_intro, candidate)) + category_sizes[2] += 1 + + walk, stumble, intro = [candidate for _, candidate in categories] + while walk or stumble or intro: + r = random() + + # 13/02/12 Boudewijn: we decrease the 1% chance to contact a bootstrap peer to .5% + if r <= .4975: # ~50% + if walk: + logger.debug("returning [%2d:%2d:%2d walk ] %s", category_sizes[0] , category_sizes[1], category_sizes[2], walk) + return walk + + elif r <= .995: # ~50% + if stumble or intro: + while True: + if random() <= .5: + if stumble: + logger.debug("returning [%2d:%2d:%2d stumble] %s", category_sizes[0] , category_sizes[1], category_sizes[2], stumble) + return stumble + + else: + if intro: + logger.debug("returning [%2d:%2d:%2d intro ] %s", category_sizes[0] , category_sizes[1], category_sizes[2], intro) + return intro + + else: # ~.5% + candidate = self._bootstrap_candidates.next() + if candidate: + logger.debug("returning [%2d:%2d:%2d bootstr] %s", category_sizes[0] , category_sizes[1], category_sizes[2], candidate) + return candidate + + bootstrap_candidates = list(self._iter_bootstrap(once=True)) + shuffle(bootstrap_candidates) + for candidate in bootstrap_candidates: + if candidate: + logger.debug("returning [%2d:%2d:%2d bootstr] %s", category_sizes[0] , category_sizes[1], category_sizes[2], candidate) + return candidate + + logger.debug("no candidates or bootstrap candidates available") + return None + + def create_candidate(self, sock_addr, tunnel, lan_address, wan_address, connection_type): + """ + Creates and returns a new WalkCandidate instance. + """ + assert not sock_addr in self._candidates + assert isinstance(tunnel, bool) + candidate = WalkCandidate(sock_addr, tunnel, lan_address, wan_address, connection_type) + self.add_candidate(candidate) + return candidate + + def get_candidate(self, sock_addr, replace=True, lan_address=("0.0.0.0", 0)): + """ + Returns an existing candidate object or None + + 1. returns an existing candidate from self._candidates, or + + 2. returns a bootstrap candidate from self._bootstrap_candidates, or + + 3. returns an existing candidate with the same host on a different port if this candidate is + marked as a symmetric NAT. When replace is True, the existing candidate is moved from + its previous sock_addr to the new sock_addr. + + 4. 
Or returns None
+        """
+        # use existing (bootstrap) candidate
+        candidate = self._candidates.get(sock_addr) or self._dispersy._bootstrap_candidates.get(sock_addr)
+        logger.debug("existing candidate for %s:%d is %s", sock_addr[0], sock_addr[1], candidate)
+
+        if candidate is None:
+            # find matching candidate with the same host but a different port (symmetric NAT)
+            for candidate in self._candidates.itervalues():
+                if (candidate.connection_type == "symmetric-NAT" and
+                    candidate.sock_addr[0] == sock_addr[0] and
+                    candidate.lan_address in (("0.0.0.0", 0), lan_address)):
+                    logger.debug("using existing candidate %s at different port %s %s", candidate, sock_addr[1], "(replace)" if replace else "(no replace)")
+
+                    if replace:
+                        # remove vote under previous key
+                        self._dispersy.wan_address_unvote(candidate)
+
+                        # replace candidate
+                        del self._candidates[candidate.sock_addr]
+                        lan_address, wan_address = self._dispersy.estimate_lan_and_wan_addresses(sock_addr, candidate.lan_address, candidate.wan_address)
+                        candidate.sock_addr = sock_addr
+                        candidate.update(candidate.tunnel, lan_address, wan_address, candidate.connection_type)
+                        self._candidates[candidate.sock_addr] = candidate
+
+                    break
+
+            else:
+                # no symmetric NAT candidate found
+                candidate = None
+
+        return candidate
+
+    def get_walkcandidate(self, message):
+        if isinstance(message.candidate, WalkCandidate):
+            return message.candidate
+
+        else:
+            # modify either the sender's LAN or WAN address based on how we perceive that node
+            source_lan_address, source_wan_address = self._dispersy.estimate_lan_and_wan_addresses(message.candidate.sock_addr, message.payload.source_lan_address, message.payload.source_wan_address)
+            if source_lan_address == ("0.0.0.0", 0) or source_wan_address == ("0.0.0.0", 0):
+                logger.debug("problems determining source LAN or WAN address, can neither introduce nor convert candidate to WalkCandidate")
+                return None
+
+            # check if we have this candidate registered at its sock_addr
+            candidate = self.get_candidate(message.candidate.sock_addr, lan_address=source_lan_address)
+            if candidate:
+                return candidate
+
+            candidate = self.create_candidate(message.candidate.sock_addr, message.candidate.tunnel, source_lan_address, source_wan_address, message.payload.connection_type)
+            return candidate
+
+    def add_candidate(self, candidate):
+        if not isinstance(candidate, BootstrapCandidate):
+            assert candidate.sock_addr not in self._dispersy._bootstrap_candidates.iterkeys(), "none of the bootstrap candidates may be in self._candidates"
+
+            if candidate.sock_addr not in self._candidates:
+                self._candidates[candidate.sock_addr] = candidate
+                self._dispersy.statistics.total_candidates_discovered += 1
+
+    def get_candidate_mid(self, mid):
+        members = self._dispersy.get_members_from_id(mid)
+        if members:
+            member = members[0]
+
+            for candidate in self._candidates.itervalues():
+                if candidate.is_associated(member):
+                    return candidate
+
+    def filter_duplicate_candidate(self, candidate):
+        """
+        A node told us its LAN and WAN address; it is possible that we can now determine that we
+        already have CANDIDATE in our candidate list.
+
+        When we learn that a candidate happens to be behind a symmetric NAT we must remove all other
+        candidates that have the same host.
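+
+        An illustrative sketch (hypothetical addresses; the WalkCandidate arguments
+        follow the create_candidate signature above, with lan and wan placeholders):
+
+            a = WalkCandidate(("1.2.3.4", 1001), False, lan, wan, u"public")
+            b = WalkCandidate(("1.2.3.4", 1002), False, lan, wan, u"public")
+            # after filter_duplicate_candidate(a), b is merged into a and removed
+            # because both share the WAN host "1.2.3.4" and the same LAN address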
+ """ + wan_address = candidate.wan_address + lan_address = candidate.lan_address + + # find existing candidates that are likely to be the same candidate + others = [other + for other + in self._candidates.itervalues() + if (other.wan_address[0] == wan_address[0] and + other.lan_address == lan_address)] + + # merge and remove existing candidates in favor of the new CANDIDATE + for other in others: + # all except for the CANDIDATE + if not other == candidate: + logger.warn("removing %s in favor of %s", other, candidate) + candidate.merge(other) + del self._candidates[other.sock_addr] + self.add_candidate(candidate) + self._dispersy.wan_address_unvote(other) + + def _periodically_cleanup_candidates(self): + """ + Periodically remove obsolete Candidate instances. + """ + while True: + yield 5 * 60.0 + + now = time() + for key, candidate in [(key, candidate) for key, candidate in self._candidates.iteritems() if candidate.is_obsolete(now)]: + logger.debug("removing obsolete candidate %s", candidate) + del self._candidates[key] + self._dispersy.wan_address_unvote(candidate) + + def dispersy_cleanup_community(self, message): + """ + A dispersy-destroy-community message is received. + + Once a community is destroyed, it must be reclassified to ensure that it is not loaded in + its regular form. This method returns the class that the community will be reclassified + into. It should return either a subclass of SoftKilledCommity or HardKilledCommunity + depending on the received dispersy-destroy-community message. + + Depending on the degree of the destroy message, we will need to cleanup in different ways. + + - soft-kill: The community is frozen. Dispersy will retain the data it has obtained. + However, no messages beyond the global-time of the dispersy-destroy-community message + will be accepted. Responses to dispersy-sync messages will be send like normal. + + - hard-kill: The community is destroyed. Dispersy will throw away everything except the + dispersy-destroy-community message and the authorize chain that is required to verify + this message. The community should also remove all its data and cleanup as much as + possible. + + Similar to other on_... methods, this method may raise a DropMessage exception. In this + case the message will be ignored and no data is removed. However, each dispersy-sync that + is sent is likely to result in the same dispersy-destroy-community message to be received. + + @param address: The address from where we received this message. + @type address: (string, int) + + @param message: The received message. + @type message: Message.Implementation + + @rtype: Community class + """ + # override to implement community cleanup + if message.payload.is_soft_kill: + raise NotImplementedError() + + elif message.payload.is_hard_kill: + return HardKilledCommunity + + def dispersy_malicious_member_detected(self, member, packets): + """ + Proof has been found that MEMBER is malicious + + @param member: The malicious member. + @type member: Member + + @param packets: One or more packets proving that the member is malicious. All packets must + be associated to the same community. + @type packets: [Packet] + """ + pass + + def get_meta_message(self, name): + """ + Returns the meta message by its name. + + @param name: The name of the message. + @type name: unicode + + @return: The meta message. + @rtype: Message + + @raise KeyError: When there is no meta message by that name. 
+ """ + assert isinstance(name, unicode) + return self._meta_messages[name] + + def get_meta_messages(self): + """ + Returns all meta messages. + + @return: The meta messages. + @rtype: [Message] + """ + return self._meta_messages.values() + + def initiate_meta_messages(self): + """ + Create the meta messages for one community instance. + + This method is called once for each community when it is created. The resulting meta + messages can be obtained by either get_meta_message(name) or get_meta_messages(). + + To distinct the meta messages that the community provides from those that Dispersy provides, + none of the messages may have a name that starts with 'dispersy-'. + + @return: The new meta messages. + @rtype: [Message] + """ + raise NotImplementedError(self) + + def initiate_conversions(self): + """ + Create the Conversion instances for this community instance. + + This method is called once for each community when it is created. The resulting Conversion instances can be + obtained using get_default_conversion(), get_conversion_for_packet(), and get_conversion_for_message(). + + Returns a list with all Conversion instances that this community will support. Note that the ordering of + Conversion classes determines what the get_..._conversion_...() methods return. + + @rtype: [Conversion] + """ + raise NotImplementedError(self) + + +class HardKilledCommunity(Community): + + def __init__(self, *args, **kargs): + super(HardKilledCommunity, self).__init__(*args, **kargs) + + destroy_message_id = self._meta_messages[u"dispersy-destroy-community"].database_id + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE meta_message = ? LIMIT 1", (destroy_message_id,)).next() + except StopIteration: + logger.error("unable to locate the dispersy-destroy-community message") + self._destroy_community_packet = "" + else: + self._destroy_community_packet = str(packet) + + def _initialize_meta_messages(self): + super(HardKilledCommunity, self)._initialize_meta_messages() + + # replace introduction_request behaviour + self._meta_messages[u"dispersy-introduction-request"]._handle_callback = self.dispersy_on_introduction_request + + @property + def dispersy_enable_candidate_walker(self): + # disable candidate walker + return False + + @property + def dispersy_enable_candidate_walker_responses(self): + # enable walker responses + return True + + def initiate_meta_messages(self): + # there are no community messages + return [] + + def initiate_conversions(self): + # TODO we will not be able to use this conversion because the community version will not + # match + return [DefaultConversion(self)] + + def get_conversion_for_packet(self, packet): + try: + return super(HardKilledCommunity, self).get_conversion_for_packet(packet) + + except KeyError: + # the dispersy version MUST BE available. 
Currently we only support \x00: BinaryConversion + if packet[0] == "\x00": + self.add_conversion(BinaryConversion(self, packet[1])) + + # try again + return super(HardKilledCommunity, self).get_conversion_for_packet(packet) + + def dispersy_on_introduction_request(self, messages): + if self._destroy_community_packet: + self._dispersy.statistics.dict_inc(self._dispersy.statistics.outgoing, u"-destroy-community") + self._dispersy.endpoint.send([message.candidate for message in messages], [self._destroy_community_packet]) diff -Nru tribler-6.2.0/Tribler/dispersy/conversion.py tribler-6.2.0/Tribler/dispersy/conversion.py --- tribler-6.2.0/Tribler/dispersy/conversion.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/conversion.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,1456 @@ +import logging +logger = logging.getLogger(__name__) + +from hashlib import sha1 +from math import ceil +from socket import inet_ntoa, inet_aton +from struct import pack, unpack_from, Struct +from random import choice + +from .authentication import NoAuthentication, MemberAuthentication, DoubleMemberAuthentication +from .bloomfilter import BloomFilter +from .crypto import ec_check_public_bin +from .destination import CommunityDestination, CandidateDestination +from .distribution import FullSyncDistribution, LastSyncDistribution, DirectDistribution +from .message import DelayPacketByMissingMember, DropPacket, Message +from .resolution import PublicResolution, LinearResolution, DynamicResolution + +if __debug__: + from .authentication import Authentication + from .candidate import Candidate + from .destination import Destination + from .distribution import Distribution + from .resolution import Resolution + + +class Conversion(object): + + """ + A Conversion object is used to convert incoming packets to a different, possibly more recent, + community version. If also allows outgoing messages to be converted to a different, possibly + older, community version. + """ + def __init__(self, community, dispersy_version, community_version): + """ + COMMUNITY instance that this conversion belongs to. + DISPERSY_VERSION is the dispersy conversion identifier (on the wire version; must be one byte). + COMMUNIY_VERSION is the community conversion identifier (on the wire version; must be one byte). + + COMMUNIY_VERSION may not be '\x00' or '\xff'. '\x00' is used by the DefaultConversion until + a proper conversion instance can be made for the Community. '\xff' is reserved for when + more than one byte is needed as a version indicator. + """ + if __debug__: + from .community import Community + assert isinstance(community, Community), type(community) + assert isinstance(dispersy_version, str), type(dispersy_version) + assert len(dispersy_version) == 1, dispersy_version + assert isinstance(community_version, str), type(community_version) + assert len(community_version) == 1, community_version + + # the community that this conversion belongs to. + self._community = community + + # the messages that this instance can handle, and that this instance produces, is identified + # by _prefix. 
+ self._prefix = dispersy_version + community_version + community.cid + assert len(self._prefix) == 22 # when this assumption changes, we need to ensure the + # dispersy_version and community_version properties are + # returned correctly + + @property + def community(self): + return self._community + + @property + def dispersy_version(self): + return self._prefix[0] + + @property + def community_version(self): + return self._prefix[1] + + @property + def version(self): + return (self._prefix[0], self._prefix[1]) + + @property + def prefix(self): + return self._prefix + + def can_decode_message(self, data): + """ + Returns True when DATA can be decoded using this conversion. + """ + assert isinstance(data, str), type(data) + raise NotImplementedError("The subclass must implement decode_message") + + def decode_meta_message(self, data): + """ + Obtain the dispersy meta message from DATA. + @return: Message + """ + assert isinstance(data, str) + assert len(data) >= 22 + assert data[:22] == self._prefix + raise NotImplementedError("The subclass must implement decode_message") + + def decode_message(self, address, data, verify=True): + """ + DATA is a string, where the first byte is the on-the-wire Dispersy version, the second byte + is the on-the-wire Community version and the following 20 bytes is the Community Identifier. + The rest is the message payload. + + Returns a Message instance. + """ + assert isinstance(data, str) + assert len(data) >= 22 + assert data[:22] == self._prefix + raise NotImplementedError("The subclass must implement decode_message") + + def can_encode_message(self, message): + """ + Returns True when MESSAGE can be encoded using this conversion. + """ + assert isinstance(message, (Message, Message.Implementation)), type(message) + raise NotImplementedError("The subclass must implement can_encode_message") + + def encode_message(self, message, sign=True): + """ + Encode a Message instance into a binary string where the first byte is the on-the-wire + Dispersy version, the second byte is the on-the-wire Community version and the following 20 + bytes is the Community Identifier. The rest is the message payload. + + Returns a binary string. + """ + assert isinstance(message, Message) + raise NotImplementedError("The subclass must implement encode_message") + + def __str__(self): + return "<%s %s%s>" % (self.__class__.__name__, self.dispersy_version.encode("HEX"), self.community_version.encode("HEX")) + + def __repr__(self): + return str(self) + + +class BinaryConversion(Conversion): + + """ + On-The-Wire binary version + + This conversion is intended to be as space efficient as possible. + All data is encoded in a binary form. 
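+
+    A rough layout sketch (byte offsets inferred from the 22 byte prefix handling and
+    the message definitions below):
+
+        byte  0      on-the-wire Dispersy version ("\x00" for BinaryConversion)
+        byte  1      on-the-wire community version
+        bytes 2-21   community identifier (20 byte sha1 digest)
+        byte  22     message identifier (e.g. 254 for dispersy-missing-sequence)
+        bytes 23-..  authentication, distribution, payload, and the trailing signature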
+ """ + class Placeholder(object): + __slots__ = ["candidate", "meta", "offset", "data", "authentication", "resolution", "first_signature_offset", "destination", "distribution", "payload", "verify", "allow_empty_signature"] + + def __init__(self, candidate, meta, offset, data, verify, allow_empty_signature): + self.candidate = candidate + self.meta = meta + self.offset = offset + self.data = data + self.verify = verify + self.allow_empty_signature = allow_empty_signature + self.authentication = None + self.resolution = None + self.first_signature_offset = 0 + self.destination = None + self.distribution = None + self.payload = None + + class EncodeFunctions(object): + __slots__ = ["byte", "authentication", "signature", "resolution", "distribution", "payload"] + + def __init__(self, byte, xxx_todo_changeme, resolution, distribution, payload): + (authentication, signature) = xxx_todo_changeme + self.byte = byte + self.authentication = authentication + self.signature = signature + self.resolution = resolution + self.distribution = distribution + self.payload = payload + + class DecodeFunctions(object): + __slots__ = ["meta", "authentication", "resolution", "distribution", "destination", "payload"] + + def __init__(self, meta, authentication, resolution, distribution, destination, payload): + self.meta = meta + self.authentication = authentication + self.resolution = resolution + self.distribution = distribution + self.destination = destination + self.payload = payload + + def __init__(self, community, community_version): + Conversion.__init__(self, community, "\x00", community_version) + + self._struct_B = Struct(">B") + self._struct_BBH = Struct(">BBH") + self._struct_BH = Struct(">BH") + self._struct_H = Struct(">H") + self._struct_HH = Struct(">HH") + self._struct_LL = Struct(">LL") + self._struct_Q = Struct(">Q") + self._struct_QH = Struct(">QH") + self._struct_QL = Struct(">QL") + self._struct_QQHHBH = Struct(">QQHHBH") + self._struct_ccB = Struct(">ccB") + + self._encode_message_map = dict() # message.name : EncodeFunctions + self._decode_message_map = dict() # byte : DecodeFunctions + + # the dispersy-introduction-request and dispersy-introduction-response have several bitfield + # flags that must be set correctly + # reserve 1st bit for enable/disable advice + self._encode_advice_map = {True: int("1", 2), False: int("0", 2)} + self._decode_advice_map = dict((value, key) for key, value in self._encode_advice_map.iteritems()) + # reserve 2nd bit for enable/disable sync + self._encode_sync_map = {True: int("10", 2), False: int("00", 2)} + self._decode_sync_map = dict((value, key) for key, value in self._encode_sync_map.iteritems()) + # reserve 3rd bit for enable/disable tunnel (02/05/12) + self._encode_tunnel_map = {True: int("100", 2), False: int("000", 2)} + self._decode_tunnel_map = dict((value, key) for key, value in self._encode_tunnel_map.iteritems()) + # 4th, 5th and 6th bits are currently unused + # reserve 7th and 8th bits for connection type + self._encode_connection_type_map = {u"unknown": int("00000000", 2), u"public": int("10000000", 2), u"symmetric-NAT": int("11000000", 2)} + self._decode_connection_type_map = dict((value, key) for key, value in self._encode_connection_type_map.iteritems()) + + def define(value, name, encode, decode): + try: + meta = community.get_meta_message(name) + except KeyError: + if __debug__: + debug_non_available.append(name) + else: + self.define_meta_message(chr(value), meta, encode, decode) + + if __debug__: + debug_non_available = [] + + # 255 
is reserved + define(254, u"dispersy-missing-sequence", self._encode_missing_sequence, self._decode_missing_sequence) + define(253, u"dispersy-missing-proof", self._encode_missing_proof, self._decode_missing_proof) + define(252, u"dispersy-signature-request", self._encode_signature_request, self._decode_signature_request) + define(251, u"dispersy-signature-response", self._encode_signature_response, self._decode_signature_response) + define(250, u"dispersy-puncture-request", self._encode_puncture_request, self._decode_puncture_request) + define(249, u"dispersy-puncture", self._encode_puncture, self._decode_puncture) + define(248, u"dispersy-identity", self._encode_identity, self._decode_identity) + define(247, u"dispersy-missing-identity", self._encode_missing_identity, self._decode_missing_identity) + define(246, u"dispersy-introduction-request", self._encode_introduction_request, self._decode_introduction_request) + define(245, u"dispersy-introduction-response", self._encode_introduction_response, self._decode_introduction_response) + define(244, u"dispersy-destroy-community", self._encode_destroy_community, self._decode_destroy_community) + define(243, u"dispersy-authorize", self._encode_authorize, self._decode_authorize) + define(242, u"dispersy-revoke", self._encode_revoke, self._decode_revoke) + # 241 for obsolete dispersy-subjective-set + # 240 for obsolete dispersy-missing-subjective-set + define(239, u"dispersy-missing-message", self._encode_missing_message, self._decode_missing_message) + define(238, u"dispersy-undo-own", self._encode_undo_own, self._decode_undo_own) + define(237, u"dispersy-undo-other", self._encode_undo_other, self._decode_undo_other) + define(236, u"dispersy-dynamic-settings", self._encode_dynamic_settings, self._decode_dynamic_settings) + define(235, u"dispersy-missing-last-message", self._encode_missing_last_message, self._decode_missing_last_message) + + if __debug__: + if debug_non_available: + logger.debug("unable to define non-available messages %s", debug_non_available) + + def define_meta_message(self, byte, meta, encode_payload_func, decode_payload_func): + assert isinstance(byte, str) + assert len(byte) == 1 + assert isinstance(meta, Message) + assert 0 < ord(byte) < 255 + assert not meta.name in self._encode_message_map + assert not byte in self._decode_message_map, "This byte has already been defined (%d)" % ord(byte) + assert callable(encode_payload_func) + assert callable(decode_payload_func) + + mapping = {MemberAuthentication: (self._encode_member_authentication, self._encode_member_authentication_signature), + DoubleMemberAuthentication: (self._encode_double_member_authentication, self._encode_double_member_authentication_signature), + NoAuthentication: (self._encode_no_authentication, self._encode_no_authentication_signature), + + PublicResolution: self._encode_public_resolution, + LinearResolution: self._encode_linear_resolution, + DynamicResolution: self._encode_dynamic_resolution, + + FullSyncDistribution: self._encode_full_sync_distribution, + LastSyncDistribution: self._encode_last_sync_distribution, + DirectDistribution: self._encode_direct_distribution} + + self._encode_message_map[meta.name] = self.EncodeFunctions(byte, mapping[type(meta.authentication)], mapping[type(meta.resolution)], mapping[type(meta.distribution)], encode_payload_func) + + mapping = {MemberAuthentication: self._decode_member_authentication, + DoubleMemberAuthentication: self._decode_double_member_authentication, + NoAuthentication: self._decode_no_authentication, 
+ + DynamicResolution: self._decode_dynamic_resolution, + LinearResolution: self._decode_linear_resolution, + PublicResolution: self._decode_public_resolution, + + DirectDistribution: self._decode_direct_distribution, + FullSyncDistribution: self._decode_full_sync_distribution, + LastSyncDistribution: self._decode_last_sync_distribution, + + CandidateDestination: self._decode_empty_destination, + CommunityDestination: self._decode_empty_destination} + + self._decode_message_map[byte] = self.DecodeFunctions(meta, mapping[type(meta.authentication)], mapping[type(meta.resolution)], mapping[type(meta.distribution)], mapping[type(meta.destination)], decode_payload_func) + + # + # Dispersy payload + # + + def _encode_missing_sequence(self, message): + payload = message.payload + assert payload.message.name in self._encode_message_map, payload.message.name + message_id = self._encode_message_map[payload.message.name].byte + return (payload.member.mid, message_id, self._struct_LL.pack(payload.missing_low, payload.missing_high)) + + def _decode_missing_sequence(self, placeholder, offset, data): + if len(data) < offset + 29: + raise DropPacket("Insufficient packet size") + + member_id = data[offset:offset + 20] + offset += 20 + members = [member for member in self._community.dispersy.get_members_from_id(member_id) if member.has_identity(self._community)] + if not members: + raise DelayPacketByMissingMember(self._community, member_id) + elif len(members) > 1: + # this is unrecoverable. a member id without a signature is simply not globally unique. + # This can occur when two or more nodes have the same sha1 hash. Very unlikely. + raise DropPacket("Unrecoverable: ambiguous member") + member = members[0] + + decode_functions = self._decode_message_map.get(data[offset]) + if decode_functions is None: + raise DropPacket("Invalid message") + offset += 1 + + missing_low, missing_high = self._struct_LL.unpack_from(data, offset) + if not (0 < missing_low <= missing_high): + raise DropPacket("Invalid missing_low and missing_high combination") + offset += 8 + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, decode_functions.meta, missing_low, missing_high) + + def _encode_missing_message(self, message): + """ + Encode the payload for dispersy-missing-message. + + The payload will contain one public key, this is a binary string of variable length. It + also contains one or more global times, each global time is a 64 bit unsigned integer. + + The payload contains: + - 2 bytes: the length of the public key + - n bytes: the public key + - 8 bytes: the global time + - 8 bytes: the global time + - ... 
- 8 bytes: the global time
+        """
+        payload = message.payload
+        return (self._struct_H.pack(len(payload.member.public_key)), payload.member.public_key, pack("!%dQ" % len(payload.global_times), *payload.global_times))
+
+    def _decode_missing_message(self, placeholder, offset, data):
+        if len(data) < offset + 2:
+            raise DropPacket("Insufficient packet size (_decode_missing_message.1)")
+
+        key_length, = self._struct_H.unpack_from(data, offset)
+        offset += 2
+
+        if len(data) < offset + key_length:
+            raise DropPacket("Insufficient packet size (_decode_missing_message.2)")
+
+        key = data[offset:offset + key_length]
+        if not ec_check_public_bin(key):
+            raise DropPacket("Invalid cryptographic key (_decode_missing_message)")
+        member = self._community.dispersy.get_member(key)
+        if not member.has_identity(self._community):
+            raise DelayPacketByMissingMember(self._community, member.mid)
+        offset += key_length
+
+        # there must be at least one global time in the packet
+        global_time_length, mod = divmod(len(data) - offset, 8)
+        if global_time_length == 0:
+            raise DropPacket("Insufficient packet size (_decode_missing_message.3)")
+        if mod != 0:
+            raise DropPacket("Invalid packet size (_decode_missing_message)")
+
+        global_times = unpack_from("!%dQ" % global_time_length, data, offset)
+        offset += 8 * len(global_times)
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, global_times)
+
+    def _encode_missing_last_message(self, message):
+        """
+        Encode the payload for dispersy-missing-last-message.
+
+        The payload will contain one public key, a binary string of variable length. It also
+        contains the meta message that the last messages are requested from, and a counter,
+        i.e. how many of the last messages we want.
+
+        The payload contains:
+         - 2 bytes: the length of the public key
+         - n bytes: the public key
+         - 1 byte: the meta message
+         - 1 byte: the max count we want
+        """
+        payload = message.payload
+        return (self._struct_H.pack(len(payload.member.public_key)),
+                payload.member.public_key,
+                self._encode_message_map[payload.message.name].byte,
+                chr(payload.count))
+
+    def _decode_missing_last_message(self, placeholder, offset, data):
+        if len(data) < offset + 2:
+            raise DropPacket("Insufficient packet size (_decode_missing_last_message.1)")
+
+        key_length, = self._struct_H.unpack_from(data, offset)
+        offset += 2
+
+        if len(data) < offset + key_length:
+            raise DropPacket("Insufficient packet size (_decode_missing_last_message.2)")
+
+        key = data[offset:offset + key_length]
+        if not ec_check_public_bin(key):
+            raise DropPacket("Invalid cryptographic key (_decode_missing_last_message)")
+        member = self._community.dispersy.get_member(key)
+        if not member.has_identity(self._community):
+            raise DelayPacketByMissingMember(self._community, member.mid)
+        offset += key_length
+
+        if len(data) < offset + 1:
+            raise DropPacket("Insufficient packet size (_decode_missing_last_message.3)")
+        message_id = data[offset]
+        offset += 1
+        decode_functions = self._decode_message_map.get(message_id)
+        if decode_functions is None:
+            raise DropPacket("Unknown sub-message id [%d]" % ord(message_id))
+        message = decode_functions.meta
+
+        if len(data) < offset + 1:
+            raise DropPacket("Insufficient packet size (_decode_missing_last_message.4)")
+        count = ord(data[offset])
+        offset += 1
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, message, count)
+
+    def _encode_signature_request(self, message):
+        return (self._struct_H.pack(message.payload.identifier), message.payload.message.packet)
+
+    def _decode_signature_request(self, placeholder, offset, data):
+        if len(data) < offset + 2:
+            raise DropPacket("Insufficient packet size (_decode_signature_request)")
+
+        identifier, = self._struct_H.unpack_from(data, offset)
+        offset += 2
+
+        message = self._decode_message(placeholder.candidate, data[offset:], True, True)
+        offset = len(data)
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, identifier, message)
+
+    def _encode_signature_response(self, message):
+        return (self._struct_H.pack(message.payload.identifier), self.encode_message(message.payload.message))
+        # return message.payload.identifier, message.payload.signature
+
+    def _decode_signature_response(self, placeholder, offset, data):
+        if len(data) < offset + 2:
+            raise DropPacket("Insufficient packet size (_decode_signature_response)")
+
+        identifier, = self._struct_H.unpack_from(data, offset)
+        offset += 2
+
+        message = self._decode_message(placeholder.candidate, data[offset:], True, True)
+        offset = len(data)
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, identifier, message)
+
+    def _encode_identity(self, message):
+        return ()
+
+    def _decode_identity(self, placeholder, offset, data):
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload)
+
+    def _encode_missing_identity(self, message):
+        return (message.payload.mid,)
+
+    def _decode_missing_identity(self, placeholder, offset, data):
+        if len(data) < offset + 20:
+            raise DropPacket("Insufficient packet size")
+
+        return offset + 20, placeholder.meta.payload.Implementation(placeholder.meta.payload, data[offset:offset + 20])
+
+    def _encode_destroy_community(self, message):
+        if message.payload.is_soft_kill:
+            return ("s",)
+        else:
+            return ("h",)
+
+    def _decode_destroy_community(self, placeholder, offset, data):
+        if len(data) < offset + 1:
+            raise DropPacket("Insufficient packet size")
+
+        if data[offset] == "s":
+            degree = u"soft-kill"
+        else:
+            degree = u"hard-kill"
+        offset += 1
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, degree)
+
+    def _encode_authorize(self, message):
+        """
+        Encode the permission_triplets (Member, Message, permission) into an on-the-wire string.
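+
+        An illustrative example (hypothetical values, following the permission_map
+        below): granting u"permit" (bit 0001) and u"authorize" (bit 0010) on a single
+        message to one member packs into a single permission-bits byte:
+
+            permission_bits = int("0001", 2) | int("0010", 2)    # == 0b0011 == 3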
+
+        On-the-wire format:
+        [ repeat for each Member
+          2 byte member public key length
+          n byte member public key
+          1 byte length
+          [ once for each number in previous byte
+            1 byte message id
+            1 byte permission bits
+          ]
+        ]
+        """
+        permission_map = {u"permit": int("0001", 2), u"authorize": int("0010", 2), u"revoke": int("0100", 2), u"undo": int("1000", 2)}
+        members = {}
+        for member, message, permission in message.payload.permission_triplets:
+            public_key = member.public_key
+            assert isinstance(public_key, str)
+            assert message.name in self._encode_message_map
+            message_id = self._encode_message_map[message.name].byte
+            assert isinstance(message_id, str)
+            assert len(message_id) == 1
+            assert permission in permission_map
+            permission_bit = permission_map[permission]
+
+            if not public_key in members:
+                members[public_key] = {}
+
+            if not message_id in members[public_key]:
+                members[public_key][message_id] = 0
+
+            members[public_key][message_id] |= permission_bit
+
+        data = []
+        for public_key, messages in members.iteritems():
+            data.extend((self._struct_H.pack(len(public_key)), public_key, self._struct_B.pack(len(messages))))
+            for message_id, permission_bits in messages.iteritems():
+                data.extend((message_id, self._struct_B.pack(permission_bits)))
+
+        return tuple(data)
+
+    def _decode_authorize(self, placeholder, offset, data):
+        permission_map = {u"permit": int("0001", 2), u"authorize": int("0010", 2), u"revoke": int("0100", 2), u"undo": int("1000", 2)}
+        permission_triplets = []
+
+        while offset < len(data):
+            if len(data) < offset + 2:
+                raise DropPacket("Insufficient packet size")
+
+            key_length, = self._struct_H.unpack_from(data, offset)
+            offset += 2
+
+            if len(data) < offset + key_length + 1:
+                raise DropPacket("Insufficient packet size")
+
+            key = data[offset:offset + key_length]
+            if not ec_check_public_bin(key):
+                raise DropPacket("Invalid cryptographic key (_decode_authorize)")
+            member = self._community.dispersy.get_member(key)
+            if not member.has_identity(self._community):
+                raise DelayPacketByMissingMember(self._community, member.mid)
+            offset += key_length
+
+            messages_length, = self._struct_B.unpack_from(data, offset)
+            offset += 1
+
+            if len(data) < offset + messages_length * 2:
+                raise DropPacket("Insufficient packet size")
+
+            for _ in xrange(messages_length):
+                message_id = data[offset]
+                offset += 1
+                decode_functions = self._decode_message_map.get(message_id)
+                if decode_functions is None:
+                    raise DropPacket("Unknown sub-message id [%d]" % ord(message_id))
+                message = decode_functions.meta
+
+                if not isinstance(message.resolution, (PublicResolution, LinearResolution, DynamicResolution)):
+                    raise DropPacket("Invalid resolution policy")
+
+                if not isinstance(message.authentication, (MemberAuthentication, DoubleMemberAuthentication)):
+                    # it makes no sense to authorize a message that does not use the
+                    # MemberAuthentication or DoubleMemberAuthentication policy because without this
+                    # policy it is impossible to verify WHO created the message.
+                    raise DropPacket("Invalid authentication policy")
+
+                permission_bits, = self._struct_B.unpack_from(data, offset)
+                offset += 1
+
+                for permission, permission_bit in permission_map.iteritems():
+                    if permission_bit & permission_bits:
+                        permission_triplets.append((member, message, permission))
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, permission_triplets)
+
+    def _encode_revoke(self, message):
+        """
+        Encode the permission_triplets (Member, Message, permission) into an on-the-wire string.
+
+        On-the-wire format:
+        [ repeat for each Member
+          2 byte member public key length
+          n byte member public key
+          1 byte length
+          [ once for each number in previous byte
+            1 byte message id
+            1 byte permission bits
+          ]
+        ]
+        """
+        permission_map = {u"permit": int("0001", 2), u"authorize": int("0010", 2), u"revoke": int("0100", 2), u"undo": int("1000", 2)}
+        members = {}
+        for member, message, permission in message.payload.permission_triplets:
+            public_key = member.public_key
+            assert isinstance(public_key, str)
+            assert message.name in self._encode_message_map
+            message_id = self._encode_message_map[message.name].byte
+            assert isinstance(message_id, str)
+            assert len(message_id) == 1
+            assert permission in permission_map
+            permission_bit = permission_map[permission]
+
+            if not public_key in members:
+                members[public_key] = {}
+
+            if not message_id in members[public_key]:
+                members[public_key][message_id] = 0
+
+            members[public_key][message_id] |= permission_bit
+
+        data = []
+        for public_key, messages in members.iteritems():
+            data.extend((self._struct_H.pack(len(public_key)), public_key, self._struct_B.pack(len(messages))))
+            for message_id, permission_bits in messages.iteritems():
+                data.extend((message_id, self._struct_B.pack(permission_bits)))
+
+        return tuple(data)
+
+    def _decode_revoke(self, placeholder, offset, data):
+        permission_map = {u"permit": int("0001", 2), u"authorize": int("0010", 2), u"revoke": int("0100", 2), u"undo": int("1000", 2)}
+        permission_triplets = []
+
+        while offset < len(data):
+            if len(data) < offset + 2:
+                raise DropPacket("Insufficient packet size")
+
+            key_length, = self._struct_H.unpack_from(data, offset)
+            offset += 2
+
+            if len(data) < offset + key_length + 1:
+                raise DropPacket("Insufficient packet size")
+
+            key = data[offset:offset + key_length]
+            if not ec_check_public_bin(key):
+                raise DropPacket("Invalid cryptographic key (_decode_revoke)")
+            member = self._community.dispersy.get_member(key)
+            if not member.has_identity(self._community):
+                raise DelayPacketByMissingMember(self._community, member.mid)
+            offset += key_length
+
+            messages_length, = self._struct_B.unpack_from(data, offset)
+            offset += 1
+
+            if len(data) < offset + messages_length * 2:
+                raise DropPacket("Insufficient packet size")
+
+            for _ in xrange(messages_length):
+                message_id = data[offset]
+                offset += 1
+                decode_functions = self._decode_message_map.get(message_id)
+                if decode_functions is None:
+                    raise DropPacket("Unknown message id [%d]" % ord(message_id))
+                message = decode_functions.meta
+
+                if not isinstance(message.resolution, LinearResolution):
+                    # it makes no sense to revoke a message that does not use the
+                    # LinearResolution policy. Currently we have two policies, PublicResolution
+                    # (where all messages are allowed regardless of authorization) and
+                    # LinearResolution.
+                    raise DropPacket("Invalid resolution policy")
+
+                if not isinstance(message.authentication, MemberAuthentication):
+                    # it makes no sense to revoke a message that does not use the
+                    # MemberAuthentication policy because without this policy it is impossible to
+                    # verify WHO created the message.
+                    raise DropPacket("Invalid authentication policy")
+
+                permission_bits, = self._struct_B.unpack_from(data, offset)
+                offset += 1
+
+                for permission, permission_bit in permission_map.iteritems():
+                    if permission_bit & permission_bits:
+                        permission_triplets.append((member, message, permission))
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, permission_triplets)
+
+    def _encode_undo_own(self, message):
+        return (self._struct_Q.pack(message.payload.global_time),)
+
+    def _decode_undo_own(self, placeholder, offset, data):
+        # use the member in the Authentication policy
+        member = placeholder.authentication.member
+
+        if len(data) < offset + 8:
+            raise DropPacket("Insufficient packet size")
+
+        global_time, = self._struct_Q.unpack_from(data, offset)
+        offset += 8
+
+        if not global_time < placeholder.distribution.global_time:
+            raise DropPacket("Invalid global time (trying to apply undo to the future)")
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, global_time)
+
+    def _encode_undo_other(self, message):
+        public_key = message.payload.member.public_key
+        assert message.payload.member.public_key
+        return (self._struct_H.pack(len(public_key)), public_key, self._struct_Q.pack(message.payload.global_time))
+
+    def _decode_undo_other(self, placeholder, offset, data):
+        if len(data) < offset + 2:
+            raise DropPacket("Insufficient packet size")
+
+        key_length, = self._struct_H.unpack_from(data, offset)
+        offset += 2
+
+        if len(data) < offset + key_length:
+            raise DropPacket("Insufficient packet size")
+
+        public_key = data[offset:offset + key_length]
+        if not ec_check_public_bin(public_key):
+            raise DropPacket("Invalid cryptographic key (_decode_undo_other)")
+        member = self._community.dispersy.get_member(public_key)
+        if not member.has_identity(self._community):
+            raise DelayPacketByMissingMember(self._community, member.mid)
+        offset += key_length
+
+        if len(data) < offset + 8:
+            raise DropPacket("Insufficient packet size")
+
+        global_time, = self._struct_Q.unpack_from(data, offset)
+        offset += 8
+
+        if not global_time < placeholder.distribution.global_time:
+            raise DropPacket("Invalid global time (trying to apply undo to the future)")
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, global_time)
+
+    def _encode_missing_proof(self, message):
+        payload = message.payload
+        return (self._struct_QH.pack(payload.global_time, len(payload.member.public_key)), payload.member.public_key)
+
+    def _decode_missing_proof(self, placeholder, offset, data):
+        if len(data) < offset + 10:
+            raise DropPacket("Insufficient packet size (_decode_missing_proof)")
+
+        global_time, key_length = self._struct_QH.unpack_from(data, offset)
+        offset += 10
+
+        key = data[offset:offset + key_length]
+        if not ec_check_public_bin(key):
+            raise DropPacket("Invalid cryptographic key (_decode_missing_proof)")
+        member = self._community.dispersy.get_member(key)
+        if not member.has_identity(self._community):
+            raise DelayPacketByMissingMember(self._community, member.mid)
+        offset += key_length
+
+        return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, member, global_time)
+
+    def
_encode_dynamic_settings(self, message): + data = [] + for meta, policy in message.payload.policies: + assert meta.name in self._encode_message_map, ("unknown message", meta.name) + assert isinstance(policy, (PublicResolution, LinearResolution)) + assert isinstance(meta.resolution, DynamicResolution) + assert policy in meta.resolution.policies, "the given policy must be one available at meta message creation" + meta_id = self._encode_message_map[meta.name].byte + # currently only supporting resolution policy changes + policy_type = "r" + policy_index = meta.resolution.policies.index(policy) + data.append(self._struct_ccB.pack(meta_id, policy_type, policy_index)) + return data + + def _decode_dynamic_settings(self, placeholder, offset, data): + if len(data) < offset + 3: + raise DropPacket("Insufficient packet size (_decode_dynamic_settings)") + + policies = [] + while len(data) >= offset + 3: + meta_id, policy_type, policy_index = self._struct_ccB.unpack_from(data, offset) + decode_functions = self._decode_message_map.get(meta_id) + if decode_functions is None: + raise DropPacket("Unknown meta id [%d]" % ord(meta_id)) + meta = decode_functions.meta + if not isinstance(meta.resolution, DynamicResolution): + raise DropPacket("Invalid meta id [%d]" % ord(meta_id)) + + # currently only supporting resolution policy changes + if not policy_type == "r": + raise DropPacket("Invalid policy type") + if not policy_index < len(meta.resolution.policies): + raise DropPacket("Invalid policy id") + policy = meta.resolution.policies[policy_index] + + offset += 3 + + policies.append((meta, policy)) + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, policies) + + def _encode_introduction_request(self, message): + payload = message.payload + + data = [inet_aton(payload.destination_address[0]), self._struct_H.pack(payload.destination_address[1]), + inet_aton(payload.source_lan_address[0]), self._struct_H.pack(payload.source_lan_address[1]), + inet_aton(payload.source_wan_address[0]), self._struct_H.pack(payload.source_wan_address[1]), + self._struct_B.pack(self._encode_advice_map[payload.advice] | self._encode_connection_type_map[payload.connection_type] | self._encode_sync_map[payload.sync]), + self._struct_H.pack(payload.identifier)] + + # add optional sync + if payload.sync: + assert payload.bloom_filter.size % 8 == 0 + assert 0 < payload.bloom_filter.functions < 256, "assuming that we choose BITS to ensure the bloom filter will fit in one MTU, it is unlikely that there will be more than 255 functions. 
hence we can encode this in one byte" + assert len(payload.bloom_filter.prefix) == 1, "must have a one character prefix" + assert len(payload.bloom_filter.bytes) == int(ceil(payload.bloom_filter.size / 8)) + data.extend((self._struct_QQHHBH.pack(payload.time_low, payload.time_high, payload.modulo, payload.offset, payload.bloom_filter.functions, payload.bloom_filter.size), + payload.bloom_filter.prefix, payload.bloom_filter.bytes)) + + return data + + def _decode_introduction_request(self, placeholder, offset, data): + if len(data) < offset + 21: + raise DropPacket("Insufficient packet size") + + destination_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + source_lan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + source_wan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + flags, identifier = self._struct_BH.unpack_from(data, offset) + offset += 3 + + advice = self._decode_advice_map.get(flags & int("1", 2)) + if advice is None: + raise DropPacket("Invalid advice flag") + + connection_type = self._decode_connection_type_map.get(flags & int("11000000", 2)) + if connection_type is None: + raise DropPacket("Invalid connection type flag") + + sync = self._decode_sync_map.get(flags & int("10", 2)) + if sync is None: + raise DropPacket("Invalid sync flag") + if sync: + if len(data) < offset + 24: + raise DropPacket("Insufficient packet size") + + time_low, time_high, modulo, modulo_offset, functions, size = self._struct_QQHHBH.unpack_from(data, offset) + offset += 23 + + prefix = data[offset] + offset += 1 + + if not time_low > 0: + raise DropPacket("Invalid time_low value") + if not (time_high == 0 or time_low <= time_high): + raise DropPacket("Invalid time_high value") + if not 0 < modulo: + raise DropPacket("Invalid modulo value") + if not 0 <= modulo_offset < modulo: + raise DropPacket("Invalid offset value") + if not 0 < functions: + raise DropPacket("Invalid functions value") + if not 0 < size: + raise DropPacket("Invalid size value") + if not size % 8 == 0: + raise DropPacket("Invalid size value, must be a multiple of eight") + + length = int(ceil(size / 8)) + if not length == len(data) - offset: + raise DropPacket("Invalid number of bytes available") + + bloom_filter = BloomFilter(data[offset:offset + length], functions, prefix=prefix) + offset += length + + sync = (time_low, time_high, modulo, modulo_offset, bloom_filter) + + else: + sync = None + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, destination_address, source_lan_address, source_wan_address, advice, connection_type, sync, identifier) + + def _encode_introduction_response(self, message): + payload = message.payload + return (inet_aton(payload.destination_address[0]), self._struct_H.pack(payload.destination_address[1]), + inet_aton(payload.source_lan_address[0]), self._struct_H.pack(payload.source_lan_address[1]), + inet_aton(payload.source_wan_address[0]), self._struct_H.pack(payload.source_wan_address[1]), + inet_aton(payload.lan_introduction_address[0]), self._struct_H.pack(payload.lan_introduction_address[1]), + inet_aton(payload.wan_introduction_address[0]), self._struct_H.pack(payload.wan_introduction_address[1]), + self._struct_B.pack(self._encode_connection_type_map[payload.connection_type] | self._encode_tunnel_map[payload.tunnel]), + self._struct_H.pack(payload.identifier)) + + def 
_decode_introduction_response(self, placeholder, offset, data): + if len(data) < offset + 33: + raise DropPacket("Insufficient packet size") + + destination_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + source_lan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + source_wan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + lan_introduction_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + wan_introduction_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + flags, identifier, = self._struct_BH.unpack_from(data, offset) + offset += 3 + + connection_type = self._decode_connection_type_map.get(flags & int("11000000", 2)) + if connection_type is None: + raise DropPacket("Invalid connection type flag") + + tunnel = self._decode_tunnel_map.get(flags & int("100", 2)) + if tunnel is None: + raise DropPacket("Invalid tunnel flag") + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, destination_address, source_lan_address, source_wan_address, lan_introduction_address, wan_introduction_address, connection_type, tunnel, identifier) + + def _encode_puncture_request(self, message): + payload = message.payload + return (inet_aton(payload.lan_walker_address[0]), self._struct_H.pack(payload.lan_walker_address[1]), + inet_aton(payload.wan_walker_address[0]), self._struct_H.pack(payload.wan_walker_address[1]), + self._struct_H.pack(payload.identifier)) + + def _decode_puncture_request(self, placeholder, offset, data): + if len(data) < offset + 14: + raise DropPacket("Insufficient packet size") + + lan_walker_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + wan_walker_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + identifier, = self._struct_H.unpack_from(data, offset) + offset += 2 + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, lan_walker_address, wan_walker_address, identifier) + + def _encode_puncture(self, message): + payload = message.payload + return (inet_aton(payload.source_lan_address[0]), self._struct_H.pack(payload.source_lan_address[1]), + inet_aton(payload.source_wan_address[0]), self._struct_H.pack(payload.source_wan_address[1]), + self._struct_H.pack(payload.identifier)) + + def _decode_puncture(self, placeholder, offset, data): + if len(data) < offset + 14: + raise DropPacket("Insufficient packet size") + + source_lan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + source_wan_address = (inet_ntoa(data[offset:offset + 4]), self._struct_H.unpack_from(data, offset + 4)[0]) + offset += 6 + + identifier, = self._struct_H.unpack_from(data, offset) + offset += 2 + + return offset, placeholder.meta.payload.Implementation(placeholder.meta.payload, source_lan_address, source_wan_address, identifier) + + # + # Encoding + # + + def _encode_no_authentication(self, container, message): + pass + + def _encode_member_authentication(self, container, message): + if message.authentication.encoding == "sha1": + container.append(message.authentication.member.mid) + elif message.authentication.encoding == "bin": + assert 
message.authentication.member.public_key + assert ec_check_public_bin(message.authentication.member.public_key), message.authentication.member.public_key.encode("HEX") + container.extend((self._struct_H.pack(len(message.authentication.member.public_key)), message.authentication.member.public_key)) + else: + raise NotImplementedError(message.authentication.encoding) + + def _encode_double_member_authentication(self, container, message): + if message.authentication.encoding == "sha1": + container.extend([member.mid for member in message.authentication.members]) + elif message.authentication.encoding == "bin": + assert message.authentication.members[0].public_key + assert message.authentication.members[1].public_key + assert ec_check_public_bin(message.authentication.members[0].public_key), message.authentication.members[0].public_key.encode("HEX") + assert ec_check_public_bin(message.authentication.members[1].public_key), message.authentication.members[1].public_key.encode("HEX") + container.extend((self._struct_HH.pack(len(message.authentication.members[0].public_key), len(message.authentication.members[1].public_key)), + message.authentication.members[0].public_key, + message.authentication.members[1].public_key)) + else: + raise NotImplementedError(message.authentication.encoding) + + def _encode_full_sync_distribution(self, container, message): + assert message.distribution.global_time + # 23/04/12 Boudewijn: testcases generate global time values that have not been claimed + # if message.distribution.global_time > message.community.global_time: + # did not use community.claim_global_time() FAIL + # raise ValueError("incorrect global_time value chosen") + if message.distribution.enable_sequence_number: + assert message.distribution.sequence_number + container.append(self._struct_QL.pack(message.distribution.global_time, message.distribution.sequence_number)) + else: + container.append(self._struct_Q.pack(message.distribution.global_time)) + + def _encode_last_sync_distribution(self, container, message): + assert message.distribution.global_time + # 23/04/12 Boudewijn: testcases generate global time values that have not been claimed + # if message.distribution.global_time > message.community.global_time: + # did not use community.claim_global_time() FAIL + # raise ValueError("incorrect global_time value chosen") + container.append(self._struct_Q.pack(message.distribution.global_time)) + + def _encode_direct_distribution(self, container, message): + assert message.distribution.global_time + # 23/04/12 Boudewijn: testcases generate global time values that have not been claimed + # if message.distribution.global_time > message.community.global_time: + # did not use community.claim_global_time() FAIL + # raise ValueError("incorrect global_time value chosen") + container.append(self._struct_Q.pack(message.distribution.global_time)) + + def _encode_public_resolution(self, container, message): + pass + + def _encode_linear_resolution(self, container, message): + pass + + def _encode_dynamic_resolution(self, container, message): + assert isinstance(message.resolution.policy, (PublicResolution.Implementation, LinearResolution.Implementation)), message.resolution.policy + assert not isinstance(message.resolution.policy, DynamicResolution), message.resolution.policy + index = message.resolution.policies.index(message.resolution.policy.meta) + container.append(chr(index)) + # both the public and the linear resolution do not require any storage + + def _encode_no_authentication_signature(self, 
container, message, sign):
+        return "".join(container)
+
+    def _encode_member_authentication_signature(self, container, message, sign):
+        assert message.authentication.member.private_key, (message.authentication.member.database_id, message.authentication.member.mid.encode("HEX"), id(message.authentication.member))
+        # the serialized payload is needed both for signing and for the zero-padded variant
+        data = "".join(container)
+        if sign:
+            signature = message.authentication.member.sign(data)
+            message.authentication.set_signature(signature)
+            return data + signature
+
+        else:
+            return data + "\x00" * message.authentication.member.signature_length
+
+    def _encode_double_member_authentication_signature(self, container, message, sign):
+        data = "".join(container)
+        signatures = []
+        for signature, member in message.authentication.signed_members:
+            if signature:
+                signatures.append(signature)
+            elif sign and member.private_key:
+                signature = member.sign(data)
+                message.authentication.set_signature(member, signature)
+                signatures.append(signature)
+            else:
+                signatures.append("\x00" * member.signature_length)
+        return data + "".join(signatures)
+
+    def can_encode_message(self, message):
+        """
+        Returns True when MESSAGE can be encoded using this conversion.
+        """
+        assert isinstance(message, (Message, Message.Implementation)), type(message)
+        return message.name in self._encode_message_map
+
+    def encode_message(self, message, sign=True):
+        assert isinstance(message, Message.Implementation), message
+        assert message.name in self._encode_message_map, message.name
+        encode_functions = self._encode_message_map[message.name]
+
+        # community prefix, message-id
+        container = [self._prefix, encode_functions.byte]
+
+        # authentication
+        encode_functions.authentication(container, message)
+
+        # resolution
+        encode_functions.resolution(container, message)
+
+        # distribution
+        encode_functions.distribution(container, message)
+
+        # payload
+        payload = encode_functions.payload(message)
+        assert isinstance(payload, (tuple, list)), (type(payload), encode_functions.payload)
+        assert all(isinstance(x, str) for x in payload)
+        container.extend(payload)
+
+        # sign
+        packet = encode_functions.signature(container, message, sign)
+
+        if len(packet) > 1500 - 60 - 8:
+            logger.warning("Packet size for %s exceeds MTU - IP header - UDP header (%d bytes)", message.name, len(packet))
+
+        return packet
+
+    #
+    # Decoding
+    #
+
+    def _decode_full_sync_distribution(self, placeholder):
+        distribution = placeholder.meta.distribution
+        if distribution.enable_sequence_number:
+            global_time, sequence_number = self._struct_QL.unpack_from(placeholder.data, placeholder.offset)
+            if not global_time:
+                raise DropPacket("Invalid global time value (_decode_full_sync_distribution)")
+            if not sequence_number:
+                raise DropPacket("Invalid sequence number value (_decode_full_sync_distribution)")
+            placeholder.offset += 12
+            placeholder.distribution = distribution.Implementation(distribution, global_time, sequence_number)
+
+        else:
+            global_time, = self._struct_Q.unpack_from(placeholder.data, placeholder.offset)
+            if not global_time:
+                raise DropPacket("Invalid global time value (_decode_full_sync_distribution)")
+            placeholder.offset += 8
+            placeholder.distribution = distribution.Implementation(distribution, global_time)
+
+    def _decode_last_sync_distribution(self, placeholder):
+        global_time, = self._struct_Q.unpack_from(placeholder.data, placeholder.offset)
+        if not global_time:
+            raise DropPacket("Invalid global time value (_decode_last_sync_distribution)")
+        placeholder.offset += 8
+        placeholder.distribution = LastSyncDistribution.Implementation(placeholder.meta.distribution, global_time)
+
+    def _decode_direct_distribution(self, placeholder):
+        global_time, = self._struct_Q.unpack_from(placeholder.data, placeholder.offset)
+        placeholder.offset += 8
+        placeholder.distribution = DirectDistribution.Implementation(placeholder.meta.distribution, global_time)
+
+    def _decode_public_resolution(self, placeholder):
+        placeholder.resolution = PublicResolution.Implementation(placeholder.meta.resolution)
+
+    def _decode_linear_resolution(self, placeholder):
+        placeholder.resolution = LinearResolution.Implementation(placeholder.meta.resolution)
+
+    def _decode_dynamic_resolution(self, placeholder):
+        if len(placeholder.data) < placeholder.offset + 1:
+            raise DropPacket("Insufficient packet size (_decode_dynamic_resolution)")
+
+        index = ord(placeholder.data[placeholder.offset])
+        if index >= len(placeholder.meta.resolution.policies):
+            raise DropPacket("Invalid policy index")
+        meta_policy = placeholder.meta.resolution.policies[index]
+        placeholder.offset += 1
+
+        assert isinstance(meta_policy, (PublicResolution, LinearResolution)), meta_policy
+        assert not isinstance(meta_policy, DynamicResolution), meta_policy
+        # both the public and the linear resolution do not require any storage
+        policy = meta_policy.Implementation(meta_policy)
+
+        placeholder.resolution = DynamicResolution.Implementation(placeholder.meta.resolution, policy)
+
+    def _decode_no_authentication(self, placeholder):
+        placeholder.first_signature_offset = len(placeholder.data)
+        placeholder.authentication = NoAuthentication.Implementation(placeholder.meta.authentication)
+
+    def _decode_member_authentication(self, placeholder):
+        authentication = placeholder.meta.authentication
+        offset = placeholder.offset
+        data = placeholder.data
+
+        if authentication.encoding == "sha1":
+            if len(data) < offset + 20:
+                raise DropPacket("Insufficient packet size (_decode_member_authentication sha1)")
+            member_id = data[offset:offset + 20]
+            offset += 20
+
+            members = [member for member in self._community.dispersy.get_members_from_id(member_id) if member.has_identity(self._community)]
+            if not members:
+                raise DelayPacketByMissingMember(self._community, member_id)
+
+            # signatures are enabled, verify that the signature matches the member sha1
+            # identifier
+            for member in members:
+                first_signature_offset = len(data) - member.signature_length
+                if (not placeholder.verify and len(members) == 1) or member.verify(data, data[first_signature_offset:], length=first_signature_offset):
+                    placeholder.offset = offset
+                    placeholder.first_signature_offset = first_signature_offset
+                    placeholder.authentication = MemberAuthentication.Implementation(authentication, member, is_signed=True)
+                    return
+
+            raise DelayPacketByMissingMember(self._community, member_id)
+
+        elif authentication.encoding == "bin":
+            if len(data) < offset + 2:
+                raise DropPacket("Insufficient packet size (_decode_member_authentication bin)")
+            key_length, = self._struct_H.unpack_from(data, offset)
+            offset += 2
+            if len(data) < offset + key_length:
+                raise DropPacket("Insufficient packet size (_decode_member_authentication bin)")
+            key = data[offset:offset + key_length]
+            offset += key_length
+
+            if not ec_check_public_bin(key):
+                raise DropPacket("Invalid cryptographic key (_decode_member_authentication)")
+
+            member = self._community.dispersy.get_member(key)
+
+            # TODO we should ensure that member.has_identity(self._community), however, the
+            # exception is the dispersy-identity message. 
hence we need the placeholder parameter + # to check this + first_signature_offset = len(data) - member.signature_length + + # signatures are enabled, verify that the signature matches the member sha1 identifier + if not placeholder.verify or member.verify(data, data[first_signature_offset:], length=first_signature_offset): + placeholder.offset = offset + placeholder.first_signature_offset = first_signature_offset + placeholder.authentication = MemberAuthentication.Implementation(authentication, member, is_signed=True) + return + + raise DropPacket("Invalid signature") + + else: + raise NotImplementedError(authentication.encoding) + + def _decode_double_member_authentication(self, placeholder): + authentication = placeholder.meta.authentication + offset = placeholder.offset + data = placeholder.data + + if authentication.encoding == "sha1": + def iter_options(members_ids): + """ + members_ids = [[m1_a, m1_b], [m2_a], [m3_a, m3_b]] + --> m1_a, m2_a, m3_a + --> m1_a, m2_a, m3_b + --> m1_b, m2_a, m3_a + --> m1_b, m2_a, m3_b + """ + if members_ids: + for member_id in members_ids[0]: + for others in iter_options(members_ids[1:]): + yield [member_id] + others + else: + yield [] + + members_ids = [] + for _ in range(2): + member_id = data[offset:offset + 20] + members = [member for member in self._community.dispersy.get_members_from_id(member_id) if member.has_identity(self._community)] + if not members: + raise DelayPacketByMissingMember(self._community, member_id) + offset += 20 + members_ids.append(members) + + for members in iter_options(members_ids): + # try this member combination + first_signature_offset = len(data) - sum([member.signature_length for member in members]) + signature_offset = first_signature_offset + signatures = ["", ""] + found_valid_combination = True + for index, member in zip(range(2), members): + signature = data[signature_offset:signature_offset + member.signature_length] + # logging.info("INDEX: %d", index) + # logging.info("%s", signature.encode('HEX')) + if placeholder.allow_empty_signature and signature == "\x00" * member.signature_length: + signatures[index] = "" + + elif (not placeholder.verify and len(members) == 1) or member.verify(data, data[signature_offset:signature_offset + member.signature_length], length=first_signature_offset): + signatures[index] = signature + + else: + found_valid_combination = False + break + signature_offset += member.signature_length + + # found a valid combination + if found_valid_combination: + placeholder.offset = offset + placeholder.first_signature_offset = first_signature_offset + placeholder.authentication = DoubleMemberAuthentication.Implementation(placeholder.meta.authentication, members, signatures=signatures) + return + + # we have no idea which member we are missing, hence we request a random one. 
in the future
+            # we should request all members instead
+            raise DelayPacketByMissingMember(self._community, choice(members_ids[0]))
+
+        elif authentication.encoding == "bin":
+            if len(data) < offset + 4:
+                raise DropPacket("Insufficient packet size (_decode_double_member_authentication bin)")
+            key1_length, key2_length = self._struct_HH.unpack_from(data, offset)
+            offset += 4
+            if len(data) < offset + key1_length + key2_length:
+                raise DropPacket("Insufficient packet size (_decode_double_member_authentication bin)")
+            key1 = data[offset:offset + key1_length]
+            offset += key1_length
+            key2 = data[offset:offset + key2_length]
+            offset += key2_length
+
+            if not ec_check_public_bin(key1):
+                raise DropPacket("Invalid cryptographic key1 (_decode_double_member_authentication)")
+            if not ec_check_public_bin(key2):
+                raise DropPacket("Invalid cryptographic key2 (_decode_double_member_authentication)")
+
+            members = [self._community.dispersy.get_member(key1), self._community.dispersy.get_member(key2)]
+
+            second_signature_offset = len(data) - members[1].signature_length
+            first_signature_offset = second_signature_offset - members[0].signature_length
+            signatures = [data[first_signature_offset:second_signature_offset], data[second_signature_offset:]]
+
+            for index, member in enumerate(members):
+                if placeholder.allow_empty_signature and signatures[index] == "\x00" * member.signature_length:
+                    signatures[index] = ""
+
+                elif placeholder.verify and not member.verify(data, signatures[index], length=first_signature_offset):
+                    raise DropPacket("Signature does not match public key")
+
+            placeholder.offset = offset
+            placeholder.first_signature_offset = first_signature_offset
+            placeholder.authentication = DoubleMemberAuthentication.Implementation(placeholder.meta.authentication, members, signatures=signatures)
+
+        else:
+            raise NotImplementedError(authentication.encoding)
+
+    def _decode_empty_destination(self, placeholder):
+        placeholder.destination = placeholder.meta.destination.Implementation(placeholder.meta.destination)
+
+    def _decode_message(self, candidate, data, verify, allow_empty_signature):
+        """
+        Decode a binary string into a Message structure, with some
+        Dispersy specific parameters.
+
+        When VERIFY is True the signature(s), if applicable, are verified.  Otherwise the
+        signature(s) are ignored.
+
+        Invalid signature(s) will cause DropPacket to be raised, except when ALLOW_EMPTY_SIGNATURE
+        is True and the failed signature consists of \x00 bytes.
+        """
+        assert isinstance(data, str)
+        assert isinstance(verify, bool)
+        assert isinstance(allow_empty_signature, bool)
+        assert len(data) >= 22
+        assert data[:22] == self._prefix, (data[:22].encode("HEX"), self._prefix.encode("HEX"))
+
+        if len(data) < 100:
+            raise DropPacket("Packet is too small to decode")
+
+        # meta_message
+        decode_functions = self._decode_message_map.get(data[22])
+        if decode_functions is None:
+            raise DropPacket("Unknown message code %d" % ord(data[22]))
+
+        # placeholder
+        placeholder = self.Placeholder(candidate, decode_functions.meta, 23, data, verify, allow_empty_signature)
+
+        # authentication
+        decode_functions.authentication(placeholder)
+        assert isinstance(placeholder.authentication, Authentication.Implementation)
+        # drop packet if the creator is blacklisted.  we would prefer to do this in dispersy.py,
+        # however, decoding the payload can cause DelayPacketByMissingMessage to be raised for
+        # dispersy-undo messages, and the last thing that we want is to request messages from a
+        # blacklisted member
+        if isinstance(placeholder.meta.authentication, (MemberAuthentication, DoubleMemberAuthentication)) and placeholder.authentication.member.must_blacklist:
+            self._community.dispersy.send_malicious_proof(self._community, placeholder.authentication.member, candidate)
+            raise DropPacket("Creator is blacklisted")
+
+        # resolution
+        decode_functions.resolution(placeholder)
+        assert isinstance(placeholder.resolution, Resolution.Implementation)
+
+        # destination
+        decode_functions.destination(placeholder)
+        assert isinstance(placeholder.destination, Destination.Implementation)
+
+        # distribution
+        decode_functions.distribution(placeholder)
+        assert isinstance(placeholder.distribution, Distribution.Implementation)
+
+        # payload
+        placeholder.offset, placeholder.payload = decode_functions.payload(placeholder, placeholder.offset, placeholder.data[:placeholder.first_signature_offset])
+        if placeholder.offset != placeholder.first_signature_offset:
+            logger.warning("invalid packet size for %s data:%d; offset:%d", placeholder.meta.name, placeholder.first_signature_offset, placeholder.offset)
+            raise DropPacket("Invalid packet size (there are unconverted bytes)")
+
+        if __debug__:
+            from .payload import Payload
+            assert isinstance(placeholder.payload, Payload.Implementation), type(placeholder.payload)
+            assert isinstance(placeholder.offset, (int, long))
+
+        return placeholder.meta.Implementation(placeholder.meta, placeholder.authentication, placeholder.resolution, placeholder.distribution, placeholder.destination, placeholder.payload, conversion=self, candidate=candidate, packet=placeholder.data)
+
+    def can_decode_message(self, data):
+        """
+        Returns True when DATA can be decoded using this conversion.
+        """
+        assert isinstance(data, str), type(data)
+        return (len(data) >= 23 and
+                data[:22] == self._prefix and
+                data[22] in self._decode_message_map)
+
+    def decode_meta_message(self, data):
+        """
+        Decode a binary string into a Message instance.
+        """
+        assert isinstance(data, str)
+        assert data[:22] == self._prefix, (data[:22].encode("HEX"), self._prefix.encode("HEX"))
+
+        if len(data) < 23:
+            raise DropPacket("Packet is too small to decode")
+
+        # meta_message
+        decode_functions = self._decode_message_map.get(data[22])
+        if decode_functions is None:
+            raise DropPacket("Unknown message code %d" % ord(data[22]))
+
+        return decode_functions.meta
+
+    def decode_message(self, candidate, data, verify=True):
+        """
+        Decode a binary string into a Message.Implementation structure.
+        """
+        assert isinstance(candidate, Candidate), candidate
+        assert isinstance(data, str), data
+        assert isinstance(verify, bool)
+        return self._decode_message(candidate, data, verify, False)
+
+    def __str__(self):
+        return "<%s %s%s [%s]>" % (self.__class__.__name__, self.dispersy_version.encode("HEX"), self.community_version.encode("HEX"), ", ".join(self._encode_message_map.iterkeys()))
+
+
+class DefaultConversion(BinaryConversion):
+
+    """
+    This conversion class is initially used to encode some Dispersy
+    specific messages during the creation of a new Community
+    (authorizing the initial member).  Afterwards it is usually
+    replaced by a Community specific conversion that also supplies
+    payload conversion for the Community specific messages.
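+
+    A community-specific conversion typically subclasses BinaryConversion and
+    registers one byte plus an encode/decode pair for each of its payloads.
+    A minimal sketch (the message name u"my-message" and the two codec methods
+    are hypothetical, and define_meta_message is assumed to be the registration
+    helper that fills _encode_message_map and _decode_message_map):
+
+    >>> class MyConversion(BinaryConversion):
+    ...     def __init__(self, community):
+    ...         super(MyConversion, self).__init__(community, "\x01")
+    ...         self.define_meta_message(chr(1),
+    ...                                  community.get_meta_message(u"my-message"),
+    ...                                  self._encode_my_payload,
+    ...                                  self._decode_my_payload)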
+    """
+    def __init__(self, community):
+        super(DefaultConversion, self).__init__(community, "\x00")
diff -Nru tribler-6.2.0/Tribler/dispersy/crypto.py tribler-6.2.0/Tribler/dispersy/crypto.py
--- tribler-6.2.0/Tribler/dispersy/crypto.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/crypto.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,339 @@
+"""
+The crypto module provides a layer between Dispersy and low level cryptographic features.
+
+@author: Boudewijn Schoon
+@organization: Technical University Delft
+@contact: dispersy@frayja.com
+"""
+
+if False:
+    #
+    # disable crypto
+    #
+
+    from random import random
+
+    _curves = {u"very-low": 42,
+               u"low": 60,
+               u"medium": 104,
+               u"high": 144}
+
+    def ec_generate_key(security):
+        assert isinstance(security, unicode)
+        assert security in _curves
+
+        length = _curves[security]
+        private_key = "".join(chr(int(random() * 2 ** 8)) for _ in xrange(2 * length))
+        public_key = private_key[:length]
+
+        return (length, public_key, private_key)
+
+    def ec_public_pem_to_public_bin(pem):
+        return pem
+
+    def ec_private_pem_to_private_bin(pem):
+        return pem
+
+    def ec_to_private_pem(ec, cipher=None, password=None):
+        return ";".join((str(ec[0]), ec[1].encode("HEX"), ec[2].encode("HEX")))
+
+    def ec_to_public_pem(ec):
+        return ";".join((str(ec[0]), ec[1].encode("HEX"), ""))
+
+    def ec_from_private_pem(pem, password=None):
+        length, public_key, private_key = pem.split(";")
+        return int(length), public_key.decode("HEX"), private_key.decode("HEX")
+
+    def ec_from_public_pem(pem):
+        length, public_key, private_key = pem.split(";")
+        assert private_key == ""
+        return int(length), public_key.decode("HEX"), private_key.decode("HEX")
+
+    def ec_to_private_bin(ec):
+        return ec_to_private_pem(ec)
+
+    def ec_to_public_bin(ec):
+        return ec_to_public_pem(ec)
+
+    def ec_check_private_bin(string):
+        try:
+            return bool(ec_from_private_bin(string))
+        except:
+            return False
+
+    def ec_check_public_bin(string):
+        try:
+            return bool(ec_from_public_bin(string))
+        except:
+            return False
+
+    def ec_from_private_bin(string):
+        return ec_from_private_pem(string)
+
+    def ec_from_public_bin(string):
+        return ec_from_public_pem(string)
+
+    def ec_signature_length(ec):
+        return ec[0]
+
+    def ec_sign(ec, digest):
+        return "".join(chr(int(random() * 2 ** 8)) for _ in xrange(ec[0]))
+
+    def ec_verify(ec, digest, signature):
+        return True
+
+else:
+    #
+    # enable crypto
+    #
+
+    from hashlib import sha1, sha224, sha256, sha512, md5
+    from math import ceil
+    # from M2Crypto.m2 import bn_to_bin, bin_to_bn, bn_to_mpi, mpi_to_bn
+    from M2Crypto import EC, BIO
+    from struct import Struct
+
+    _struct_L = Struct(">L")
+
+    # Allow all available curves.
+    _curves = dict((unicode(curve), getattr(EC, curve)) for curve in dir(EC) if curve.startswith("NID_"))
+
+    # We want to provide a few default curves.  We will change these curves as new ones become
+    # available and old ones become too small to provide sufficient security.
+    _curves.update({u"very-low": EC.NID_sect163k1,
+                    u"low": EC.NID_sect233k1,
+                    u"medium": EC.NID_sect409k1,
+                    u"high": EC.NID_sect571r1})
+
+    def _progress(*args):
+        "Called when no feedback needs to be given."
+        pass
+
+    def ec_generate_key(security):
+        """
+        Generate a new Elliptic Curve object with a new public / private
+        key pair.
+
+        Security can be u'very-low', u'low', u'medium', or u'high' depending on how secure you
+        need your Elliptic Curve to be.
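+
+        For example (an editorial sketch; the expected byte count follows from
+        the ec_signature_length function defined later in this module):
+
+        >>> ec = ec_generate_key(u"medium")
+        >>> ec_signature_length(ec)
+        104
+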
+        Currently these values translate into:
+         - very-low: NID_sect163k1  ~42 byte signatures
+         - low: NID_sect233k1  ~60 byte signatures
+         - medium: NID_sect409k1  ~104 byte signatures
+         - high: NID_sect571r1  ~144 byte signatures
+
+        @param security: Level of security {u'very-low', u'low', u'medium', or u'high'}.
+        @type security: unicode
+
+        @note: The NID must always be 160 bits or more, otherwise it will not be able to sign a
+        sha1 digest.
+        """
+        assert isinstance(security, unicode)
+        assert security in _curves
+        ec = EC.gen_params(_curves[security])
+        ec.gen_key()
+        return ec
+
+    def ec_public_pem_to_public_bin(pem):
+        "Convert a public key in PEM format into a public key in binary format."
+        return "".join(pem.split("\n")[1:-2]).decode("BASE64")
+
+    def ec_private_pem_to_private_bin(pem):
+        """
+        Convert a private key in PEM format into a private key in binary format.
+
+        @note: Encrypted PEMs are NOT supported and will silently fail.
+        """
+        return "".join(pem.split("\n")[1:-2]).decode("BASE64")
+
+    def ec_to_private_pem(ec, cipher=None, password=None):
+        "Get the private key in PEM format."
+        def get_password(*args):
+            return password or ""
+        bio = BIO.MemoryBuffer()
+        ec.save_key_bio(bio, cipher, get_password)
+        return bio.read_all()
+
+    def ec_to_public_pem(ec):
+        "Get the public key in PEM format."
+        bio = BIO.MemoryBuffer()
+        ec.save_pub_key_bio(bio)
+        return bio.read_all()
+
+    def ec_from_private_pem(pem, password=None):
+        "Get the EC from a private PEM."
+        def get_password(*args):
+            return password or ""
+        return EC.load_key_bio(BIO.MemoryBuffer(pem), get_password)
+
+    def ec_from_public_pem(pem):
+        "Get the EC from a public PEM."
+        return EC.load_pub_key_bio(BIO.MemoryBuffer(pem))
+
+    def ec_to_private_bin(ec):
+        "Get the private key in binary format."
+        return ec_private_pem_to_private_bin(ec_to_private_pem(ec))
+
+    def ec_to_public_bin(ec):
+        "Get the public key in binary format."
+        return ec_public_pem_to_public_bin(ec_to_public_pem(ec))
+
+    def ec_check_private_bin(string):
+        "Returns True if the input is a valid private key"
+        try:
+            ec_from_private_bin(string)
+        except:
+            return False
+        return True
+
+    def ec_check_public_bin(string):
+        "Returns True if the input is a valid public key"
+        try:
+            ec_from_public_bin(string)
+        except:
+            return False
+        return True
+
+    def ec_from_private_bin(string):
+        "Get the EC from a private key in binary format."
+        return ec_from_private_pem("".join(("-----BEGIN EC PRIVATE KEY-----\n", string.encode("BASE64"), "-----END EC PRIVATE KEY-----\n")))
+
+    def ec_from_public_bin(string):
+        "Get the EC from a public key in binary format."
+        return ec_from_public_pem("".join(("-----BEGIN PUBLIC KEY-----\n", string.encode("BASE64"), "-----END PUBLIC KEY-----\n")))
+
+    def ec_signature_length(ec):
+        """
+        Returns the length, in bytes, of each signature made using EC.
+        """
+        return int(ceil(len(ec) / 8.0)) * 2
+
+    def ec_sign(ec, digest):
+        """
+        Returns the signature of DIGEST made using EC.
+        """
+        length = int(ceil(len(ec) / 8.0))
+
+        mpi_r, mpi_s = ec.sign_dsa(digest)
+        length_r, = _struct_L.unpack_from(mpi_r)
+        r = mpi_r[-min(length, length_r):]
+        length_s, = _struct_L.unpack_from(mpi_s)
+        s = mpi_s[-min(length, length_s):]
+
+        return "".join(("\x00" * (length - len(r)), r, "\x00" * (length - len(s)), s))
+
+    def ec_verify(ec, digest, signature):
+        """
+        Returns True when SIGNATURE matches the DIGEST made using EC.
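+
+        A typical round trip, sketched with only this module's own functions:
+
+        >>> ec = ec_generate_key(u"low")
+        >>> digest = sha1("example payload").digest()
+        >>> ec_verify(ec, digest, ec_sign(ec, digest))
+        True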
+ """ + assert len(signature) == ec_signature_length(ec), [len(signature), ec_signature_length(ec)] + length = len(signature) / 2 + try: + r = signature[:length] + # remove all "\x00" prefixes + while r and r[0] == "\x00": + r = r[1:] + # prepend "\x00" when the most significant bit is set + if ord(r[0]) & 128: + r = "\x00" + r + + s = signature[length:] + # remove all "\x00" prefixes + while s and s[0] == "\x00": + s = s[1:] + # prepend "\x00" when the most significant bit is set + if ord(s[0]) & 128: + s = "\x00" + s + + mpi_r = _struct_L.pack(len(r)) + r + mpi_s = _struct_L.pack(len(s)) + s + + # mpi_r3 = bn_to_mpi(bin_to_bn(signature[:length])) + # mpi_s3 = bn_to_mpi(bin_to_bn(signature[length:])) + + # if not mpi_r == mpi_r3: + # raise RuntimeError([mpi_r.encode("HEX"), mpi_r3.encode("HEX")]) + # if not mpi_s == mpi_s3: + # raise RuntimeError([mpi_s.encode("HEX"), mpi_s3.encode("HEX")]) + + return bool(ec.verify_dsa(digest, mpi_r, mpi_s)) + + except: + return False + +if __debug__: + import time + + def EC_name(curve): + assert isinstance(curve, int) + for name in dir(EC): + value = getattr(EC, name) + if isinstance(value, int) and value == curve: + return name + + def mpi_test(): + for _ in xrange(100): + for curve in sorted([unicode(attr) for attr in dir(EC) if attr.startswith("NID_")]): + ec = ec_generate_key(curve) + if not ec_verify(ec, "foo-bar", ec_sign(ec, "foo-bar")): + raise RuntimeError("crypto fail") + + def speed(): + curves = {} + for curve in sorted([unicode(attr) for attr in dir(EC) if attr.startswith("NID_")]): + ec = ec_generate_key(curve) + private_pem = ec_to_private_pem(ec) + public_pem = ec_to_public_pem(ec) + public_bin = ec_to_public_bin(ec) + private_bin = ec_to_private_bin(ec) + print + print "generated:", time.ctime() + print "curve:", curve, "<<<", EC_name(_curves[curve]), ">>>" + print "len:", len(ec), "bits ~", ec_signature_length(ec), "bytes signature" + print "pub:", len(public_bin), public_bin.encode("HEX") + print "prv:", len(private_bin), private_bin.encode("HEX") + print "pub-sha1", sha1(public_bin).digest().encode("HEX") + print "prv-sha1", sha1(private_bin).digest().encode("HEX") + print public_pem.strip() + print private_pem.strip() + + ec2 = ec_from_public_pem(public_pem) + assert ec_verify(ec2, "foo-bar", ec_sign(ec, "foo-bar")) + ec2 = ec_from_private_pem(private_pem) + assert ec_verify(ec2, "foo-bar", ec_sign(ec, "foo-bar")) + ec2 = ec_from_public_bin(public_bin) + assert ec_verify(ec2, "foo-bar", ec_sign(ec, "foo-bar")) + ec2 = ec_from_private_bin(private_bin) + assert ec_verify(ec2, "foo-bar", ec_sign(ec, "foo-bar")) + + curves[EC_name(_curves[curve])] = ec + + for key, curve in sorted(curves.iteritems()): + t1 = time.time() + + signatures = [ec_sign(curve, str(i)) for i in xrange(100)] + + t2 = time.time() + + for i, signature in enumerate(signatures): + ec_verify(curve, str(i), signature) + + t3 = time.time() + print key, "signing took", round(t2 - t1, 5), "verify took", round(t3 - t2, 5), "totals", round(t3-t1, 5) + + def main(): + for curve in [u"very-low", u"NID_secp224r1", u"low", u"medium", u"high"]: + ec = ec_generate_key(curve) + private_pem = ec_to_private_pem(ec) + public_pem = ec_to_public_pem(ec) + public_bin = ec_to_public_bin(ec) + private_bin = ec_to_private_bin(ec) + print + print "generated:", time.ctime() + print "curve:", curve, "<<<", EC_name(_curves[curve]), ">>>" + print "len:", len(ec), "bits ~", ec_signature_length(ec), "bytes signature" + print "pub:", len(public_bin), public_bin.encode("HEX") + print "prv:", 
len(private_bin), private_bin.encode("HEX")
+        print "pub-sha1", sha1(public_bin).digest().encode("HEX")
+        print "prv-sha1", sha1(private_bin).digest().encode("HEX")
+        print public_pem.strip()
+        print private_pem.strip()
diff -Nru tribler-6.2.0/Tribler/dispersy/database.py tribler-6.2.0/Tribler/dispersy/database.py
--- tribler-6.2.0/Tribler/dispersy/database.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/database.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,515 @@
+"""
+This module provides basic database functionality and simple version control.
+
+@author: Boudewijn Schoon
+@organization: Technical University Delft
+@contact: dispersy@frayja.com
+"""
+
+import logging
+logger = logging.getLogger(__name__)
+
+import sys
+from sqlite3 import Connection, Error
+
+from .decorator import attach_runtime_statistics
+
+if __debug__:
+    import thread
+
+if "--explain-query-plan" in getattr(sys, "argv", []):
+    _explain_query_plan_logger = logging.getLogger("explain-query-plan")
+    _explain_query_plan = set()
+
+    def attach_explain_query_plan(func):
+        def attach_explain_query_plan_helper(self, statements, bindings=()):
+            if not statements in _explain_query_plan:
+                _explain_query_plan.add(statements)
+
+                _explain_query_plan_logger.info("Explain query plan for <<<%s>>>", statements)
+                for line in self._cursor.execute(u"EXPLAIN QUERY PLAN %s" % statements, bindings):
+                    _explain_query_plan_logger.info(line)
+                _explain_query_plan_logger.info("--")
+
+            return func(self, statements, bindings)
+        attach_explain_query_plan_helper.__name__ = func.__name__
+        return attach_explain_query_plan_helper
+
+else:
+    def attach_explain_query_plan(func):
+        return func
+
+
+class IgnoreCommits(Exception):
+
+    """
+    Ignore all commits made within the body of a 'with database:' clause.
+
+    with database:
+        # all commit statements are delayed until the database.__exit__
+        database.commit()
+        database.commit()
+        # raising IgnoreCommits causes all commits to be ignored
+        raise IgnoreCommits()
+    """
+    def __init__(self):
+        super(IgnoreCommits, self).__init__("Ignore all commits made within __enter__ and __exit__")
+
+
+class Database(object):
+
+    def __init__(self, file_path):
+        """
+        Initialize a new Database instance.
+
+        @param file_path: the path to the database file.
+        @type file_path: unicode
+        """
+        assert isinstance(file_path, unicode)
+        logger.debug("loading database [%s]", file_path)
+        self._file_path = file_path
+
+        # _CONNECTION, _CURSOR, AND _DATABASE_VERSION are set during open(...)
+        self._connection = None
+        self._cursor = None
+        self._database_version = 0
+
+        # _commit_callbacks contains a list with functions that are called on each database commit
+        self._commit_callbacks = []
+
+        # Database.commit() is enabled when _pending_commits == 0.  Database.commit() is disabled
+        # when _pending_commits > 0.  A commit is required when _pending_commits > 1.
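+        # (illustration: inside a 'with database:' block the counter starts at 1;
+        #  two commit() calls raise it to 3, and __exit__ then performs the one
+        #  real commit that was deferred)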
+ self._pending_commits = 0 + + if __debug__: + self._debug_thread_ident = 0 + + def open(self): + assert self._cursor is None, "Database.open() has already been called" + assert self._connection is None, "Database.open() has already been called" + if __debug__: + self._debug_thread_ident = thread.get_ident() + logger.info("open database [%s]", self._file_path) + self._connect() + self._initial_statements() + self._prepare_version() + + def close(self, commit=True): + assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called" + assert self._connection is not None, "Database.close() has been called or Database.open() has not been called" + if commit: + self.commit(exiting=True) + logger.info("close database [%s]", self._file_path) + self._cursor.close() + self._cursor = None + self._connection.close() + self._connection = None + + def _connect(self): + self._connection = Connection(self._file_path) + self._cursor = self._connection.cursor() + + def _initial_statements(self): + assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called" + assert self._connection is not None, "Database.close() has been called or Database.open() has not been called" + + # collect current database configuration + page_size = int(next(self._cursor.execute(u"PRAGMA page_size"))[0]) + journal_mode = unicode(next(self._cursor.execute(u"PRAGMA journal_mode"))[0]).upper() + synchronous = unicode(next(self._cursor.execute(u"PRAGMA synchronous"))[0]).upper() + + # + # PRAGMA page_size = bytes; + # http://www.sqlite.org/pragma.html#pragma_page_size + # Note that changing page_size has no effect unless performed on a new database or followed + # directly by VACUUM. Since we do not want the cost of VACUUM every time we load a + # database, existing databases must be upgraded. 
+        #
+        if page_size < 8192:
+            logger.debug("PRAGMA page_size = 8192 (previously: %s) [%s]", page_size, self._file_path)
+
+            # it is not possible to change page_size when WAL is enabled
+            if journal_mode == u"WAL":
+                self._cursor.executescript(u"PRAGMA journal_mode = DELETE")
+                journal_mode = u"DELETE"
+            self._cursor.execute(u"PRAGMA page_size = 8192")
+            self._cursor.execute(u"VACUUM")
+            page_size = 8192
+
+        else:
+            logger.debug("PRAGMA page_size = %s (no change) [%s]", page_size, self._file_path)
+
+        #
+        # PRAGMA journal_mode = DELETE | TRUNCATE | PERSIST | MEMORY | WAL | OFF
+        # http://www.sqlite.org/pragma.html#pragma_journal_mode
+        #
+        if not (journal_mode == u"WAL" or self._file_path == u":memory:"):
+            logger.debug("PRAGMA journal_mode = WAL (previously: %s) [%s]", journal_mode, self._file_path)
+            self._cursor.execute(u"PRAGMA journal_mode = WAL")
+
+        else:
+            logger.debug("PRAGMA journal_mode = %s (no change) [%s]", journal_mode, self._file_path)
+
+        #
+        # PRAGMA synchronous = 0 | OFF | 1 | NORMAL | 2 | FULL;
+        # http://www.sqlite.org/pragma.html#pragma_synchronous
+        #
+        if not synchronous in (u"NORMAL", u"1"):
+            logger.debug("PRAGMA synchronous = NORMAL (previously: %s) [%s]", synchronous, self._file_path)
+            self._cursor.execute(u"PRAGMA synchronous = NORMAL")
+
+        else:
+            logger.debug("PRAGMA synchronous = %s (no change) [%s]", synchronous, self._file_path)
+
+    def _prepare_version(self):
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+
+        # check if the database contains an 'option' table
+        try:
+            count, = next(self.execute(u"SELECT COUNT(*) FROM sqlite_master WHERE type = 'table' AND name = 'option'"))
+        except StopIteration:
+            raise RuntimeError()
+
+        if count:
+            # get version from required 'option' table
+            try:
+                version, = next(self.execute(u"SELECT value FROM option WHERE key == 'database_version' LIMIT 1"))
+            except StopIteration:
+                # the 'database_version' key was not found
+                version = u"0"
+        else:
+            # the 'option' table probably hasn't been created yet
+            version = u"0"
+
+        self._database_version = self.check_database(version)
+        assert isinstance(self._database_version, (int, long)), type(self._database_version)
+
+    @property
+    def database_version(self):
+        return self._database_version
+
+    @property
+    def file_path(self):
+        """
+        The database filename including path.
+        """
+        return self._file_path
+
+    def __enter__(self):
+        """
+        Enters a no-commit state.  The commit will be performed by __exit__.
+
+        @return: The Database instance itself
+        """
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+
+        logger.debug("disabling commit [%s]", self._file_path)
+        self._pending_commits = max(1, self._pending_commits)
+        return self
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        """
+        Leaves a no-commit state.  A commit will be performed if Database.commit() was called while
+        in the no-commit state.
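+
+        A minimal usage sketch (the option table is the one described in
+        check_database):
+
+        >>> with database:
+        ...     database.execute(u"INSERT INTO option (key, value) VALUES (?, ?)",
+        ...                      (u"database_version", u"1"))
+        ...     database.commit()  # deferred; performed once by __exit__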
+        """
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+
+        self._pending_commits, pending_commits = 0, self._pending_commits
+
+        if exc_type is None:
+            logger.debug("enabling commit [%s]", self._file_path)
+            if pending_commits > 1:
+                logger.debug("performing %d pending commits [%s]", pending_commits - 1, self._file_path)
+                self.commit()
+            return True
+
+        elif isinstance(exc_value, IgnoreCommits):
+            logger.debug("enabling commit without committing now [%s]", self._file_path)
+            return True
+
+        else:
+            # Niels 23-01-2013, an exception happened from within the with database block
+            # returning False to let Python reraise the exception.
+            return False
+
+    @property
+    def last_insert_rowid(self):
+        """
+        The row id of the most recent insert query.
+        @rtype: int or long
+        """
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+        assert not self._cursor.lastrowid is None, "The last statement was NOT an insert query"
+        return self._cursor.lastrowid
+
+    @property
+    def changes(self):
+        """
+        The number of changes that resulted from the most recent query.
+        @rtype: int or long
+        """
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+        return self._cursor.rowcount
+        # return self._connection.changes()
+
+    @attach_explain_query_plan
+    @attach_runtime_statistics("{0.__class__.__name__}.{function_name} {1} [{0.file_path}]")
+    def execute(self, statement, bindings=()):
+        """
+        Execute one SQL statement.
+
+        A SQL query must be presented in unicode format.  This is to ensure that no unicode
+        exceptions occur when the bindings are merged into the statement.
+
+        Furthermore, the bindings may not contain any strings either.  For a 'string' the unicode
+        type must be used.  For a binary string the buffer(...) type must be used.
+
+        The SQL query may contain placeholder entries defined with a '?'.  Each of these
+        placeholders will be used to store one value from bindings.  The placeholders are filled by
+        sqlite and all proper escaping is done, making this the preferred way of adding variables to
+        the SQL query.
+
+        @param statement: the SQL statement that is to be executed.
+        @type statement: unicode
+
+        @param bindings: the values that must be set to the placeholders in statement.
+        @type bindings: tuple
+
+        @returns: the cursor used to execute the statement; iterating it yields the result rows
+        @raise sqlite3.Error: when sqlite is unable to execute the statement
+        """
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+        assert isinstance(statement, unicode), "The SQL statement must be given in unicode"
+        assert isinstance(bindings, (tuple, list, dict, set)), "The bindings must be a tuple, list, dictionary, or set"
+        assert not any(isinstance(x, str) for x in bindings), "The bindings may not contain a string. \nProvide unicode for TEXT and buffer(...) for BLOB. \nGiven types: %s" % str([type(binding) for binding in bindings])
+
+        try:
+            logger.log(logging.NOTSET, "%s <-- %s [%s]", statement, bindings, self._file_path)
+            return self._cursor.execute(statement, bindings)
+
+        except Error:
+            logger.exception("%s [%s]", statement, self._file_path)
+            raise
+
+    @attach_runtime_statistics("{0.__class__.__name__}.{function_name} {1} [{0.file_path}]")
+    def executescript(self, statements):
+        assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._connection is not None, "Database.close() has been called or Database.open() has not been called"
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+        assert isinstance(statements, unicode), "The SQL statement must be given in unicode"
+
+        try:
+            logger.log(logging.NOTSET, "%s [%s]", statements, self._file_path)
+            return self._cursor.executescript(statements)
+
+        except Error:
+            logger.exception("%s [%s]", statements, self._file_path)
+            raise
+
+    @attach_explain_query_plan
+    @attach_runtime_statistics("{0.__class__.__name__}.{function_name} {1} [{0.file_path}]")
+    def executemany(self, statement, sequenceofbindings):
+        """
+        Execute one SQL statement several times.
+
+        All SQL queries must be presented in unicode format.  This is to ensure that no unicode
+        exceptions occur when the bindings are merged into the statement.
+
+        Furthermore, the bindings may not contain any strings either.  For a 'string' the unicode
+        type must be used.  For a binary string the buffer(...) type must be used.
+
+        The SQL query may contain placeholder entries defined with a '?'.  Each of these
+        placeholders will be used to store one value from bindings.  The placeholders are filled by
+        sqlite and all proper escaping is done, making this the preferred way of adding variables to
+        the SQL query.
+
+        @param statement: the SQL statement that is to be executed.
+        @type statement: unicode
+
+        @param bindings: a sequence of values that must be set to the placeholders in statement.
+            Each element in sequence is another tuple containing bindings.
+ @type bindings: list containing tuples + + @returns: unknown + @raise sqlite.Error: unknown + """ + assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called" + assert self._connection is not None, "Database.close() has been called or Database.open() has not been called" + assert self._debug_thread_ident != 0, "please call database.open() first" + assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread" + if __debug__: + # we allow GeneratorType but must convert it to a list in __debug__ mode since a + # generator can only iterate once + from types import GeneratorType + is_iterator = isinstance(sequenceofbindings, GeneratorType) + if is_iterator: + sequenceofbindings = list(sequenceofbindings) + assert isinstance(statement, unicode), "The SQL statement must be given in unicode" + assert isinstance(sequenceofbindings, (tuple, list, set)), "The sequenceofbindings must be a tuple, list, or set" + assert all(isinstance(x, (tuple, list, dict, set)) for x in list(sequenceofbindings)), "The sequenceofbindings must be a list with tuples, lists, dictionaries, or sets" + assert not filter(lambda x: filter(lambda y: isinstance(y, str), x), list(sequenceofbindings)), "The bindings may not contain a string. \nProvide unicode for TEXT and buffer(...) for BLOB." + if is_iterator: + sequenceofbindings = iter(sequenceofbindings) + + try: + logger.log(logging.NOTSET, "%s [%s]", statement, self._file_path) + return self._cursor.executemany(statement, sequenceofbindings) + + except Error: + logger.exception("%s [%s]", statement, self._file_path) + raise + + @attach_runtime_statistics("{0.__class__.__name__}.{function_name} [{0.file_path}]") + def commit(self, exiting=False): + assert self._cursor is not None, "Database.close() has been called or Database.open() has not been called" + assert self._connection is not None, "Database.close() has been called or Database.open() has not been called" + assert self._debug_thread_ident != 0, "please call database.open() first" + assert self._debug_thread_ident == thread.get_ident(), "Calling Database.commit on the wrong thread" + assert not (exiting and self._pending_commits), "No pending commits should be present when exiting" + + if self._pending_commits: + logger.debug("defer commit [%s]", self._file_path) + self._pending_commits += 1 + return False + + else: + logger.debug("commit [%s]", self._file_path) + for callback in self._commit_callbacks: + try: + callback(exiting=exiting) + except Exception as exception: + logger.exception("%s [%s]", exception, self._file_path) + + return self._connection.commit() + + def check_database(self, database_version): + """ + Check the database and upgrade if required. + + This method is called once for each Database instance to ensure that the database structure + and version is correct. Each Database must contain one table of the structure below where + the database_version is stored. This value is used to keep track of the current database + version. + + >>> CREATE TABLE option(key TEXT PRIMARY KEY, value BLOB); + >>> INSERT INTO option(key, value) VALUES('database_version', '1'); + + @param database_version: the current database_version value from the option table. This + value reverts to u'0' when the table could not be accessed. 
+        @type database_version: unicode
+        """
+        raise NotImplementedError()
+
+    def attach_commit_callback(self, func):
+        assert func not in self._commit_callbacks
+        self._commit_callbacks.append(func)
+
+    def detach_commit_callback(self, func):
+        assert func in self._commit_callbacks
+        self._commit_callbacks.remove(func)
+
+
+class APSWDatabase(Database):
+
+    def _connect(self):
+        import apsw
+        self._connection = apsw.Connection(self._file_path)
+        self._cursor = self._connection.cursor()
+
+    def _initial_statements(self):
+        super(APSWDatabase, self)._initial_statements()
+        self.execute(u"BEGIN")
+
+    def execute(self, statement, bindings=()):
+        import apsw
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.execute on the wrong thread"
+        assert isinstance(statement, unicode), "The SQL statement must be given in unicode"
+        assert isinstance(bindings, (tuple, list, dict)), "The bindings must be a tuple, list, or dictionary"
+        assert not any(isinstance(x, str) for x in bindings), "The bindings may not contain a string. \nProvide unicode for TEXT and buffer(...) for BLOB. \nGiven types: %s" % str([type(binding) for binding in bindings])
+
+        try:
+            logger.log(logging.NOTSET, "%s <-- %s [%s]", statement, bindings, self._file_path)
+            return self._cursor.execute(statement, bindings)
+
+        except apsw.Error:
+            logger.exception("%s [%s]", statement, self._file_path)
+            raise
+
+    def executescript(self, statements):
+        return self.execute(statements)
+
+    def executemany(self, statement, sequenceofbindings):
+        import apsw
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.executemany on the wrong thread"
+        if __debug__:
+            # we allow GeneratorType but must convert it to a list in __debug__ mode since a
+            # generator can only iterate once
+            from types import GeneratorType
+            if isinstance(sequenceofbindings, GeneratorType):
+                sequenceofbindings = list(sequenceofbindings)
+            assert isinstance(statement, unicode), "The SQL statement must be given in unicode"
+            assert isinstance(sequenceofbindings, (tuple, list)), "The sequenceofbindings must be a tuple or list"
+            assert all(isinstance(x, (tuple, list, dict)) for x in sequenceofbindings), "The sequenceofbindings must contain tuples, lists, or dictionaries"
+            assert not any(any(isinstance(y, str) for y in x) for x in sequenceofbindings), "The bindings may not contain a string. \nProvide unicode for TEXT and buffer(...) for BLOB."
+
+        try:
+            logger.log(logging.NOTSET, "%s [%s]", statement, self._file_path)
+            return self._cursor.executemany(statement, sequenceofbindings)
+
+        except apsw.Error:
+            logger.exception("%s [%s]", statement, self._file_path)
+            raise
+
+    @property
+    def last_insert_rowid(self):
+        """
+        The row id of the most recent insert query.
+        @rtype: int or long
+        """
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident()
+        assert self._cursor.lastrowid is not None, "The last statement was NOT an insert query"
+        return self._connection.last_insert_rowid()
+
+    @property
+    def changes(self):
+        """
+        The number of changes that resulted from the most recent query.
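+
+        For example (a hypothetical statement; the count is reported via apsw's totalchanges()):
+
+        >>> database.execute(u"DELETE FROM option")
+        >>> database.changes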
+        @rtype: int or long
+        """
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.changes on the wrong thread"
+        return self._connection.totalchanges()
+
+    def commit(self, exiting=False):
+        assert self._debug_thread_ident != 0, "please call database.open() first"
+        assert self._debug_thread_ident == thread.get_ident(), "Calling Database.commit on the wrong thread"
+        assert not (exiting and self._pending_commits), "No pending commits should be present when exiting"
+
+        logger.debug("commit [%s]", self._file_path)
+        result = self.execute(u"COMMIT;BEGIN")
+        for callback in self._commit_callbacks:
+            try:
+                callback(exiting=exiting)
+            except Exception as exception:
+                logger.exception("%s [%s]", exception, self._file_path)
+        return result
diff -Nru tribler-6.2.0/Tribler/dispersy/debug.py tribler-6.2.0/Tribler/dispersy/debug.py
--- tribler-6.2.0/Tribler/dispersy/debug.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/debug.py 2013-08-07 13:06:57.000000000 +0000
@@ -0,0 +1,383 @@
+try:
+    # python 2.7 only...
+    from collections import OrderedDict
+except ImportError:
+    from .python27_ordereddict import OrderedDict
+
+from time import time, sleep
+import socket
+
+from .bloomfilter import BloomFilter
+from .candidate import Candidate
+from .crypto import ec_generate_key, ec_to_public_bin, ec_to_private_bin, ec_from_private_bin
+from .dprint import dprint
+from .member import Member
+from .message import Message
+from .revision import update_revision_information
+
+# update version information directly from SVN
+update_revision_information("$HeadURL: http://svn.tribler.org/dispersy/trunk/debug.py $", "$Revision: 31808 $")
+
+class DebugOnlyMember(Member):
+    _cache = OrderedDict()
+    _mid_cache = {}
+    _did_cache = {}
+
+    def __init__(self, public_key, private_key=""):
+        super(DebugOnlyMember, self).__init__(public_key)
+
+        if private_key:
+            self._private_key = private_key
+            self._ec = ec_from_private_bin(private_key)
+
+class Node(object):
+    _socket_range = (8000, 8999)
+    _socket_pool = {}
+    _socket_counter = 0
+
+    def __init__(self):
+        self._socket = None
+        self._my_member = None
+        self._community = None
+        self._dispersy = None
+
+    @property
+    def socket(self):
+        return self._socket
+
+    @property
+    def lan_address(self):
+        _, port = self._socket.getsockname()
+        return ("127.0.0.1", port)
+
+    @property
+    def wan_address(self):
+        if self._dispersy:
+            host = self._dispersy.wan_address[0]
+
+            if host == "0.0.0.0":
+                host = self._dispersy.lan_address[0]
+
+        else:
+            host = "0.0.0.0"
+
+        _, port = self._socket.getsockname()
+        return (host, port)
+
+    def init_socket(self):
+        assert self._socket is None
+        port = Node._socket_range[0] + Node._socket_counter % (Node._socket_range[1] - Node._socket_range[0])
+        Node._socket_counter += 1
+
+        if port not in Node._socket_pool:
+            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+            s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 870400)
+            s.setblocking(False)
+            s.settimeout(0.0)
+            while True:
+                try:
+                    s.bind(("localhost", port))
+                except socket.error:
+                    port = Node._socket_range[0] + Node._socket_counter % (Node._socket_range[1] - Node._socket_range[0])
+                    Node._socket_counter += 1
+                    continue
+                break
+
+            Node._socket_pool[port] = s
+            if __debug__: dprint("create socket ", port)
+
+        elif __debug__:
+            dprint("reuse socket ", port, level="warning")
+
+        self._socket = Node._socket_pool[port]
+
+    @property
+    def my_member(self):
+        return self._my_member
+
+    def init_my_member(self, bits=None, sync_with_database=None, candidate=True, identity=True):
+        assert bits is None, "The parameter bits is deprecated and must be None"
+        assert sync_with_database is None, "The parameter sync_with_database is deprecated and must be None"
+
+        ec = ec_generate_key(u"low")
+        self._my_member = DebugOnlyMember(ec_to_public_bin(ec), ec_to_private_bin(ec))
+
+        if identity:
+            # update identity information
+            assert self._socket, "Socket needs to be set to candidate"
+            assert self._community, "Community needs to be set to candidate"
+            message = self.create_dispersy_identity_message(2)
+            self.give_message(message)
+
+        if candidate:
+            # update candidate information
+            assert self._socket, "Socket needs to be set to candidate"
+            assert self._community, "Community needs to be set to candidate"
+            message = self.create_dispersy_introduction_request_message(self._community.my_candidate, self.lan_address, self.wan_address, False, u"unknown", None, 1, 1)
+            self.give_message(message)
+            sleep(0.1)
+            self.receive_message(message_names=[u"dispersy-introduction-response"])
+
+    @property
+    def community(self):
+        return self._community
+
+    def set_community(self, community):
+        self._community = community
+        if community:
+            self._dispersy = community.dispersy
+
+    def encode_message(self, message):
+        assert isinstance(message, Message.Implementation)
+        tmp_member = self._community._my_member
+        self._community._my_member = self._my_member
+        try:
+            packet = self._community.get_conversion().encode_message(message)
+        finally:
+            self._community._my_member = tmp_member
+        return packet
+
+    def give_packet(self, packet, verbose=False, cache=False, tunnel=False):
+        assert isinstance(packet, str)
+        assert isinstance(verbose, bool)
+        assert isinstance(cache, bool)
+        if verbose: dprint("giving ", len(packet), " bytes")
+        candidate = Candidate(self.lan_address, tunnel)
+        self._dispersy.on_incoming_packets([(candidate, packet)], cache=cache, timestamp=time())
+        return packet
+
+    def give_packets(self, packets, verbose=False, cache=False, tunnel=False):
+        assert isinstance(packets, list)
+        assert isinstance(verbose, bool)
+        assert isinstance(cache, bool)
+        if verbose: dprint("giving ", sum(len(packet) for packet in packets), " bytes")
+        candidate = Candidate(self.lan_address, tunnel)
+        self._dispersy.on_incoming_packets([(candidate, packet) for packet in packets], cache=cache, timestamp=time())
+        return packets
+
+    def give_message(self, message, verbose=False, cache=False):
+        assert isinstance(message, Message.Implementation)
+        assert isinstance(verbose, bool)
+        assert isinstance(cache, bool)
+        self.encode_message(message)
+        if verbose: dprint("giving ", message.name, " (", len(message.packet), " bytes)")
+        self.give_packet(message.packet, verbose=verbose, cache=cache)
+        return message
+
+    def give_messages(self, messages, verbose=False, cache=False):
+        assert isinstance(messages, list)
+        assert isinstance(verbose, bool)
+        assert isinstance(cache, bool)
+        map(self.encode_message, messages)
+        if verbose: dprint("giving ", len(messages), " messages (", sum(len(message.packet) for message in messages), " bytes)")
+        self.give_packets([message.packet for message in messages], verbose=verbose, cache=cache)
+        return messages
+
+    def send_packet(self, packet, address, verbose=False):
+        assert isinstance(packet, str)
+        assert isinstance(address, tuple)
+        assert isinstance(verbose, bool)
+        if verbose: dprint(len(packet), " bytes to ", address[0], ":", address[1])
+        self._socket.sendto(packet, address)
+        return packet
+
+    def send_message(self, message, address, verbose=False):
+        assert isinstance(message, Message.Implementation)
+        assert isinstance(address, tuple)
+        assert isinstance(verbose, bool)
+        self.encode_message(message)
+        if verbose: dprint(message.name, " (", len(message.packet), " bytes) to ", address[0], ":", address[1])
+        self.send_packet(message.packet, address)
+        return message
+
+    def drop_packets(self):
+        while True:
+            try:
+                packet, address = self._socket.recvfrom(10240)
+            except socket.error:
+                break
+
+            dprint("dropped ", len(packet), " bytes from ", address[0], ":", address[1])
+
+    def receive_packet(self, timeout=None, addresses=None, packets=None):
+        assert timeout is None, "The parameter TIMEOUT is deprecated and must be None"
+        assert addresses is None or isinstance(addresses, list)
+        assert addresses is None or all(isinstance(address, tuple) for address in addresses)
+        assert packets is None or isinstance(packets, list)
+        assert packets is None or all(isinstance(packet, str) for packet in packets)
+
+        while True:
+            packet, address = self._socket.recvfrom(10240)
+
+            if not (addresses is None or address in addresses or (address[0] == "127.0.0.1" and ("0.0.0.0", address[1]) in addresses)):
+                continue
+
+            if not (packets is None or packet in packets):
+                continue
+
+            if packet.startswith("ffffffff".decode("HEX")):
+                tunnel = True
+                packet = packet[4:]
+            else:
+                tunnel = False
+
+            candidate = Candidate(address, tunnel)
+            dprint(len(packet), " bytes from ", candidate)
+            return candidate, packet
+
+    def receive_message(self, timeout=None, addresses=None, packets=None, message_names=None, payload_types=None, distributions=None, destinations=None):
+        assert timeout is None, "The parameter TIMEOUT is deprecated and must be None"
+        assert isinstance(message_names, (type(None), list))
+        assert isinstance(payload_types, (type(None), list))
+        assert isinstance(distributions, (type(None), list))
+        assert isinstance(destinations, (type(None), list))
+
+        while True:
+            candidate, packet = self.receive_packet(timeout, addresses, packets)
+
+            try:
+                message = self._community.get_conversion(packet[:22]).decode_message(candidate, packet)
+            except KeyError:
+                continue
+
+            if not (message_names is None or message.name in message_names):
+                dprint("Ignored ", message.name, " (", len(packet), " bytes) from ", candidate)
+                continue
+
+            if not (payload_types is None or message.payload.type in payload_types):
+                dprint("Ignored ", message.name, " (", len(packet), " bytes) from ", candidate)
+                continue
+
+            if not (distributions is None or isinstance(message.distribution, distributions)):
+                dprint("Ignored ", message.name, " (", len(packet), " bytes) from ", candidate)
+                continue
+
+            if not (destinations is None or isinstance(message.destination, destinations)):
+                dprint("Ignored ", message.name, " (", len(packet), " bytes) from ", candidate)
+                continue
+
+            dprint(message.name, " (", len(packet), " bytes) from ", candidate)
+            return candidate, message
+
+    def receive_messages(self, *args, **kargs):
+        messages = []
+        while True:
+            try:
+                messages.append(self.receive_message(*args, **kargs))
+            except socket.error:
+                break
+        return messages
+
+    def create_dispersy_authorize(self, permission_triplets, sequence_number, global_time):
+        meta = self._community.get_meta_message(u"dispersy-authorize")
+        return meta.impl(authentication=(self._my_member,),
+                         distribution=(global_time, sequence_number),
+                         payload=(permission_triplets,))
+
+    def create_dispersy_identity_message(self, global_time):
+        assert isinstance(global_time, (int, long))
+        meta = self._community.get_meta_message(u"dispersy-identity")
+        return meta.impl(authentication=(self._my_member,), distribution=(global_time,))
+
+    def create_dispersy_undo_own_message(self, message, global_time, sequence_number):
+        assert message.authentication.member == self._my_member, "use create_dispersy_undo_other_message"
+        meta = self._community.get_meta_message(u"dispersy-undo-own")
+        return meta.impl(authentication=(self._my_member,),
+                         distribution=(global_time, sequence_number),
+                         payload=(message.authentication.member, message.distribution.global_time, message))
+
+    def create_dispersy_undo_other_message(self, message, global_time, sequence_number):
+        assert message.authentication.member != self._my_member, "use create_dispersy_undo_own_message"
+        meta = self._community.get_meta_message(u"dispersy-undo-other")
+        return meta.impl(authentication=(self._my_member,),
+                         distribution=(global_time, sequence_number),
+                         payload=(message.authentication.member, message.distribution.global_time, message))
+
+    # NOTE: this definition is shadowed by the second create_dispersy_missing_sequence_message below
+    def create_dispersy_missing_sequence_message(self, missing_member, missing_message_meta, missing_low, missing_high, global_time, destination_candidate):
+        assert isinstance(missing_member, Member)
+        assert isinstance(missing_message_meta, Message)
+        assert isinstance(missing_low, (int, long))
+        assert isinstance(missing_high, (int, long))
+        assert isinstance(global_time, (int, long))
+        assert isinstance(destination_candidate, Candidate)
+        meta = self._community.get_meta_message(u"dispersy-missing-sequence")
+        return meta.impl(authentication=(self._my_member,),
+                         distribution=(global_time,),
+                         destination=(destination_candidate,),
+                         payload=(missing_member, missing_message_meta, missing_low, missing_high))
+
+    def create_dispersy_signature_request_message(self, message, global_time, destination_member):
+        assert isinstance(message, Message.Implementation)
+        assert isinstance(global_time, (int, long))
+        assert isinstance(destination_member, Member)
+        meta = self._community.get_meta_message(u"dispersy-signature-request")
+        return meta.impl(distribution=(global_time,),
+                         destination=(destination_member,),
+                         payload=(message,))
+
+    def create_dispersy_signature_response_message(self, identifier, message, global_time, destination_candidate):
+        assert isinstance(identifier, (int, long))
+        assert isinstance(message, Message.Implementation)
+        assert isinstance(global_time, (int, long))
+        assert isinstance(destination_candidate, Candidate)
+        meta = self._community.get_meta_message(u"dispersy-signature-response")
+        return meta.impl(distribution=(global_time,),
+                         destination=(destination_candidate,),
+                         payload=(identifier, message))
+
+    def create_dispersy_missing_message_message(self, missing_member, missing_global_times, global_time, destination_candidate):
+        assert isinstance(missing_member, Member)
+        assert isinstance(missing_global_times, list)
+        assert all(isinstance(x, (int, long)) for x in missing_global_times)
+        assert isinstance(global_time, (int, long))
+        assert isinstance(destination_candidate, Candidate)
+        meta = self._community.get_meta_message(u"dispersy-missing-message")
+        return meta.impl(distribution=(global_time,),
+                         destination=(destination_candidate,),
+                         payload=(missing_member, missing_global_times))
+
+    def create_dispersy_missing_sequence_message(self, missing_member, missing_message, missing_sequence_low, missing_sequence_high, global_time, destination_candidate):
+        assert isinstance(missing_member, Member)
+        assert isinstance(missing_message, Message)
+        assert isinstance(missing_sequence_low, (int, long))
+        assert isinstance(missing_sequence_high, (int, long))
+        assert isinstance(global_time, (int, long))
+        assert isinstance(destination_candidate, Candidate)
+        meta = self._community.get_meta_message(u"dispersy-missing-sequence")
+        return meta.impl(distribution=(global_time,),
+                         destination=(destination_candidate,),
+                         payload=(missing_member, missing_message, missing_sequence_low, missing_sequence_high))
+
+    def create_dispersy_missing_proof_message(self, member, global_time):
+        assert isinstance(member, Member)
+        assert isinstance(global_time, (int, long))
+        assert global_time > 0
+        meta = self._community.get_meta_message(u"dispersy-missing-proof")
+        return meta.impl(distribution=(global_time,), payload=(member, global_time))
+
+    def create_dispersy_introduction_request_message(self, destination, source_lan, source_wan, advice, connection_type, sync, identifier, global_time):
+        # TODO assert other arguments
+        assert isinstance(destination, Candidate), destination
+        if sync:
+            assert isinstance(sync, tuple)
+            assert len(sync) == 5
+            time_low, time_high, modulo, offset, bloom_packets = sync
+            assert isinstance(time_low, (int, long))
+            assert isinstance(time_high, (int, long))
+            assert isinstance(modulo, int)
+            assert isinstance(offset, int)
+            assert isinstance(bloom_packets, list)
+            assert all(isinstance(x, str) for x in bloom_packets)
+            bloom_filter = BloomFilter(512*8, 0.001, prefix="x")
+            map(bloom_filter.add, bloom_packets)
+            sync = (time_low, time_high, modulo, offset, bloom_filter)
+        assert isinstance(global_time, (int, long))
+        meta = self._community.get_meta_message(u"dispersy-introduction-request")
+        return meta.impl(authentication=(self._my_member,),
+                         destination=(destination,),
+                         distribution=(global_time,),
+                         payload=(destination.sock_addr, source_lan, source_wan, advice, connection_type, sync, identifier))
+
diff -Nru tribler-6.2.0/Tribler/dispersy/debugcommunity.py tribler-6.2.0/Tribler/dispersy/debugcommunity.py
--- tribler-6.2.0/Tribler/dispersy/debugcommunity.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/debugcommunity.py 2013-08-07 13:06:57.000000000 +0000
@@ -0,0 +1,302 @@
+from struct import pack, unpack_from
+
+from .authentication import DoubleMemberAuthentication, MemberAuthentication
+from .candidate import Candidate
+from .community import Community, HardKilledCommunity
+from .conversion import BinaryConversion, DefaultConversion
+from .debug import Node
+from .destination import MemberDestination, CommunityDestination
+from .distribution import DirectDistribution, FullSyncDistribution, LastSyncDistribution
+from .dprint import dprint
+from .member import Member
+from .message import Message, DropPacket, DelayMessageByProof
+from .payload import Payload
+from .resolution import PublicResolution, LinearResolution, DynamicResolution
+from .revision import update_revision_information
+
+# update version information directly from SVN
+update_revision_information("$HeadURL: http://svn.tribler.org/dispersy/trunk/debugcommunity.py $", "$Revision: 31912 $")
+
+#
+# Node
+#
+
+class DebugNode(Node):
+    def _create_text_message(self, message_name, text, global_time, resolution=(), destination=()):
+        assert isinstance(message_name, unicode)
+        assert isinstance(text, str)
+        assert isinstance(global_time, (int, long))
+        assert isinstance(resolution, tuple)
+        assert isinstance(destination, tuple)
+        meta =
self._community.get_meta_message(message_name) + return meta.impl(authentication=(self._my_member,), + resolution=resolution, + distribution=(global_time,), + destination=destination, + payload=(text,)) + + def _create_sequence_text_message(self, message_name, text, global_time, sequence_number): + assert isinstance(message_name, unicode) + assert isinstance(text, str) + assert isinstance(global_time, (int, long)) + assert isinstance(sequence_number, (int, long)) + meta = self._community.get_meta_message(message_name) + return meta.impl(authentication=(self._my_member,), + distribution=(global_time, sequence_number), + payload=(text,)) + + def _create_doublemember_text_message(self, message_name, other, text, global_time): + assert isinstance(message_name, unicode) + assert isinstance(other, Member) + assert not self._my_member == other + assert isinstance(text, str) + assert isinstance(global_time, (int, long)) + meta = self._community.get_meta_message(message_name) + return meta.impl(authentication=([self._my_member, other],), + distribution=(global_time,), + payload=(text,)) + + def create_last_1_test_message(self, text, global_time): + return self._create_text_message(u"last-1-test", text, global_time) + + def create_last_9_test_message(self, text, global_time): + return self._create_text_message(u"last-9-test", text, global_time) + + def create_last_1_doublemember_text_message(self, other, text, global_time): + return self._create_doublemember_text_message(u"last-1-doublemember-text", other, text, global_time) + + def create_full_sync_text_message(self, text, global_time): + return self._create_text_message(u"full-sync-text", text, global_time) + + def create_in_order_text_message(self, text, global_time): + return self._create_text_message(u"ASC-text", text, global_time) + + def create_out_order_text_message(self, text, global_time): + return self._create_text_message(u"DESC-text", text, global_time) + + def create_protected_full_sync_text_message(self, text, global_time): + return self._create_text_message(u"protected-full-sync-text", text, global_time) + + def create_dynamic_resolution_text_message(self, text, global_time, policy): + assert isinstance(policy, (PublicResolution.Implementation, LinearResolution.Implementation)) + return self._create_text_message(u"dynamic-resolution-text", text, global_time, resolution=(policy,)) + + def create_sequence_test_message(self, text, global_time, sequence_number): + return self._create_sequence_text_message(u"sequence-text", text, global_time, sequence_number) +# +# Conversion +# + +class DebugCommunityConversion(BinaryConversion): + def __init__(self, community): + super(DebugCommunityConversion, self).__init__(community, "\x02") + self.define_meta_message(chr(1), community.get_meta_message(u"last-1-test"), self._encode_text, self._decode_text) + self.define_meta_message(chr(2), community.get_meta_message(u"last-9-test"), self._encode_text, self._decode_text) + self.define_meta_message(chr(4), community.get_meta_message(u"double-signed-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(8), community.get_meta_message(u"full-sync-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(9), community.get_meta_message(u"ASC-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(10), community.get_meta_message(u"DESC-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(11), community.get_meta_message(u"last-1-doublemember-text"), self._encode_text, 
self._decode_text) + self.define_meta_message(chr(12), community.get_meta_message(u"protected-full-sync-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(13), community.get_meta_message(u"dynamic-resolution-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(14), community.get_meta_message(u"sequence-text"), self._encode_text, self._decode_text) + + def _encode_text(self, message): + return pack("!B", len(message.payload.text)), message.payload.text + + def _decode_text(self, placeholder, offset, data): + if len(data) < offset + 1: + raise DropPacket("Insufficient packet size") + + text_length, = unpack_from("!B", data, offset) + offset += 1 + + if len(data) < offset + text_length: + raise DropPacket("Insufficient packet size") + + text = data[offset:offset+text_length] + offset += text_length + + return offset, placeholder.meta.payload.implement(text) + +# +# Payload +# + +class TextPayload(Payload): + class Implementation(Payload.Implementation): + def __init__(self, meta, text): + assert isinstance(text, str) + super(TextPayload.Implementation, self).__init__(meta) + self._text = text + + @property + def text(self): + return self._text + +# +# Community +# + +class DebugCommunity(Community): + """ + Community to debug Dispersy related messages and policies. + """ + @property + def my_candidate(self): + return Candidate(self._dispersy.lan_address, False) + + @property + def dispersy_candidate_request_initial_delay(self): + # disable candidate + return 0.0 + + @property + def dispersy_sync_initial_delay(self): + # disable sync + return 0.0 + + def initiate_conversions(self): + return [DefaultConversion(self), DebugCommunityConversion(self)] + + # + # helper methods to check database status + # + + def fetch_packets(self, *message_names): + return [str(packet) for packet, in list(self._dispersy.database.execute(u"SELECT packet FROM sync WHERE meta_message IN (" + ", ".join("?" * len(message_names)) + ") ORDER BY global_time, packet", + [self.get_meta_message(name).database_id for name in message_names]))] + + def fetch_messages(self, *message_names): + """ + Fetch all packets for MESSAGE_NAMES from the database and converts them into + Message.Implementation instances. + """ + return self._dispersy.convert_packets_to_messages(self.fetch_packets(*message_names), community=self, verify=False) + + def delete_messages(self, *message_names): + """ + Deletes all packets for MESSAGE_NAMES from the database. Returns the number of packets + removed. + """ + self._dispersy.database.execute(u"DELETE FROM sync WHERE meta_message IN (" + ", ".join("?" 
* len(message_names)) + ")", + [self.get_meta_message(name).database_id for name in message_names]) + return self._dispersy.database.changes + + def initiate_meta_messages(self): + return [Message(self, u"last-1-test", MemberAuthentication(), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=1), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"last-9-test", MemberAuthentication(), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=9), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"last-1-doublemember-text", DoubleMemberAuthentication(allow_signature_func=self.allow_signature_func), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=1), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"double-signed-text", DoubleMemberAuthentication(allow_signature_func=self.allow_double_signed_text), PublicResolution(), DirectDistribution(), MemberDestination(), TextPayload(), self.check_text, self.on_text), + Message(self, u"full-sync-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + Message(self, u"ASC-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"DESC-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"DESC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"protected-full-sync-text", MemberAuthentication(), LinearResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"dynamic-resolution-text", MemberAuthentication(), DynamicResolution(PublicResolution(), LinearResolution()), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + Message(self, u"sequence-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + ] + + def create_full_sync_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"full-sync-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # double-signed-text + # + + def create_double_signed_text(self, text, member, response_func, response_args=(), timeout=10.0, forward=True): + meta = self.get_meta_message(u"double-signed-text") + message = meta.impl(authentication=([self._my_member, member],), + distribution=(self.global_time,), + 
destination=(member,), + payload=(text,)) + return self.create_dispersy_signature_request(message, response_func, response_args, timeout, forward) + + def allow_double_signed_text(self, message): + """ + Received a request to sign MESSAGE. + """ + dprint(message, " \"", message.payload.text, "\"") + assert message.payload.text in ("Allow=True", "Allow=False") + return message.payload.text == "Allow=True" + + # + # last-1-doublemember-text + # + def allow_signature_func(self, message): + return True + + # + # protected-full-sync-text + # + def create_protected_full_sync_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"protected-full-sync-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # dynamic-resolution-text + # + def create_dynamic_resolution_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"dynamic-resolution-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # sequence-text + # + def create_sequence_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"sequence-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(), meta.distribution.claim_sequence_number()), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # any text-payload + # + + def check_text(self, messages): + for message in messages: + allowed, proof = self._timeline.check(message) + if allowed: + yield message + else: + yield DelayMessageByProof(message) + + def on_text(self, messages): + """ + Received a text message. + """ + for message in messages: + if not "Dprint=False" in message.payload.text: + dprint(message, " \"", message.payload.text, "\" @", message.distribution.global_time) + + def undo_text(self, descriptors): + """ + Received an undo for a text message. + """ + for member, global_time, packet in descriptors: + message = packet.load_message() + dprint("undo \"", message.payload.text, "\" @", global_time) + + def dispersy_cleanup_community(self, message): + if message.payload.is_soft_kill: + raise NotImplementedError() + + elif message.payload.is_hard_kill: + return HardKilledDebugCommunity + +class HardKilledDebugCommunity(DebugCommunity, HardKilledCommunity): + pass diff -Nru tribler-6.2.0/Tribler/dispersy/decorator.py tribler-6.2.0/Tribler/dispersy/decorator.py --- tribler-6.2.0/Tribler/dispersy/decorator.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/decorator.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,305 @@ +import logging +logger = logging.getLogger(__name__) + +from atexit import register as atexit_register +from cProfile import Profile +from collections import defaultdict +from hashlib import sha1 +from thread import get_ident +from threading import current_thread +from time import time +import sys + +if __debug__: + from time import sleep + + +class Constructor(object): + + """ + Allow a class to have multiple constructors. The right one will + be chosen based on the parameter types. 
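+    For example, with the Foo class below, Foo(1) dispatches to _init_from_number
+    while Foo("abc") dispatches to _init_from_str.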
+ + class Foo(Constructor): + @constructor(int) + def _init_from_number(self, i): + pass + + @constructor(str) + def _init_from_str(self, s): + pass + """ + def __new__(cls, *args, **kargs): + # We only need to get __constructors once per class + if not hasattr(cls, "_Constructor__constructors"): + constructors = [] + for m in dir(cls): + attr = getattr(cls, m) + if isinstance(attr, tuple) and len(attr) == 4 and attr[0] == "CONSTRUCTOR": + _, order, types, method = attr + constructors.append((order, types, method)) + setattr(cls, m, method) + constructors.sort() + setattr(cls, "_Constructor__constructors", [(types, method) for _, types, method in constructors]) + return object.__new__(cls) + + def __init__(self, *args, **kargs): + for types, method in getattr(self, "_Constructor__constructors"): + if not len(types) == len(args): + continue + for type_, arg in zip(types, args): + if not isinstance(arg, type_): + break + else: + return method(self, *args, **kargs) + raise RuntimeError("No constructor found for", tuple(map(type, args))) + +__constructor_order = 0 + + +def constructor(*types): + def helper(func): + if __debug__: + # do not do anything when running epydoc + if sys.argv[0] == "(imported)": + return func + global __constructor_order + __constructor_order += 1 + return "CONSTRUCTOR", __constructor_order, types, func + return helper + + +def documentation(documented_func): + def helper(func): + if documented_func.__doc__: + prefix = documented_func.__doc__ + "\n" + else: + prefix = "" + func.__doc__ = prefix + "\n @note: This documentation is copied from " + documented_func.__class__.__name__ + "." + documented_func.__name__ + return func + return helper + +if __debug__: + def runtime_duration_warning(threshold): + assert isinstance(threshold, float), type(threshold) + assert 0.0 <= threshold + + def helper(func): + def runtime_duration_warning_helper(*args, **kargs): + start = time() + try: + return func(*args, **kargs) + finally: + end = time() + if end - start >= threshold: + logger.warning("%.2fs %s", end - start, func) + runtime_duration_warning_helper.__name__ = func.__name__ + "_RDWH" + return runtime_duration_warning_helper + return helper + +else: + def runtime_duration_warning(threshold): + def helper(func): + return func + return helper + +# Niels 21-06-2012: argv seems to be missing if python is not started as a script +if "--profiler" in getattr(sys, "argv", []): + _profiled_threads = set() + + def attach_profiler(func): + def helper(*args, **kargs): + filename = "profile-%s-%d.out" % (current_thread().name, get_ident()) + if filename in _profiled_threads: + raise RuntimeError("Can not attach profiler on the same thread twice") + + logger.debug("running with profiler [%s]", filename) + _profiled_threads.add(filename) + profiler = Profile() + + try: + return profiler.runcall(func, *args, **kargs) + finally: + logger.debug("profiler results [%s]", filename) + profiler.dump_stats(filename) + + return helper + +else: + def attach_profiler(func): + return func + +if "--runtime-statistics" in getattr(sys, "argv", []): + _runtime_statistics_logger = logging.getLogger("runtime-statistics") + _runtime_statistics = defaultdict(lambda: [0, 0.0]) + + def _output_runtime_statistics(): + entries = sorted([(stats[0], stats[1], entry) for entry, stats in _runtime_statistics.iteritems()]) + for count, duration, entry in entries: + if "\n" in entry: + _runtime_statistics_logger.info("<<<%s %dx %.2fs %.2fs\n%s\n>>>", sha1(entry).digest().encode("HEX"), count, duration, duration / count, 
entry)
+
+        _runtime_statistics_logger.info(" COUNT SUM AVG ENTRY")
+        for count, duration, entry in entries:
+            _runtime_statistics_logger.info("%5dx %7.2fs %7.2fs %s", count, duration, duration / count, entry.strip().split("\n")[0])
+    atexit_register(_output_runtime_statistics)
+
+    def attach_runtime_statistics(format_):
+        def helper(func):
+            def attach_runtime_statistics_helper(*args, **kargs):
+                start = time()
+                try:
+                    return func(*args, **kargs)
+                finally:
+                    end = time()
+                    entry = format_.format(function_name=func.__name__, *args, **kargs)
+                    _runtime_statistics_logger.debug(entry)
+                    stats = _runtime_statistics[entry]
+                    stats[0] += 1
+                    stats[1] += (end - start)
+            attach_runtime_statistics_helper.__name__ = func.__name__
+            return attach_runtime_statistics_helper
+        return helper
+
+else:
+    def attach_runtime_statistics(format_):
+        """
+        Keep track of how often and how long a function was called.
+
+        Runtime statistics will only be collected when sys.argv contains '--runtime-statistics'.
+        Otherwise the decorator will not influence the runtime in any way.
+
+        FORMAT_ must be a (unicode)string. Each unique string tracks individual statistics.
+        FORMAT_ uses the format mini language and has access to all the arguments and keyword
+        arguments of the function. Furthermore, the function name is available as a keyword
+        argument called 'function_name'. The python format mini language is described at:
+        http://docs.python.org/2/library/string.html#format-specification-mini-language.
+
+        @attach_runtime_statistics(u"{function_name} bar={0} moo={moo}")
+        def foo(bar, moo='milk'):
+            pass
+
+        foo(1, moo='milk')
+        foo(2, moo='milk')
+        foo(2, moo='milk')
+
+        After running the above example, the statistics will show that:
+        - 'foo bar=1 moo=milk' was called once
+        - 'foo bar=2 moo=milk' was called twice
+        """
+        def helper(func):
+            return func
+        return helper
+
+if __debug__:
+    def main():
+        class Foo(Constructor):
+
+            @constructor(int)
+            def init_a(self, *args):
+                self.init = int
+                self.args = args
+                self.clss = Foo
+
+            @constructor(int, float)
+            def init_b(self, *args):
+                self.init = (int, float)
+                self.args = args
+                self.clss = Foo
+
+            @constructor((str, unicode), )
+            def init_c(self, *args):
+                self.init = ((str, unicode), )
+                self.args = args
+                self.clss = Foo
+
+        class Bar(Constructor):
+
+            @constructor(int)
+            def init_a(self, *args):
+                self.init = int
+                self.args = args
+                self.clss = Bar
+
+            @constructor(int, float)
+            def init_b(self, *args):
+                self.init = (int, float)
+                self.args = args
+                self.clss = Bar
+
+            @constructor((str, unicode), )
+            def init_c(self, *args):
+                self.init = ((str, unicode), )
+                self.args = args
+                self.clss = Bar
+
+        foo = Foo(1)
+        assert foo.init == int
+        assert foo.args == (1, )
+        assert foo.clss == Foo
+
+        foo = Foo(1, 1.0)
+        assert foo.init == (int, float)
+        assert foo.args == (1, 1.0)
+        assert foo.clss == Foo
+
+        foo = Foo("a")
+        assert foo.init == ((str, unicode), )
+        assert foo.args == ("a", )
+        assert foo.clss == Foo
+
+        foo = Foo(u"a")
+        assert foo.init == ((str, unicode), )
+        assert foo.args == (u"a", )
+        assert foo.clss == Foo
+
+        bar = Bar(1)
+        assert bar.init == int
+        assert bar.args == (1, )
+        assert bar.clss == Bar
+
+        bar = Bar(1, 1.0)
+        assert bar.init == (int, float)
+        assert bar.args == (1, 1.0)
+        assert bar.clss == Bar
+
+        bar = Bar("a")
+        assert bar.init == ((str, unicode), )
+        assert bar.args == ("a", )
+        assert bar.clss == Bar
+
+        bar = Bar(u"a")
+        assert bar.init == ((str, unicode), )
+        assert bar.args == (u"a", )
+        assert bar.clss == Bar
+
+        def invalid_args(cls, *args):
+            try:
+                obj
= cls(*args) + assert False + except RuntimeError: + pass + + invalid_args(Foo, 1.0) + invalid_args(Foo, "a", 1) + invalid_args(Foo, 1, 1.0, 1) + invalid_args(Foo, []) + + invalid_args(Bar, 1.0) + invalid_args(Bar, "a", 1) + invalid_args(Bar, 1, 1.0, 1) + invalid_args(Bar, []) + + print "Constructor test passed" + + @runtime_duration_warning(1.0) + def test(delay): + sleep(delay) + + test(0.5) + test(1.5) + + print "Runtime duration test complete" + + if __name__ == "__main__": + main() diff -Nru tribler-6.2.0/Tribler/dispersy/destination.py tribler-6.2.0/Tribler/dispersy/destination.py --- tribler-6.2.0/Tribler/dispersy/destination.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/destination.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,82 @@ +from .meta import MetaObject + + +class Destination(MetaObject): + + class Implementation(MetaObject.Implementation): + pass + + def setup(self, message): + """ + Setup is called after the meta message is initially created. + """ + if __debug__: + from .message import Message + assert isinstance(message, Message) + + def __str__(self): + return "<%s>" % (self.__class__.__name__,) + + +class CandidateDestination(Destination): + + """ + A destination policy where the message is sent to one or more specified candidates. + """ + class Implementation(Destination.Implementation): + + def __init__(self, meta, *candidates): + """ + Construct a CandidateDestination.Implementation object. + + META the associated CandidateDestination object. + + CANDIDATES is a tuple containing zero or more Candidate objects. These will contain the + destination addresses when the associated message is sent. + """ + if __debug__: + from .candidate import Candidate + assert isinstance(candidates, tuple), type(candidates) + assert len(candidates) >= 0, len(candidates) + assert all(isinstance(candidate, Candidate) for candidate in candidates), [type(candidate) for candidate in candidates] + super(CandidateDestination.Implementation, self).__init__(meta) + self._candidates = candidates + + @property + def candidates(self): + return self._candidates + + +class CommunityDestination(Destination): + + """ + A destination policy where the message is sent to one or more community members selected from + the current candidate list. + + At the time of sending at most NODE_COUNT addresses are obtained using + community.yield_random_candidates(...) to receive the message. + """ + class Implementation(Destination.Implementation): + + @property + def node_count(self): + return self._meta._node_count + + def __init__(self, node_count): + """ + Construct a CommunityDestination object. + + NODE_COUNT is an integer giving the number of nodes where, when the message is created, the + message must be sent to. These nodes are selected using the + community.yield_random_candidates(...) method. NODE_COUNT must be zero or higher. 
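+
+        For example (mirroring its use in the debug community):
+
+        >>> CommunityDestination(node_count=10)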
+ """ + assert isinstance(node_count, int) + assert node_count >= 0 + self._node_count = node_count + + @property + def node_count(self): + return self._node_count + + def __str__(self): + return "<%s node_count:%d>" % (self.__class__.__name__, self._node_count) diff -Nru tribler-6.2.0/Tribler/dispersy/dispersy.py tribler-6.2.0/Tribler/dispersy/dispersy.py --- tribler-6.2.0/Tribler/dispersy/dispersy.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/dispersy.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,4585 @@ +""" +The Distributed Permission System, or Dispersy, is a platform to simplify the design of distributed +communities. At the heart of Dispersy lies a simple identity and message handling system where each +community and each user is uniquely and securely identified using elliptic curve cryptography. + +Since we can not guarantee each member to be online all the time, messages that they created at one +point in time should be able to retain their meaning even when the member is off-line. This can be +achieved by signing such messages and having them propagated though other nodes in the network. +Unfortunately, this increases the strain on these other nodes, which we try to alleviate using +specific message policies, which will be described below. + +Following from this, we can easily package each message into one UDP packet to simplify +connect-ability problems since UDP packets are much easier to pass though NAT's and firewalls. + +Earlier we hinted that messages can have different policies. A message has the following four +different policies, and each policy defines how a specific part of the message should be handled. + + - Authentication defines if the message is signed, and if so, by how many members. + + - Resolution defines how the permission system should resolve conflicts between messages. + + - Distribution defines if the message is send once or if it should be gossiped around. In the + latter case, it can also define how many messages should be kept in the network. + + - Destination defines to whom the message should be send or gossiped. + +To ensure that every node handles a messages in the same way, i.e. has the same policies associated +to each message, a message exists in two stages. The meta-message and the implemented-message +stage. Each message has one meta-message associated to it and tells us how the message is supposed +to be handled. When a message is send or received an implementation is made from the meta-message +that contains information specifically for that message. For example: a meta-message could have the +member-authentication-policy that tells us that the message must be signed by a member but only the +an implemented-message will have data and this signature. + +A community can tweak the policies and how they behave by changing the parameters that the policies +supply. Aside from the four policies, each meta-message also defines the community that it is part +of, the name it uses as an internal identifier, and the class that will contain the payload. +""" + +import logging +logger = logging.getLogger(__name__) + +import os +import sys +import netifaces + +try: + # python 2.7 only... 
+    from collections import OrderedDict
+except ImportError:
+    from .python27_ordereddict import OrderedDict
+
+from collections import defaultdict
+from hashlib import sha1
+from itertools import groupby, islice, count
+from pprint import pformat
+from socket import inet_aton, error as socket_error
+from time import time
+
+from .authentication import NoAuthentication, MemberAuthentication, DoubleMemberAuthentication
+from .bloomfilter import BloomFilter
+from .bootstrap import get_bootstrap_candidates
+from .candidate import BootstrapCandidate, LoopbackCandidate, WalkCandidate, Candidate
+from .crypto import ec_generate_key, ec_to_public_bin, ec_to_private_bin
+from .destination import CommunityDestination, CandidateDestination
+from .dispersydatabase import DispersyDatabase
+from .distribution import SyncDistribution, FullSyncDistribution, LastSyncDistribution, DirectDistribution, GlobalTimePruning
+from .member import DummyMember, Member
+from .message import BatchConfiguration, Packet, Message
+from .message import DropMessage, DelayMessage, DelayMessageByProof, DelayMessageBySequence, DelayMessageByMissingMessage
+from .message import DropPacket, DelayPacket
+from .payload import AuthorizePayload, RevokePayload, UndoPayload
+from .payload import DestroyCommunityPayload
+from .payload import DynamicSettingsPayload
+from .payload import IdentityPayload, MissingIdentityPayload
+from .payload import IntroductionRequestPayload, IntroductionResponsePayload, PunctureRequestPayload, PuncturePayload
+from .payload import MissingMessagePayload, MissingLastMessagePayload
+from .payload import MissingSequencePayload, MissingProofPayload
+from .payload import SignatureRequestPayload, SignatureResponsePayload
+from .requestcache import Cache, RequestCache
+from .resolution import PublicResolution, LinearResolution
+from .statistics import DispersyStatistics
+
+if __debug__:
+    from .callback import Callback
+    from .endpoint import Endpoint
+
+# the callback identifier for the task that periodically takes a step
+CANDIDATE_WALKER_CALLBACK_ID = u"dispersy-candidate-walker"
+
+
+class SignatureRequestCache(Cache):
+    cleanup_delay = 0.0
+
+    def __init__(self, members, response_func, response_args, timeout):
+        self.request = None
+        # MEMBERS is a list containing all the members that should add their signature. currently
+        # we only support double signed messages, hence MEMBERS contains only a single Member
+        # instance.
+        self.members = members
+        self.response_func = response_func
+        self.response_args = response_args
+        self.timeout_delay = timeout
+
+    def on_timeout(self):
+        logger.debug("signature timeout")
+        self.response_func(self, None, True, *self.response_args)
+
+
+class IntroductionRequestCache(Cache):
+    # we will accept the response at most 10.5 seconds after our request
+    timeout_delay = 10.5
+    # the cache remains available at most 4.5 seconds after receiving the response. this gives
+    # some time to receive the puncture message
+    cleanup_delay = 4.5
+
+    def __init__(self, community, helper_candidate):
+        self.community = community
+        self.helper_candidate = helper_candidate
+        self.response_candidate = None
+        self.puncture_candidate = None
+
+    def on_timeout(self):
+        # helper_candidate did not respond to a request message in this community. After some time
+        # inactive candidates become obsolete and will be removed by
+        # _periodically_cleanup_candidates
+        logger.debug("walker timeout for %s", self.helper_candidate)
+
+        self.community.dispersy.statistics.dict_inc(self.community.dispersy.statistics.walk_fail, self.helper_candidate.sock_addr)
+
+        # set the candidate to obsolete
+        self.helper_candidate.obsolete(time())
+
+
+class MissingSomethingCache(Cache):
+    cleanup_delay = 0.0
+
+    def __init__(self, timeout):
+        logger.debug("%s: waiting for %f seconds", self.__class__.__name__, timeout)
+        self.timeout_delay = timeout
+        self.callbacks = []
+
+    def on_timeout(self):
+        logger.debug("%s: timeout on %d callbacks", self.__class__.__name__, len(self.callbacks))
+        for func, args in self.callbacks:
+            func(None, *args)
+
+    @staticmethod
+    def properties_to_identifier(*args):
+        raise NotImplementedError()
+
+    @staticmethod
+    def message_to_identifier(message):
+        raise NotImplementedError()
+
+
+class MissingMemberCache(MissingSomethingCache):
+
+    @staticmethod
+    def properties_to_identifier(community, member):
+        return "-missing-member-%s-%s-" % (community.cid, member.mid)
+
+    @staticmethod
+    def message_to_identifier(message):
+        return "-missing-member-%s-%s-" % (message.community.cid, message.authentication.member.mid)
+
+
+class MissingMessageCache(MissingSomethingCache):
+
+    @staticmethod
+    def properties_to_identifier(community, member, global_time):
+        return "-missing-message-%s-%s-%d-" % (community.cid, member.mid, global_time)
+
+    @staticmethod
+    def message_to_identifier(message):
+        return "-missing-message-%s-%s-%d-" % (message.community.cid, message.authentication.member.mid, message.distribution.global_time)
+
+
+class MissingLastMessageCache(MissingSomethingCache):
+
+    @staticmethod
+    def properties_to_identifier(community, member, message):
+        return "-missing-last-message-%s-%s-%s-" % (community.cid, member.mid, message.name.encode("UTF-8"))
+
+    @staticmethod
+    def message_to_identifier(message):
+        return "-missing-last-message-%s-%s-%s-" % (message.community.cid, message.authentication.member.mid, message.name.encode("UTF-8"))
+
+
+class MissingProofCache(MissingSomethingCache):
+
+    def __init__(self, timeout):
+        super(MissingProofCache, self).__init__(timeout)
+
+        # duplicates contains the (meta message, member) pairs for which we have already requested
+        # proof; this allows us to send fewer duplicate requests
+        self.duplicates = []
+
+    @staticmethod
+    def properties_to_identifier(community):
+        return "-missing-proof-%s-" % (community.cid,)
+
+    @staticmethod
+    def message_to_identifier(message):
+        return "-missing-proof-%s-" % (message.community.cid,)
+
+
+class MissingSequenceOverviewCache(Cache):
+    cleanup_delay = 0.0
+
+    def __init__(self, timeout):
+        self.timeout_delay = timeout
+        self.missing_high = 0
+
+    def on_timeout(self):
+        pass
+
+    @staticmethod
+    def properties_to_identifier(community, member, message):
+        return "-missing-sequence-overview-%s-%s-%s-" % (community.cid, member.mid, message.name.encode("UTF-8"))
+
+
+class MissingSequenceCache(MissingSomethingCache):
+
+    @staticmethod
+    def properties_to_identifier(community, member, message, missing_high):
+        return "-missing-sequence-%s-%s-%s-%d-" % (community.cid, member.mid, message.name.encode("UTF-8"), missing_high)
+
+    @staticmethod
+    def message_to_identifier(message):
+        return "-missing-sequence-%s-%s-%s-%d-" % (message.community.cid, message.authentication.member.mid, message.name.encode("UTF-8"), message.distribution.sequence_number)
+
+
+class Dispersy(object):
+
+    """
+    The Dispersy class provides the interface to all Dispersy related commands, managing the in-
+    and outgoing data for, possibly, multiple communities.
+    """
+    def __init__(self, callback, endpoint, working_directory, database_filename=u"dispersy.db"):
+        """
+        Initialise a Dispersy instance.
+
+        @param callback: Instance for callback scheduling.
+        @type callback: Callback
+
+        @param endpoint: Instance for communication.
+        @type endpoint: Endpoint
+
+        @param working_directory: The directory where all files should be stored.
+        @type working_directory: unicode
+
+        @param database_filename: The database filename or u":memory:"
+        @type database_filename: unicode
+        """
+        assert isinstance(callback, Callback), type(callback)
+        assert isinstance(endpoint, Endpoint), type(endpoint)
+        assert isinstance(working_directory, unicode), type(working_directory)
+        assert isinstance(database_filename, unicode), type(database_filename)
+        super(Dispersy, self).__init__()
+
+        # the thread we will be using
+        self._callback = callback
+
+        # communication endpoint
+        self._endpoint = endpoint
+
+        # batch caching incoming packets
+        self._batch_cache = {}
+
+        # where we store all data
+        self._working_directory = os.path.abspath(working_directory)
+
+        self._member_cache_by_public_key = OrderedDict()
+        self._member_cache_by_hash = dict()
+        self._member_cache_by_database_id = dict()
+
+        # our data storage
+        if database_filename != u":memory:":
+            database_directory = os.path.join(self._working_directory, u"sqlite")
+            if not os.path.isdir(database_directory):
+                os.makedirs(database_directory)
+            database_filename = os.path.join(database_directory, database_filename)
+        self._database = DispersyDatabase(database_filename)
+
+        # assigns temporary cache objects to unique identifiers
+        self._request_cache = RequestCache(self._callback)
+
+        # indicates what our connection type is. currently it can be u"unknown", u"public", or
+        # u"symmetric-NAT"
+        self._connection_type = u"unknown"
+
+        # our LAN and WAN addresses
+        self._lan_address = (self._guess_lan_address() or "0.0.0.0", 0)
+        self._wan_address = ("0.0.0.0", 0)
+        self._wan_address_votes = {}
+        if __debug__:
+            logger.debug("my LAN address is %s:%d", self._lan_address[0], self._lan_address[1])
+            logger.debug("my WAN address is %s:%d", self._wan_address[0], self._wan_address[1])
+
+        # bootstrap peers
+        bootstrap_candidates = get_bootstrap_candidates(self)
+        if not all(bootstrap_candidates):
+            self._callback.register(self._retry_bootstrap_candidates)
+        self._bootstrap_candidates = dict((candidate.sock_addr, candidate) for candidate in bootstrap_candidates if candidate)
+
+        # communities that can be auto loaded. classification:(cls, args, kargs) pairs.
+        self._auto_load_communities = OrderedDict()
+
+        # loaded communities. cid:Community pairs.
+        self._communities = {}
+        self._walker_commmunities = []
+
+        self._check_distribution_batch_map = {DirectDistribution: self._check_direct_distribution_batch,
+                                              FullSyncDistribution: self._check_full_sync_distribution_batch,
+                                              LastSyncDistribution: self._check_last_sync_distribution_batch}
+
+        # progress handlers (used to notify the user when something will take a long time)
+        self._progress_handlers = []
+
+        # commit changes to the database periodically
+        self._callback.register(self._watchdog)
+
+        # statistics...
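+        # the DispersyStatistics instance collects runtime counters (for example
+        # walk_fail, incremented in IntroductionRequestCache.on_timeout above) and
+        # is exposed through the statistics property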
+        self._statistics = DispersyStatistics(self)
+
+        # memory profiler
+        if "--memory-dump" in sys.argv:
+            def memory_dump():
+                from meliae import scanner
+                start = time()
+                try:
+                    while True:
+                        yield float(60 * 60)
+                        scanner.dump_all_objects("memory-%d.out" % (time() - start))
+                except GeneratorExit:
+                    scanner.dump_all_objects("memory-%d-shutdown.out" % (time() - start))
+
+            self._callback.register(memory_dump)
+
+        self._callback.register(self._stats_candidates)
+        self._callback.register(self._stats_detailed_candidates)
+
+    @staticmethod
+    def _guess_lan_address():
+        """
+        Returns the address of the first AF_INET interface it can find.
+        """
+        blacklist = ["127.0.0.1", "0.0.0.0", "255.255.255.255"]
+        for interface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(interface)
+            for option in addresses.get(netifaces.AF_INET, []):
+                if "broadcast" in option and "addr" in option and not option["addr"] in blacklist:
+                    logger.debug("interface %s address %s", interface, option["addr"])
+                    return option["addr"]
+        # Exception for virtual machines/containers
+        for interface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(interface)
+            for option in addresses.get(netifaces.AF_INET, []):
+                if "addr" in option and not option["addr"] in blacklist:
+                    logger.debug("interface %s address %s", interface, option["addr"])
+                    return option["addr"]
+        logger.error("Unable to find our public interface!")
+        return None
+
+    def _retry_bootstrap_candidates(self):
+        """
+        One or more bootstrap addresses could not be retrieved.
+
+        For the first 30 seconds we will attempt to resolve the addresses once every second.  If we
+        did not succeed after 30 seconds we will retry once every 30 seconds until we succeed.
+        """
+        logger.warning("unable to resolve all bootstrap addresses")
+        for counter in count(1):
+            yield 1.0 if counter < 30 else 30.0
+            logger.warning("attempt #%d", counter)
+            candidates = get_bootstrap_candidates(self)
+            for candidate in candidates:
+                if candidate is None:
+                    break
+            else:
+                logger.debug("resolved all bootstrap addresses")
+                self._bootstrap_candidates = dict((candidate.sock_addr, candidate) for candidate in candidates if candidate)
+                break
+
+    @property
+    def working_directory(self):
+        """
+        The full directory path where all dispersy related files are stored.
+        @rtype: unicode
+        """
+        return self._working_directory
+
+    @property
+    def endpoint(self):
+        """
+        The endpoint object used to send packets.
+        @rtype: Object with a send(address, data) method
+        """
+        return self._endpoint
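+
+    # The scheduling idiom used throughout this class, sketched: tasks registered
+    # with Callback may be generator functions, and each yielded value is taken as
+    # the delay, in seconds, before the task is resumed (see memory_dump and
+    # _retry_bootstrap_candidates above).  The task body here is hypothetical:
+    #
+    #     def _example_periodic_task(self):
+    #         while True:
+    #             yield 30.0            # resume roughly every 30 seconds
+    #             do_periodic_work()    # hypothetical helper
+    #
+    #     self._callback.register(self._example_periodic_task)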
+ """ + host, port = self._endpoint.get_address() + logger.warn("update LAN address %s:%d -> %s:%d", self._lan_address[0], self._lan_address[1], self._lan_address[0], port) + self._lan_address = (self._lan_address[0], port) + + # at this point we do not yet have a WAN address, set it to the LAN address to ensure we + # have something + assert self._wan_address == ("0.0.0.0", 0) + logger.warn("update WAN address %s:%d -> %s:%d", self._wan_address[0], self._wan_address[1], self._lan_address[0], self._lan_address[1]) + self._wan_address = self._lan_address + + if not self.is_valid_address(self._lan_address): + logger.warn("update LAN address %s:%d -> %s:%d", self._lan_address[0], self._lan_address[1], host, self._lan_address[1]) + self._lan_address = (host, self._lan_address[1]) + + if not self.is_valid_address(self._lan_address): + logger.warn("update LAN address %s:%d -> %s:%d", self._lan_address[0], self._lan_address[1], self._wan_address[0], self._lan_address[1]) + self._lan_address = (self._wan_address[0], self._lan_address[1]) + + # our address may not be a bootstrap address + if self._lan_address in self._bootstrap_candidates: + del self._bootstrap_candidates[self._lan_address] + + # our address may not be a candidate + for community in self._communities.itervalues(): + community.candidates.pop(self._lan_address, None) + + @property + def lan_address(self): + """ + The LAN address where we believe people who are inside our LAN can find us. + + Our LAN address is determined by the default gateway of our + system and our port. + + @rtype: (str, int) + """ + return self._lan_address + + @property + def wan_address(self): + """ + The wan address where we believe that we can be found from outside our LAN. + + Our wan address is determined by majority voting. Each time when we receive a message + that contains an opinion about our wan address, we take this into account. The + address with the most votes wins. + + Votes can be added by calling the wan_address_vote(...) method. + + Usually these votes are received through dispersy-introduction-request and + dispersy-introduction-response messages. + + @rtype: (str, int) + """ + return self._wan_address + + @property + def connection_type(self): + """ + The connection type that we believe we have. + + Currently the following types are recognized: + - u'unknown': the default value until the actual type can be recognized. + - u'public': when the LAN and WAN addresses are determined to be the same. + - u'symmetric-NAT': when each remote peer reports different external port numbers. + + @rtype: unicode + """ + return self._connection_type + + @property + def callback(self): + return self._callback + + @property + def database(self): + """ + The Dispersy database singleton. + @rtype: DispersyDatabase + """ + return self._database + + @property + def request_cache(self): + """ + The request cache instance responsible for maintaining identifiers and timeouts for + outstanding requests. + @rtype: RequestCache + """ + return self._request_cache + + @property + def statistics(self): + """ + The Statistics instance. + """ + return self._statistics + + def initiate_meta_messages(self, community): + """ + Create the meta messages that Dispersy uses. + + This method is called once for each community when it is created. The resulting meta + messages can be obtained by either community.get_meta_message(name) or + community.get_meta_messages(). 
+
+    def initiate_meta_messages(self, community):
+        """
+        Create the meta messages that Dispersy uses.
+
+        This method is called once for each community when it is created.  The resulting meta
+        messages can be obtained by either community.get_meta_message(name) or
+        community.get_meta_messages().
+
+        Since these meta messages will be used alongside the meta messages that each community
+        provides, all message names are prefixed with 'dispersy-' to ensure that the names are
+        unique.
+
+        @param community: The community that will get the messages.
+        @type community: Community
+
+        @return: The new meta messages.
+        @rtype: [Message]
+        """
+        if __debug__:
+            from .community import Community
+        assert isinstance(community, Community)
+        messages = [Message(community, u"dispersy-identity", MemberAuthentication(encoding="bin"), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=16, history_size=1), CommunityDestination(node_count=0), IdentityPayload(), self._generic_timeline_check, self.on_identity),
+                    Message(community, u"dispersy-signature-request", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), SignatureRequestPayload(), self.check_signature_request, self.on_signature_request),
+                    Message(community, u"dispersy-signature-response", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), SignatureResponsePayload(), self.check_signature_response, self.on_signature_response),
+                    Message(community, u"dispersy-authorize", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), AuthorizePayload(), self._generic_timeline_check, self.on_authorize),
+                    Message(community, u"dispersy-revoke", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), RevokePayload(), self._generic_timeline_check, self.on_revoke),
+                    Message(community, u"dispersy-undo-own", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), UndoPayload(), self.check_undo, self.on_undo),
+                    Message(community, u"dispersy-undo-other", MemberAuthentication(), LinearResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), UndoPayload(), self.check_undo, self.on_undo),
+                    Message(community, u"dispersy-destroy-community", MemberAuthentication(), LinearResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=192), CommunityDestination(node_count=50), DestroyCommunityPayload(), self._generic_timeline_check, self.on_destroy_community),
+                    Message(community, u"dispersy-dynamic-settings", MemberAuthentication(), LinearResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"DESC", priority=191), CommunityDestination(node_count=10), DynamicSettingsPayload(), self._generic_timeline_check, community.dispersy_on_dynamic_settings),
+
+                    #
+                    # when something is missing, a dispersy-missing-... message can be used to request
+                    # it from another peer
+                    #
+
+                    # when we have a member id (20 byte sha1 of the public key) but not the public key
+                    Message(community, u"dispersy-missing-identity", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), MissingIdentityPayload(), self._generic_timeline_check, self.on_missing_identity),
+
+                    # when we are missing one or more SyncDistribution messages in a certain sequence
+                    Message(community, u"dispersy-missing-sequence", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), MissingSequencePayload(), self._generic_timeline_check, self.on_missing_sequence, batch=BatchConfiguration(max_window=0.1)),
+
+                    # when we have a reference to a message that we do not have.  a reference consists
+                    # of the community identifier, the member identifier, and the global time
+                    Message(community, u"dispersy-missing-message", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), MissingMessagePayload(), self._generic_timeline_check, self.on_missing_message),
+
+                    # when we might be missing a dispersy-authorize message
+                    Message(community, u"dispersy-missing-proof", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), MissingProofPayload(), self._generic_timeline_check, self.on_missing_proof),
+
+                    # when we have a reference to a LastSyncDistribution that we do not have.  a
+                    # reference consists of the community identifier and the member identifier
+                    Message(community, u"dispersy-missing-last-message", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), MissingLastMessagePayload(), self._generic_timeline_check, self.on_missing_last_message),
+                    ]
+
+        if community.dispersy_enable_candidate_walker_responses:
+            messages.extend([Message(community, u"dispersy-introduction-request", MemberAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), IntroductionRequestPayload(), self.check_introduction_request, self.on_introduction_request),
+                             Message(community, u"dispersy-introduction-response", MemberAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), IntroductionResponsePayload(), self.check_introduction_response, self.on_introduction_response),
+                             Message(community, u"dispersy-puncture-request", NoAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), PunctureRequestPayload(), self.check_puncture_request, self.on_puncture_request),
+                             Message(community, u"dispersy-puncture", MemberAuthentication(), PublicResolution(), DirectDistribution(), CandidateDestination(), PuncturePayload(), self.check_puncture, self.on_puncture)])
+
+        return messages
+
+    def define_auto_load(self, community_cls, args=(), kargs=None, load=False):
+        """
+        Tell Dispersy how to load COMMUNITY when it is needed.
+
+        COMMUNITY_CLS is the community class that is defined.
+
+        ARGS and KARGS are optional arguments and keyword arguments used when a community is loaded
+        using COMMUNITY_CLS.load_community(self, master, *ARGS, **KARGS).
+
+        When LOAD is True all available communities of this type will be immediately loaded.
+
+        Returns a list with loaded communities.
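+
+        A hedged usage sketch (MyCommunity is illustrative, not part of this diff):
+
+            communities = dispersy.define_auto_load(MyCommunity, load=True)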
+ """ + if __debug__: + from .community import Community + assert self._callback.is_current_thread, "Must be called from the callback thread" + assert issubclass(community_cls, Community), type(community_cls) + assert isinstance(args, tuple), type(args) + assert kargs is None or isinstance(kargs, dict), type(kargs) + assert not community_cls.get_classification() in self._auto_load_communities + assert isinstance(load, bool), type(load) + + if kargs is None: + kargs = {} + self._auto_load_communities[community_cls.get_classification()] = (community_cls, args, kargs) + + communities = [] + if load: + for master in community_cls.get_master_members(self): + if not master.mid in self._communities: + logger.debug("Loading %s at start", community_cls.get_classification()) + community = community_cls.load_community(self, master, *args, **kargs) + communities.append(community) + assert community.master_member.mid == master.mid + assert community.master_member.mid in self._communities + + return communities + + def undefine_auto_load(self, community): + """ + Tell Dispersy to no longer load COMMUNITY. + + COMMUNITY is the community class that is defined. + """ + if __debug__: + from .community import Community + assert issubclass(community, Community) + assert community.get_classification() in self._auto_load_communities + del self._auto_load_communities[community.get_classification()] + + def attach_progress_handler(self, func): + assert callable(func), "handler must be callable" + self._progress_handlers.append(func) + + def detach_progress_handler(self, func): + assert callable(func), "handler must be callable" + assert func in self._progress_handlers, "handler is not attached" + self._progress_handlers.remove(func) + + def get_progress_handlers(self): + return self._progress_handlers + + def get_member(self, public_key, private_key=""): + """ + Returns a Member instance associated with public_key. + + Since we have the public_key, we can create this user when it didn't already exist. Hence, + this method always succeeds. + + @param public_key: The public key of the member we want to obtain. + @type public_key: string + + @return: The Member instance associated with public_key. + @rtype: Member + + @note: This returns -any- Member, it may not be a member that is part of this community. + + @todo: Since this method returns Members that are not specifically bound to any community, + this method should be moved to Dispersy + """ + assert isinstance(public_key, str) + assert isinstance(private_key, str) + member = self._member_cache_by_public_key.get(public_key) + if member: + if private_key and not member.private_key: + member.set_private_key(private_key) + + else: + member = Member(self, public_key, private_key) + + # store in caches + self._member_cache_by_public_key[public_key] = member + self._member_cache_by_hash[member.mid] = member + self._member_cache_by_database_id[member.database_id] = member + + # limit cache length + if len(self._member_cache_by_public_key) > 1024: + _, pop = self._member_cache_by_public_key.popitem(False) + del self._member_cache_by_hash[pop.mid] + del self._member_cache_by_database_id[pop.database_id] + + return member + + def get_new_member(self, curve=u"medium"): + """ + Returns a Member instance created from a newly generated public key. 
+ """ + assert isinstance(curve, unicode), type(curve) + ec = ec_generate_key(curve) + return self.get_member(ec_to_public_bin(ec), ec_to_private_bin(ec)) + + def get_temporary_member_from_id(self, mid): + """ + Returns a temporary Member instance reserving the MID until (hopefully) the public key + becomes available. + + This method should be used with caution as this will create a real Member without having the + public key available. This method is (sometimes) used when joining a community when we only + have its CID (=MID). + + @param mid: The 20 byte sha1 digest indicating a member. + @type mid: string + + @return: A (Dummy)Member instance + @rtype: DummyMember or Member + """ + assert isinstance(mid, str), type(mid) + assert len(mid) == 20, len(mid) + return self._member_cache_by_hash.get(mid) or DummyMember(self, mid) + + def get_members_from_id(self, mid): + """ + Returns zero or more Member instances associated with mid, where mid is the sha1 digest of a + member public key. + + As we are using only 20 bytes to represent the actual member public key, this method may + return multiple possible Member instances. In this case, other ways must be used to figure + out the correct Member instance. For instance: if a signature or encryption is available, + all Member instances could be used, but only one can succeed in verifying or decrypting. + + Since we may not have the public key associated to MID, this method may return an empty + list. In such a case it is sometimes possible to DelayPacketByMissingMember to obtain the + public key. + + @param mid: The 20 byte sha1 digest indicating a member. + @type mid: string + + @return: A list containing zero or more Member instances. + @rtype: [Member] + + @note: This returns -any- Member, it may not be a member that is part of this community. + """ + assert isinstance(mid, str), type(mid) + assert len(mid) == 20, len(mid) + member = self._member_cache_by_hash.get(mid) + if member: + return [member] + + else: + # note that this allows a security attack where someone might obtain a crypographic + # key that has the same sha1 as the master member, however unlikely. the only way to + # prevent this, as far as we know, is to increase the size of the community + # identifier, for instance by using sha256 instead of sha1. + return [self.get_member(str(public_key)) + for public_key, + in list(self._database.execute(u"SELECT public_key FROM member WHERE mid = ?", (buffer(mid),))) + if public_key] + + def get_member_from_database_id(self, database_id): + """ + Returns a Member instance associated with DATABASE_ID or None when this row identifier is + not available. + """ + assert isinstance(database_id, (int, long)), type(database_id) + member = self._member_cache_by_database_id.get(database_id) + if not member: + try: + public_key, = next(self._database.execute(u"SELECT public_key FROM member WHERE id = ?", (database_id,))) + except StopIteration: + pass + else: + member = self.get_member(str(public_key)) + return member + + def attach_community(self, community): + """ + Add a community to the Dispersy instance. + + Each community must be known to Dispersy, otherwise an incoming message will not be able to + be passed along to it's associated community. + + In general this method is called from the Community.__init__(...) method. + + @param community: The community that will be added. 
+        @type community: Community
+        """
+        if __debug__:
+            from .community import Community
+        assert isinstance(community, Community)
+        logger.debug("%s %s", community.cid.encode("HEX"), community.get_classification())
+        assert not community.cid in self._communities
+        assert not community in self._walker_commmunities
+        self._communities[community.cid] = community
+        community.dispersy_check_database()
+
+        if community.dispersy_enable_candidate_walker:
+            self._walker_commmunities.insert(0, community)
+            # restart walker scheduler
+            self._callback.replace_register(CANDIDATE_WALKER_CALLBACK_ID, self._candidate_walker)
+
+        # count the number of times that a community was attached
+        self._statistics.dict_inc(self._statistics.attachment, community.cid)
+
+        if __debug__:
+            # schedule the sanity check... it also checks that the dispersy-identity is available and
+            # when this is a create or join this message is created only after the attach_community
+            if "--sanity-check" in sys.argv:
+                try:
+                    self.sanity_check(community)
+                except ValueError:
+                    logger.exception("sanity check fail for %s", community)
+                    assert False, "One or more exceptions occurred during sanity check"
+
+    def detach_community(self, community):
+        """
+        Remove an attached community from the Dispersy instance.
+
+        Once a community is detached it will no longer receive incoming messages.  When the
+        community is marked as auto_load it will be loaded, using community.load_community(...),
+        when a message for this community is received.
+
+        @param community: The community that will be detached.
+        @type community: Community
+        """
+        if __debug__:
+            from .community import Community
+        assert isinstance(community, Community)
+        logger.debug("%s %s", community.cid.encode("HEX"), community.get_classification())
+        assert community.cid in self._communities
+        assert self._communities[community.cid] == community
+        assert not community.dispersy_enable_candidate_walker or community in self._walker_commmunities, [community.dispersy_enable_candidate_walker, community in self._walker_commmunities]
+        del self._communities[community.cid]
+
+        # stop walker
+        if community.dispersy_enable_candidate_walker:
+            self._walker_commmunities.remove(community)
+            if self._walker_commmunities:
+                # restart walker scheduler
+                self._callback.replace_register(CANDIDATE_WALKER_CALLBACK_ID, self._candidate_walker)
+            else:
+                # stop walker scheduler
+                self._callback.unregister(CANDIDATE_WALKER_CALLBACK_ID)
+
+        # remove any items that are left in the cache
+        for meta in community.get_meta_messages():
+            if meta.batch.enabled and meta in self._batch_cache:
+                task_identifier, _, _ = self._batch_cache[meta]
+                self._callback.unregister(task_identifier)
+
+    def reclassify_community(self, source, destination):
+        """
+        Change a community classification.
+
+        Each community has a classification that dictates what source code is handling this
+        community.  By default the classification of a community is the unicode name of the class
+        in the source code.
+
+        In some cases it may be useful to change the classification, for instance: if community A
+        has a subclass community B, where B has similar but reduced capabilities, we could
+        reclassify B to A at some point and keep all messages collected so far while using the
+        increased capabilities of community A.
+
+        @param source: The community that will be reclassified.  This must be either a Community
+        instance (when the community is loaded) or a Member instance giving the master member (when
+        the community is not loaded).
+        @type source: Community or Member
+
+        @param destination: The new community classification.  This must be a Community class.
+        @type destination: Community class
+        """
+        if __debug__:
+            from .community import Community
+        assert isinstance(source, (Community, Member))
+        assert issubclass(destination, Community)
+
+        destination_classification = destination.get_classification()
+
+        if isinstance(source, Member):
+            logger.debug("reclassify -> %s", destination_classification)
+            master = source
+
+        else:
+            logger.debug("reclassify %s -> %s", source.get_classification(), destination_classification)
+            assert source.cid in self._communities
+            assert self._communities[source.cid] == source
+            master = source.master_member
+            source.unload_community()
+
+        self._database.execute(u"UPDATE community SET classification = ? WHERE master = ?",
+                               (destination_classification, master.database_id))
+        assert self._database.changes == 1
+
+        if destination_classification in self._auto_load_communities:
+            cls, args, kargs = self._auto_load_communities[destination_classification]
+            assert cls == destination, [cls, destination]
+        else:
+            args = ()
+            kargs = {}
+
+        return destination.load_community(self, master, *args, **kargs)
+
+    def has_community(self, cid):
+        """
+        Returns True when there is a community CID.
+        """
+        return cid in self._communities
+
+    def get_community(self, cid, load=False, auto_load=True):
+        """
+        Returns a community by its community id.
+
+        The community id, or cid, is the binary representation of the public key of the master
+        member for the community.
+
+        When the community is available but not currently loaded it will be automatically loaded
+        when (a) the load parameter is True or (b) the auto_load parameter is True and the auto_load
+        flag for this community is True (this flag is set in the database).
+
+        @param cid: The community identifier.
+        @type cid: string, of any size
+
+        @param load: When True, will load the community when available and not yet loaded.
+        @type load: bool
+
+        @param auto_load: When True, will load the community when available, the auto_load flag is
+        True, and not yet loaded.
+        @type auto_load: bool
+
+        @warning: It is possible, however unlikely, that multiple communities will have the same
+        cid.  This is currently not handled.
+        """
+        assert isinstance(cid, str)
+        assert isinstance(load, bool), type(load)
+        assert isinstance(auto_load, bool)
+
+        try:
+            return self._communities[cid]
+
+        except KeyError:
+            if load or auto_load:
+                try:
+                    # have we joined this community
+                    classification, auto_load_flag, master_public_key = self._database.execute(u"SELECT community.classification, community.auto_load, member.public_key FROM community JOIN member ON member.id = community.master WHERE mid = ?",
+                                                                                               (buffer(cid),)).next()
+
+                except StopIteration:
+                    pass
+
+                else:
+                    if load or (auto_load and auto_load_flag):
+
+                        if classification in self._auto_load_communities:
+                            master = self.get_member(str(master_public_key)) if master_public_key else self.get_temporary_member_from_id(cid)
+                            cls, args, kargs = self._auto_load_communities[classification]
+                            community = cls.load_community(self, master, *args, **kargs)
+                            assert master.mid in self._communities
+                            return community
+
+                        else:
+                            logger.warning("unable to auto load %s: undefined classification [%s]", cid.encode("HEX"), classification)
+
+                    else:
+                        logger.debug("not allowed to load [%s]", classification)
+
+        raise KeyError(cid)
+
+    def get_communities(self):
+        """
+        Returns a list with all known Community instances.
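+
+        A hedged sketch:
+
+            for community in dispersy.get_communities():
+                print community.cid.encode("HEX"), community.get_classification()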
+ """ + return self._communities.values() + + def get_message(self, community, member, global_time): + """ + Returns a Member.Implementation instance uniquely identified by its community, member, and + global_time. + + Returns None if this message is not in the local database. + """ + if __debug__: + from .community import Community + assert isinstance(community, Community) + assert isinstance(member, Member) + assert isinstance(global_time, (int, long)) + try: + packet, = self._database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, member.database_id, global_time)).next() + except StopIteration: + return None + else: + return self.convert_packet_to_message(str(packet), community) + + def get_last_message(self, community, member, meta): + if __debug__: + from .community import Community + assert isinstance(community, Community) + assert isinstance(member, Member) + assert isinstance(meta, Message) + try: + packet, = self._database.execute(u"SELECT packet FROM sync WHERE member = ? AND meta_message = ? ORDER BY global_time DESC LIMIT 1", + (member.database_id, meta.database_id)).next() + except StopIteration: + return None + else: + return self.convert_packet_to_message(str(packet), community) + + def wan_address_unvote(self, voter): + """ + Removes and returns one vote made by VOTER. + """ + assert isinstance(voter, Candidate) + for vote, voters in self._wan_address_votes.iteritems(): + if voter.sock_addr in voters: + logger.debug("removing vote for %s made by %s", vote, voter) + voters.remove(voter.sock_addr) + if len(voters) == 0: + del self._wan_address_votes[vote] + return vote + + def wan_address_vote(self, address, voter): + """ + Add one vote and possibly re-determine our wan address. + + Our wan address is determined by majority voting. Each time when we receive a message + that contains anothers opinion about our wan address, we take this into account. The + address with the most votes wins. + + Usually these votes are received through dispersy-candidate-request and + dispersy-candidate-response messages. + + @param address: The wan address that the voter believes us to have. + @type address: (str, int) + + @param voter: The voter candidate. + @type voter: Candidate + """ + assert isinstance(address, tuple) + assert len(address) == 2 + assert isinstance(address[0], str) + assert isinstance(address[1], int) + assert isinstance(voter, Candidate), type(voter) + if self._wan_address[0] in (voter.wan_address[0], voter.sock_addr[0]): + logger.debug("ignoring vote from candidate on the same LAN") + return + + if not self.is_valid_address(address): + logger.debug("got invalid external vote from %s received %s:%s", voter, address[0], address[1]) + return + + if __debug__: + debug_previous_connection_type = self._connection_type + + # undo previous vote + self.wan_address_unvote(voter) + + # do vote + votes = self._wan_address_votes + if not address in votes: + votes[address] = set() + votes[address].add(voter.sock_addr) + + # change when new vote count equal or higher than old address vote count + if self._wan_address != address and len(votes[address]) >= len(votes.get(self._wan_address, ())): + if len(votes) > 1: + logger.debug("not updating WAN address, suspect symmetric NAT",) + self._connection_type = u"symmetric-NAT" + + else: + # it is possible that, for some time after the WAN address changes, we will believe + # that the connection type is symmetric NAT. 
+
+    def wan_address_vote(self, address, voter):
+        """
+        Add one vote and possibly re-determine our WAN address.
+
+        Our WAN address is determined by majority voting.  Each time we receive a message
+        that contains another's opinion about our WAN address, we take this into account.  The
+        address with the most votes wins.
+
+        Usually these votes are received through dispersy-introduction-request and
+        dispersy-introduction-response messages.
+
+        @param address: The WAN address that the voter believes us to have.
+        @type address: (str, int)
+
+        @param voter: The voter candidate.
+        @type voter: Candidate
+        """
+        assert isinstance(address, tuple)
+        assert len(address) == 2
+        assert isinstance(address[0], str)
+        assert isinstance(address[1], int)
+        assert isinstance(voter, Candidate), type(voter)
+        if self._wan_address[0] in (voter.wan_address[0], voter.sock_addr[0]):
+            logger.debug("ignoring vote from candidate on the same LAN")
+            return
+
+        if not self.is_valid_address(address):
+            logger.debug("got invalid external vote from %s received %s:%s", voter, address[0], address[1])
+            return
+
+        if __debug__:
+            debug_previous_connection_type = self._connection_type
+
+        # undo previous vote
+        self.wan_address_unvote(voter)
+
+        # do vote
+        votes = self._wan_address_votes
+        if not address in votes:
+            votes[address] = set()
+        votes[address].add(voter.sock_addr)
+
+        # change when the new vote count is equal to or higher than the old address' vote count
+        if self._wan_address != address and len(votes[address]) >= len(votes.get(self._wan_address, ())):
+            if len(votes) > 1:
+                logger.debug("not updating WAN address, suspect symmetric NAT")
+                self._connection_type = u"symmetric-NAT"
+
+            else:
+                # it is possible that, for some time after the WAN address changes, we will believe
+                # that the connection type is symmetric NAT.  once votes have been pruned we may
+                # find that we are no longer behind a symmetric-NAT
+                if self._connection_type == u"symmetric-NAT":
+                    self._connection_type = u"unknown"
+
+                logger.warn("update WAN address %s:%d -> %s:%d", self._wan_address[0], self._wan_address[1], address[0], address[1])
+                self._wan_address = address
+
+                if not self.is_valid_address(self._lan_address):
+                    logger.warn("update LAN address %s:%d -> %s:%d", self._lan_address[0], self._lan_address[1], self._wan_address[0], self._lan_address[1])
+                    self._lan_address = (self._wan_address[0], self._lan_address[1])
+
+                # our address may not be a bootstrap address
+                if self._wan_address in self._bootstrap_candidates:
+                    del self._bootstrap_candidates[self._wan_address]
+
+                # our address may not be a candidate
+                for community in self._communities.itervalues():
+                    community.candidates.pop(self._wan_address, None)
+
+                    for candidate in [candidate for candidate in community.candidates.itervalues() if candidate.wan_address == self._wan_address]:
+                        community.candidates.pop(candidate.sock_addr, None)
+
+        if self._connection_type == u"unknown" and self._lan_address == self._wan_address:
+            self._connection_type = u"public"
+
+        if __debug__:
+            if not debug_previous_connection_type == self._connection_type:
+                logger.warn("update connection type %s -> %s", debug_previous_connection_type, self._connection_type)
+
+    def _is_duplicate_sync_message(self, message):
+        """
+        Returns True when this message is a duplicate, otherwise the message must be processed.
+
+        === Problem: duplicate message ===
+        The simplest reason to reject an incoming message is when we already have it, based on the
+        community, member, and global time.  No further action is performed.
+
+        === Problem: duplicate message, but that message is undone ===
+        When a message is undone it should no longer be synced.  Hence, someone who syncs an undone
+        message must not be aware of the undo message yet.  We will drop this message, but we will
+        also send the appropriate undo message as a response.
+
+        === Problem: same payload, different signature ===
+        There is a possibility that a message is created that contains exactly the same payload
+        but has a different signature.  This can occur when a message is created, forwarded, and
+        for some reason the database is reset.  The next time that the client starts the exact same
+        message may be generated.  However, because EC signatures contain a random element the
+        signature will be different.
+
+        This results in continuous transfers because the bloom filters identify the two messages
+        as different while the community/member/global_time triplet is the same.
+
+        To solve this, we will silently replace one message with the other.  We choose to keep
+        the message with the highest binary value while destroying the one with the lower binary
+        value.
+
+        === Optimization: temporarily modify the bloom filter ===
+        Note: currently we generate bloom filters on the fly, therefore, we can not use this
+        optimization.
+
+        To further optimize, we will add both messages to our bloom filter whenever we detect
+        this problem.  This will ensure that we do not needlessly receive the 'invalid' message
+        until the bloom filter is synced with the database again.
+        """
+        community = message.community
+        # fetch the duplicate binary packet from the database
+        try:
+            have_packet, undone = self._database.execute(u"SELECT packet, undone FROM sync WHERE community = ? AND member = ? AND global_time = ?",
+                                                         (community.database_id, message.authentication.member.database_id, message.distribution.global_time)).next()
+        except StopIteration:
+            logger.debug("this message is not a duplicate")
+            return False
+
+        else:
+            have_packet = str(have_packet)
+            if have_packet == message.packet:
+                # exact binary duplicate, do NOT process the message
+                logger.warning("received identical message %s %d@%d from %s %s",
+                               message.name,
+                               message.authentication.member.database_id,
+                               message.distribution.global_time,
+                               message.candidate,
+                               "(this message is undone)" if undone else "")
+
+                if undone:
+                    try:
+                        proof, = self._database.execute(u"SELECT packet FROM sync WHERE id = ?", (undone,)).next()
+                    except StopIteration:
+                        pass
+                    else:
+                        self._statistics.dict_inc(self._statistics.outgoing, u"-duplicate-undo-")
+                        self._endpoint.send([message.candidate], [str(proof)])
+
+            else:
+                signature_length = message.authentication.member.signature_length
+                if have_packet[:-signature_length] == message.packet[:-signature_length]:
+                    # the message payload is binary unique (only the signature is different)
+                    logger.warning("received identical message %s %d@%d with different signature from %s %s",
+                                   message.name,
+                                   message.authentication.member.database_id,
+                                   message.distribution.global_time,
+                                   message.candidate,
+                                   "(this message is undone)" if undone else "")
+
+                    if have_packet < message.packet:
+                        # replace our current message with the other one
+                        self._database.execute(u"UPDATE sync SET packet = ? WHERE community = ? AND member = ? AND global_time = ?",
+                                               (buffer(message.packet), community.database_id, message.authentication.member.database_id, message.distribution.global_time))
+
+                        # notify that global times have changed
+                        # community.update_sync_range(message.meta, [message.distribution.global_time])
+
+                else:
+                    logger.warning("received message with duplicate community/member/global-time triplet from %s.  possibly malicious behaviour", message.candidate)
+
+            # this message is a duplicate
+            return True
+
+    def _check_full_sync_distribution_batch(self, messages):
+        """
+        Ensure that we do not yet have the messages and that, if sequence numbers are enabled, we
+        are not missing any previous messages.
+
+        This method is called when a batch of messages with the FullSyncDistribution policy is
+        received.  Duplicate messages will yield DropMessage and, if enable_sequence_number is
+        True, missing messages will yield the DelayMessageBySequence exception.
+
+        @param messages: The messages that are to be checked.
+        @type messages: [Message.Implementation]
+
+        @return: A generator with messages, DropMessage, or DelayMessageBySequence instances
+        @rtype: [Message.Implementation|DropMessage|DelayMessageBySequence]
+        """
+        assert isinstance(messages, list)
+        assert len(messages) > 0
+        assert all(isinstance(message, Message.Implementation) for message in messages)
+        assert all(message.community == messages[0].community for message in messages)
+        assert all(message.meta == messages[0].meta for message in messages)
+
+        # a message is considered unique when (creator, global-time),
+        # i.e. (authentication.member.database_id, distribution.global_time), is unique.
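+        # The sequence rule applied below, sketched: with SEQ messages from a member
+        # already in the database, an incoming sequence number N is a duplicate when
+        # N <= SEQ, is processed when N == SEQ + 1, and is otherwise delayed while
+        # the gap [SEQ + 1, N - 1] is requested via DelayMessageBySequence.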
+        unique = set()
+        execute = self._database.execute
+        enable_sequence_number = messages[0].meta.distribution.enable_sequence_number
+
+        # sort the messages by their (1) global_time and (2) binary packet
+        messages = sorted(messages, lambda a, b: cmp(a.distribution.global_time, b.distribution.global_time) or cmp(a.packet, b.packet))
+
+        # refuse messages where the global time is unreasonably high
+        acceptable_global_time = messages[0].community.acceptable_global_time
+
+        if enable_sequence_number:
+            # obtain the highest sequence_number from the database
+            highest = {}
+            for message in messages:
+                if not message.authentication.member.database_id in highest:
+                    last_global_time, seq = execute(u"SELECT MAX(global_time), COUNT(*) FROM sync WHERE member = ? AND meta_message = ?",
+                                                    (message.authentication.member.database_id, message.database_id)).next()
+                    highest[message.authentication.member.database_id] = (last_global_time or 0, seq)
+
+            # all messages must follow the sequence_number order
+            for message in messages:
+                if message.distribution.global_time > acceptable_global_time:
+                    yield DropMessage(message, "global time is not within acceptable range (%d, we accept %d)" % (message.distribution.global_time, acceptable_global_time))
+                    continue
+
+                if not message.distribution.pruning.is_active():
+                    yield DropMessage(message, "message has been pruned")
+                    continue
+
+                key = (message.authentication.member.database_id, message.distribution.global_time)
+                if key in unique:
+                    yield DropMessage(message, "duplicate message by member^global_time (1)")
+                    continue
+
+                unique.add(key)
+                last_global_time, seq = highest[message.authentication.member.database_id]
+
+                if seq >= message.distribution.sequence_number:
+                    # we already have this message (drop)
+
+                    # fetch the corresponding packet from the database (it should be binary identical)
+                    global_time, packet = execute(u"SELECT global_time, packet FROM sync WHERE member = ? AND meta_message = ? ORDER BY global_time, packet LIMIT 1 OFFSET ?",
+                                                  (message.authentication.member.database_id, message.database_id, message.distribution.sequence_number - 1)).next()
+                    packet = str(packet)
+                    if message.packet == packet:
+                        yield DropMessage(message, "duplicate message by binary packet")
+                        continue
+
+                    else:
+                        # we already have a message with this sequence number, but apparently both
+                        # are signed/valid.  we need to discard one of them
+                        if (global_time, packet) < (message.distribution.global_time, message.packet):
+                            # we keep PACKET (i.e. the message that we currently have in our database)
+                            yield DropMessage(message, "duplicate message by sequence number (1)")
+                            continue
+
+                        else:
+                            # TODO we should undo the messages that we are about to remove (when applicable)
+                            execute(u"DELETE FROM sync WHERE member = ? AND meta_message = ? AND global_time >= ?",
+                                    (message.authentication.member.database_id, message.database_id, global_time))
+                            logger.debug("removed %d entries from sync because the member created multiple sequences", self._database.changes)
+
+                            # by deleting messages we changed SEQ and the HIGHEST cache
+                            last_global_time, seq = execute(u"SELECT MAX(global_time), COUNT(*) FROM sync WHERE member = ? AND meta_message = ?",
+                                                            (message.authentication.member.database_id, message.database_id)).next()
+                            highest[message.authentication.member.database_id] = (last_global_time or 0, seq)
+                            # we can allow MESSAGE to be processed
+
+                if seq + 1 != message.distribution.sequence_number:
+                    # we do not have the previous message (delay and request)
+                    yield DelayMessageBySequence(message, seq + 1, message.distribution.sequence_number - 1)
+                    continue
+
+                # we have the previous message, check for duplicates based on community,
+                # member, and global_time
+                if self._is_duplicate_sync_message(message):
+                    # we have the previous message (drop)
+                    yield DropMessage(message, "duplicate message by global_time (1)")
+                    continue
+
+                # ensure that MESSAGE.distribution.global_time > LAST_GLOBAL_TIME
+                if last_global_time and message.distribution.global_time <= last_global_time:
+                    logger.debug("last_global_time: %d  message @%d", last_global_time, message.distribution.global_time)
+                    yield DropMessage(message, "higher sequence number with lower global time than most recent message")
+                    continue
+
+                # we accept this message
+                highest[message.authentication.member.database_id] = (message.distribution.global_time, seq + 1)
+                yield message
+
+        else:
+            for message in messages:
+                if message.distribution.global_time > acceptable_global_time:
+                    yield DropMessage(message, "global time is not within acceptable range")
+                    continue
+
+                if not message.distribution.pruning.is_active():
+                    yield DropMessage(message, "message has been pruned")
+                    continue
+
+                key = (message.authentication.member.database_id, message.distribution.global_time)
+                if key in unique:
+                    yield DropMessage(message, "duplicate message by member^global_time (2)")
+                    continue
+
+                unique.add(key)
+
+                # check for duplicates based on community, member, and global_time
+                if self._is_duplicate_sync_message(message):
+                    # we have the previous message (drop)
+                    yield DropMessage(message, "duplicate message by global_time (2)")
+                    continue
+
+                # we accept this message
+                yield message
+
+    def _check_last_sync_distribution_batch(self, messages):
+        """
+        Check that the messages do not violate any database consistency rules.
+
+        This method is called when a batch of messages with the LastSyncDistribution policy is
+        received.  An iterator will be returned where each element is either: DropMessage (for
+        duplicate and old messages), DelayMessage (for messages that require something before they
+        can be processed), or Message.Implementation when the message does not violate any rules.
+
+        The rules:
+
+        - The combination community, member, global_time must be unique.
+
+        - When the MemberAuthentication policy is used: the message owner may not have more than
+          history_size messages in the database at any one time.  Hence, if this limit is reached
+          and the new message is older than the oldest message that is already available, it is
+          dropped.
+
+        - When the DoubleMemberAuthentication policy is used: the members that signed the message
+          may not have more than history_size messages in the database at any one time.  Hence, if
+          this limit is reached and the new message is older than the oldest message that is
+          already available, it is dropped.  Note that the signature order is not important.
+
+        @param messages: The messages that are to be checked.
+        @type messages: [Message.Implementation]
+
+        @return: A generator with Message.Implementation or DropMessage instances
+        @rtype: [Message.Implementation|DropMessage]
+        """
+        assert isinstance(messages, list)
+        assert len(messages) > 0
+        assert all(isinstance(message, Message.Implementation) for message in messages)
+        assert all(message.community == messages[0].community for message in messages)
+        assert all(message.meta == messages[0].meta for message in messages)
+        assert all(isinstance(message.authentication, (MemberAuthentication.Implementation, DoubleMemberAuthentication.Implementation)) for message in messages)
+
+        def check_member_and_global_time(unique, times, message):
+            """
+            The member + global_time combination must always be unique in the database
+            """
+            assert isinstance(unique, set)
+            assert isinstance(times, dict)
+            assert isinstance(message, Message.Implementation)
+            assert isinstance(message.distribution, LastSyncDistribution.Implementation)
+
+            key = (message.authentication.member.database_id, message.distribution.global_time)
+            if key in unique:
+                return DropMessage(message, "already processed message by member^global_time")
+
+            else:
+                unique.add(key)
+
+                if not message.authentication.member.database_id in times:
+                    times[message.authentication.member.database_id] = [global_time for global_time, in self._database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?",
+                                                                                                                               (message.community.database_id, message.authentication.member.database_id, message.database_id))]
+                    assert len(times[message.authentication.member.database_id]) <= message.distribution.history_size, [message.packet_id, message.distribution.history_size, times[message.authentication.member.database_id]]
+                tim = times[message.authentication.member.database_id]
+
+                if message.distribution.global_time in tim and self._is_duplicate_sync_message(message):
+                    return DropMessage(message, "duplicate message by member^global_time (3)")
+
+                elif len(tim) >= message.distribution.history_size and min(tim) > message.distribution.global_time:
+                    # we have newer messages (drop)
+
+                    # if the history_size is one, we can send that one message back because
+                    # apparently the sender does not have this message yet
+                    if message.distribution.history_size == 1:
+                        try:
+                            packet, = self._database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? ORDER BY global_time DESC LIMIT 1",
+                                                             (message.community.database_id, message.authentication.member.database_id)).next()
+                        except StopIteration:
+                            # TODO can still fail when packet is in one of the received messages
+                            # from this batch.
+                            pass
+                        else:
+                            self._statistics.dict_inc(self._statistics.outgoing, u"-sequence-")
+                            self._endpoint.send([message.candidate], [str(packet)])
+
+                    return DropMessage(message, "old message by member^global_time")
+
+                else:
+                    # we accept this message
+                    tim.append(message.distribution.global_time)
+                    return message
+
+        def check_double_member_and_global_time(unique, times, message):
+            """
+            No other message may exist with this message.authentication.members / global_time
+            combination, regardless of the ordering of the members
+            """
+            assert isinstance(unique, set)
+            assert isinstance(times, dict)
+            assert isinstance(message, Message.Implementation)
+            assert isinstance(message.authentication, DoubleMemberAuthentication.Implementation)
+
+            key = (message.authentication.member.database_id, message.distribution.global_time)
+            if key in unique:
+                logger.debug("drop %s %d@%d (in unique)", message.name, message.authentication.member.database_id, message.distribution.global_time)
+                return DropMessage(message, "already processed message by member^global_time")
+
+            else:
+                unique.add(key)
+
+                members = tuple(sorted(member.database_id for member in message.authentication.members))
+                key = members + (message.distribution.global_time,)
+                if key in unique:
+                    logger.debug("drop %s %s@%d (in unique)", message.name, members, message.distribution.global_time)
+                    return DropMessage(message, "already processed message by members^global_time")
+
+                else:
+                    unique.add(key)
+
+                    if self._is_duplicate_sync_message(message):
+                        # we have the previous message (drop)
+                        logger.debug("drop %s %s@%d (_is_duplicate_sync_message)", message.name, members, message.distribution.global_time)
+                        return DropMessage(message, "duplicate message by member^global_time (4)")
+
+                    if not members in times:
+                        # the next query obtains a list with all global times that we have in the
+                        # database for all message.meta messages that were signed by
+                        # message.authentication.members where the order of signing is not taken
+                        # into account.
+                        times[members] = dict((global_time, (packet_id, str(packet)))
+                                              for global_time, packet_id, packet
+                                              in self._database.execute(u"""
+SELECT sync.global_time, sync.id, sync.packet
+FROM sync
+JOIN double_signed_sync ON double_signed_sync.sync = sync.id
+WHERE sync.meta_message = ? AND double_signed_sync.member1 = ? AND double_signed_sync.member2 = ?
+""", + (message.database_id,) + members)) + assert len(times[members]) <= message.distribution.history_size, [len(times[members]), message.distribution.history_size] + tim = times[members] + + if message.distribution.global_time in tim: + packet_id, have_packet = tim[message.distribution.global_time] + + if message.packet == have_packet: + # exact binary duplicate, do NOT process the message + logger.debug("received identical message %s %s@%d from %s", message.name, members, message.distribution.global_time, message.candidate) + return DropMessage(message, "duplicate message by binary packet (1)") + + else: + signature_length = sum(member.signature_length for member in message.authentication.members) + member_authentication_begin = 23 # version, version, community-id, message-type + member_authentication_end = member_authentication_begin + 20 * len(message.authentication.members) + if (have_packet[:member_authentication_begin] == message.packet[:member_authentication_begin] and + have_packet[member_authentication_end:signature_length] == message.packet[member_authentication_end:signature_length]): + # the message payload is binary unique (only the member order or signatures are different) + logger.debug("received identical message with different member-order or signatures %s %s@%d from %s", message.name, members, message.distribution.global_time, message.candidate) + + if have_packet < message.packet: + # replace our current message with the other one + self._database.execute(u"UPDATE sync SET member = ?, packet = ? WHERE id = ?", + (message.authentication.member.database_id, buffer(message.packet), packet_id)) + + return DropMessage(message, "replaced existing packet with other packet with the same payload") + + return DropMessage(message, "not replacing existing packet with other packet with the same payload") + + else: + logger.warning("received message with duplicate community/members/global-time triplet from %s. 
+                                return DropMessage(message, "duplicate message by binary packet (2)")
+
+                    elif len(tim) >= message.distribution.history_size and min(tim) > message.distribution.global_time:
+                        # we have newer messages (drop)
+
+                        # if the history_size is one, we can send that one message back because
+                        # apparently the sender does not have this message yet
+                        if message.distribution.history_size == 1:
+                            packet_id, have_packet = tim.values()[0]
+                            self._statistics.dict_inc(self._statistics.outgoing, u"-sequence-")
+                            self._endpoint.send([message.candidate], [have_packet])
+
+                        logger.debug("drop %s %s@%d (older than %s)", message.name, members, message.distribution.global_time, min(tim))
+                        return DropMessage(message, "old message by members^global_time")
+
+                    else:
+                        # we accept this message
+                        logger.debug("accept %s %s@%d", message.name, members, message.distribution.global_time)
+                        tim[message.distribution.global_time] = (0, message.packet)
+                        return message
+
+        # meta message
+        meta = messages[0].meta
+
+        # sort the messages by their (1) global_time and (2) binary packet
+        messages = sorted(messages, lambda a, b: cmp(a.distribution.global_time, b.distribution.global_time) or cmp(a.packet, b.packet))
+
+        # refuse messages where the global time is unreasonably high
+        acceptable_global_time = meta.community.acceptable_global_time
+        messages = [message if message.distribution.global_time <= acceptable_global_time else DropMessage(message, "global time is not within acceptable range") for message in messages]
+
+        # refuse messages that have been pruned (or soon will be)
+        messages = [DropMessage(message, "message has been pruned") if isinstance(message, Message.Implementation) and not message.distribution.pruning.is_active() else message for message in messages]
+
+        if isinstance(meta.authentication, MemberAuthentication):
+            # a message is considered unique when (creator, global-time), i.e. (authentication.member,
+            # distribution.global_time), is unique.  UNIQUE is used in the check_member_and_global_time
+            # function
+            unique = set()
+            times = {}
+            messages = [message if isinstance(message, DropMessage) else check_member_and_global_time(unique, times, message) for message in messages]
+
+        # instead of storing HISTORY_SIZE messages for each authentication.member, we will store
+        # HISTORY_SIZE messages for each combination of authentication.members.
+        else:
+            assert isinstance(meta.authentication, DoubleMemberAuthentication)
+            unique = set()
+            times = {}
+            messages = [message if isinstance(message, DropMessage) else check_double_member_and_global_time(unique, times, message) for message in messages]
+
+        return messages
+
+    def _check_direct_distribution_batch(self, messages):
+        """
+        Returns the messages in the correct processing order.
+
+        This method is called when a message with the DirectDistribution policy is received.  This
+        message is not stored and hence we will not be able to see if we have already received this
+        message.
+
+        Receiving the same DirectDistribution multiple times indicates that the sender -wanted- to
+        send this message multiple times.
+
+        @param messages: The messages that are to be returned in the correct processing order.
+        @type messages: [Message.Implementation]
+
+        @return: All messages that are not dropped, i.e. all messages
+        @rtype: [Message.Implementation]
+        """
+        # sort the messages by their (1) global_time and (2) binary packet
+        messages = sorted(messages, lambda a, b: cmp(a.distribution.global_time, b.distribution.global_time) or cmp(a.packet, b.packet))
+
+        # direct messages tell us what other people believe is the current global_time
+        community = messages[0].community
+        for message in messages:
+            if isinstance(message.candidate, WalkCandidate):
+                message.candidate.global_time = message.distribution.global_time
+
+        return messages
+
+    def load_message(self, community, member, global_time, verify=False):
+        """
+        Returns the message identified by community, member, and global_time.
+
+        Each message is uniquely identified by the community that it is created in, the member it
+        is created by and the global time when it is created.  Using these three parameters we
+        return the associated Message.Implementation instance.  None is returned when we do not
+        have this message or it can not be decoded.
+        """
+        try:
+            packet_id, packet = self._database.execute(u"SELECT id, packet FROM sync WHERE community = ? AND member = ? AND global_time = ? LIMIT 1",
+                                                       (community.database_id, member.database_id, global_time)).next()
+        except StopIteration:
+            return None
+
+        # find associated conversion
+        try:
+            conversion = community.get_conversion_for_packet(packet)
+        except KeyError:
+            logger.warning("unable to convert a %d byte packet (unknown conversion)", len(packet))
+            return None
+
+        # attempt conversion
+        try:
+            message = conversion.decode_message(LoopbackCandidate(), packet, verify)
+
+        except (DropPacket, DelayPacket) as exception:
+            logger.warning("unable to convert a %d byte packet (%s)", len(packet), exception)
+            return None
+
+        message.packet_id = packet_id
+        return message
+
+    def convert_packet_to_meta_message(self, packet, community=None, load=True, auto_load=True):
+        """
+        Returns the Message representing the packet or None when no conversion is possible.
+        """
+        if __debug__:
+            from .community import Community
+        assert isinstance(packet, str)
+        assert isinstance(community, (type(None), Community))
+        assert isinstance(load, bool)
+        assert isinstance(auto_load, bool)
+
+        # find associated community
+        if not community:
+            try:
+                community = self.get_community(packet[2:22], load, auto_load)
+            except KeyError:
+                logger.warning("unable to convert a %d byte packet (unknown community)", len(packet))
+                return None
+
+        # find associated conversion
+        try:
+            conversion = community.get_conversion_for_packet(packet)
+        except KeyError:
+            logger.warning("unable to convert a %d byte packet (unknown conversion)", len(packet))
+            return None
+
+        try:
+            return conversion.decode_meta_message(packet)
+
+        except (DropPacket, DelayPacket) as exception:
+            logger.warning("unable to convert a %d byte packet (%s)", len(packet), exception)
+            return None
+
+    def convert_packet_to_message(self, packet, community=None, load=True, auto_load=True, candidate=None, verify=True):
+        """
+        Returns the Message.Implementation representing the packet or None when no conversion is
+        possible.
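+
+        A hedged sketch (PACKET is a raw binary string as received from the wire;
+        bytes [2:22] select the community, as in the code below):
+
+            message = dispersy.convert_packet_to_message(packet, verify=False)
+            if message is None:
+                pass  # unknown community, unknown conversion, or decode failure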
+ """ + if __debug__: + from .community import Community + assert isinstance(packet, str), type(packet) + assert community is None or isinstance(community, Community), type(community) + assert isinstance(load, bool), type(load) + assert isinstance(auto_load, bool), type(auto_load) + assert candidate is None or isinstance(candidate, Candidate), type(candidate) + + # find associated community + if not community: + try: + community = self.get_community(packet[2:22], load, auto_load) + except KeyError: + logger.warning("unable to convert a %d byte packet (unknown community)", len(packet)) + return None + + # find associated conversion + try: + conversion = community.get_conversion_for_packet(packet) + except KeyError: + logger.warning("unable to convert a %d byte packet (unknown conversion)", len(packet)) + return None + + try: + return conversion.decode_message(LoopbackCandidate() if candidate is None else candidate, packet, verify) + + except (DropPacket, DelayPacket) as exception: + logger.warning("unable to convert a %d byte packet (%s)", len(packet), exception) + return None + + def convert_packets_to_messages(self, packets, community=None, load=True, auto_load=True, candidate=None, verify=True): + """ + Returns a list with messages representing each packet or None when no conversion is + possible. + """ + assert isinstance(packets, (list, tuple)), type(packets) + assert all(isinstance(packet, str) for packet in packets), [type(packet) for packet in packets] + return [self.convert_packet_to_message(packet, community, load, auto_load, candidate, verify) for packet in packets] + + def on_incoming_packets(self, packets, cache=True, timestamp=0.0): + """ + Process incoming UDP packets. + + This method is called to process one or more UDP packets. This occurs when new packets are + received, to attempt to process previously delayed packets, or when a member explicitly + creates a packet to process. The last option should only occur for debugging purposes. + + All the received packets are processed in batches, a batch consists of all packets for the + same community and the same meta message. Batches are formed with the following steps: + + 1. The associated community is retrieved. Failure results in packet drop. + + 2. The associated conversion is retrieved. Failure results in packet drop, this probably + indicates that we are running outdated software. + + 3. The associated meta message is retrieved. Failure results in a packet drop, this + probably indicates that we are running outdated software. + + All packets are grouped by their meta message. All batches are scheduled based on the + meta.batch.max_window and meta.batch.priority. Finally, the candidate table is updated in + regards to the incoming source addresses. + + @param packets: The sequence of packets. 
+ @type packets: [(address, packet)] + """ + assert isinstance(packets, (tuple, list)), packets + assert len(packets) > 0, packets + assert all(isinstance(packet, tuple) for packet in packets), packets + assert all(len(packet) == 2 for packet in packets), packets + assert all(isinstance(packet[0], Candidate) for packet in packets), packets + assert all(isinstance(packet[1], str) for packet in packets), packets + assert isinstance(cache, bool), cache + assert isinstance(timestamp, float), timestamp + + self._statistics.received_count += len(packets) + + sort_key = lambda tup: (tup[0].batch.priority, tup[0]) # meta, address, packet, conversion + groupby_key = lambda tup: tup[0] # meta, address, packet, conversion + for meta, iterator in groupby(sorted(self._convert_packets_into_batch(packets), key=sort_key), key=groupby_key): + batch = [(meta.community.candidates.get(candidate.sock_addr) or self._bootstrap_candidates.get(candidate.sock_addr) or candidate, packet, conversion) + for _, candidate, packet, conversion + in iterator] + + # schedule batch processing (taking into account the message priority) + if meta.batch.enabled and cache: + if meta in self._batch_cache: + task_identifier, current_timestamp, current_batch = self._batch_cache[meta] + current_batch.extend(batch) + logger.debug("adding %d %s messages to existing cache", len(batch), meta.name) + + else: + current_timestamp = timestamp + current_batch = batch + task_identifier = self._callback.register(self._on_batch_cache_timeout, (meta, current_timestamp, current_batch), delay=meta.batch.max_window, priority=meta.batch.priority) + self._batch_cache[meta] = (task_identifier, current_timestamp, current_batch) + logger.debug("new cache with %d %s messages (batch window: %d)", len(batch), meta.name, meta.batch.max_window) + + while len(current_batch) > meta.batch.max_size: + # batch exceeds maximum size, schedule first max_size immediately + batch, current_batch = current_batch[:meta.batch.max_size], current_batch[meta.batch.max_size:] + logger.debug("schedule processing %d %s messages immediately (exceeded batch size)", len(batch), meta.name) + self._callback.register(self._on_batch_cache_timeout, (meta, current_timestamp, batch), priority=meta.batch.priority) + + # we can not use callback.replace_register because + # it would not re-schedule the task, i.e. not at + # the end of the task queue + self._callback.unregister(task_identifier) + task_identifier = self._callback.register(self._on_batch_cache_timeout, (meta, timestamp, current_batch), delay=meta.batch.max_window, priority=meta.batch.priority) + self._batch_cache[meta] = (task_identifier, timestamp, current_batch) + + else: + # ignore cache, process batch immediately + logger.debug("processing %d %s messages immediately", len(batch), meta.name) + self._on_batch_cache(meta, batch) + + def _on_batch_cache_timeout(self, meta, timestamp, batch): + """ + Start processing a batch of messages once the cache timeout occurs. + + This method is called meta.batch.max_window seconds after the first message in this batch + arrived. All messages in this batch have been 'cached' together in self._batch_cache[meta]. + Hopefully the delay caused the batch to collect as many messages as possible. 
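+ + Illustrative timeline (editor's sketch, assuming meta.batch.max_window is 5.0 seconds): + > t=0.0  first message arrives; a batch is created and this timeout is scheduled for t=5.0 + > t=3.2  a second message arrives and is appended to the cached batch + > t=5.0  _on_batch_cache_timeout(meta, 0.0, batch) processes both messages in one go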
+ """ + assert isinstance(meta, Message) + assert isinstance(timestamp, float) + assert isinstance(batch, list) + assert len(batch) > 0 + logger.debug("processing %sx %s batched messages", len(batch), meta.name) + + if meta in self._batch_cache and id(self._batch_cache[meta][2]) == id(batch): + logger.debug("pop batch cache for %sx %s", len(batch), meta.name) + self._batch_cache.pop(meta) + + if not self._communities.get(meta.community.cid, None) == meta.community: + logger.warning("dropped %sx %s packets (community no longer loaded)", len(batch), meta.name) + self._statistics.dict_inc(self._statistics.drop, "on_batch_cache_timeout: community no longer loaded", len(batch)) + self._statistics.drop_count += len(batch) + return 0 + + if meta.batch.enabled and timestamp > 0.0 and meta.batch.max_age + timestamp <= time(): + logger.warning("dropped %sx %s packets (can not process these messages on time)", len(batch), meta.name) + self._statistics.dict_inc(self._statistics.drop, "on_batch_cache_timeout: can not process these messages on time", len(batch)) + self._statistics.drop_count += len(batch) + return 0 + + return self._on_batch_cache(meta, batch) + + def _on_batch_cache(self, meta, batch): + """ + Start processing a batch of messages. + + The batch is processed in the following steps: + + 1. All duplicate binary packets are removed. + + 2. All binary packets are converted into Message.Implementation instances. Some packets + are dropped or delayed at this stage. + + 3. All remaining messages are passed to on_message_batch. + """ + # convert binary packets into Message.Implementation instances + messages = list(self._convert_batch_into_messages(batch)) + assert all(isinstance(message, Message.Implementation) for message in messages), "_convert_batch_into_messages must return only Message.Implementation instances" + assert all(message.meta == meta for message in messages), "All Message.Implementation instances must be in the same batch" + logger.debug("%d %s messages after conversion", len(messages), meta.name) + + # handle the incoming messages + if messages: + self.on_message_batch(messages) + + def on_messages(self, messages): + batches = dict() + for message in messages: + if not message.meta in batches: + batches[message.meta] = set() + batches[message.meta].add(message) + + for messages in batches.itervalues(): + self.on_message_batch(list(messages)) + + def on_message_batch(self, messages): + """ + Process one batch of messages. + + This method is called to process one or more Message.Implementation instances that all have + the same meta message. This occurs when new packets are received, to attempt to process + previously delayed messages, or when a member explicitly creates a message to process. The + last option should only occur for debugging purposes. + + The messages are processed with the following steps: + + 1. Messages created by a member in our blacklist are droped. + + 2. Messages that are old or duplicate, based on their distribution policy, are dropped. + + 3. The meta.check_callback(...) is used to allow messages to be dropped or delayed. + + 4. Messages are stored, based on their distribution policy. + + 5. The meta.handle_callback(...) is used to process the messages. + + @param packets: The sequence of messages with the same meta message from the same community. 
+ @type messages: [Message.Implementation] + """ + assert isinstance(messages, list) + assert len(messages) > 0 + assert all(isinstance(message, Message.Implementation) for message in messages) + assert all(message.community == messages[0].community for message in messages) + assert all(message.meta == messages[0].meta for message in messages) + + def _filter_fail(message): + if isinstance(message, DelayMessage): + logger.debug("%s delay %s (%s)", message.delayed.candidate, message.delayed, message) + + if message.create_request(): + self._statistics.delay_send += 1 + self._statistics.dict_inc(self._statistics.delay, "on_message_batch:%s" % message.delayed) + self._statistics.delay_count += 1 + return False + + elif isinstance(message, DropMessage): + logger.debug("%s drop: %s (%s)", message.dropped.candidate, message.dropped.name, message) + self._statistics.dict_inc(self._statistics.drop, "on_message_batch:%s" % message) + self._statistics.drop_count += 1 + return False + + else: + return True + + meta = messages[0].meta + debug_count = len(messages) + debug_begin = time() + + # drop all duplicate or old messages + assert type(meta.distribution) in self._check_distribution_batch_map + messages = list(self._check_distribution_batch_map[type(meta.distribution)](messages)) + assert len(messages) > 0 # should return at least one item for each message + assert all(isinstance(message, (Message.Implementation, DropMessage, DelayMessage)) for message in messages) + + # handle/remove DropMessage and DelayMessage instances + messages = [message for message in messages if isinstance(message, Message.Implementation) or _filter_fail(message)] + if not messages: + return 0 + + # check all remaining messages on the community side. may yield Message.Implementation, + # DropMessage, and DelayMessage instances + try: + messages = list(meta.check_callback(messages)) + except: + logger.exception("exception during check_callback for %s", meta.name) + return 0 + assert len(messages) >= 0 # may return zero messages + assert all(isinstance(message, (Message.Implementation, DropMessage, DelayMessage)) for message in messages) + + if len(messages) == 0: + logger.warning("%s yielded zero messages, drops, or delays. This is allowed but likely to be an error.", meta.check_callback) + + # handle/remove DropMessage and DelayMessage instances + messages = [message for message in messages if _filter_fail(message)] + if not messages: + return 0 + + logger.debug("in...
%d %s messages from %s", len(messages), meta.name, " ".join(str(candidate) for candidate in set(message.candidate for message in messages))) + + # store to disk and update locally + if self.store_update_forward(messages, True, True, False): + + self._statistics.dict_inc(self._statistics.success, meta.name, len(messages)) + self._statistics.success_count += len(messages) + + # tell what happened + debug_end = time() + if debug_end - debug_begin > 1.0: + logger.warning("handled %d/%d %.2fs %s messages (with %fs cache window)", len(messages), debug_count, (debug_end - debug_begin), meta.name, meta.batch.max_window) + else: + logger.debug("handled %d/%d %.2fs %s messages (with %fs cache window)", len(messages), debug_count, (debug_end - debug_begin), meta.name, meta.batch.max_window) + + # return the number of messages that were correctly handled (non delay, duplictes, etc) + return len(messages) + + return 0 + + def _convert_packets_into_batch(self, packets): + """ + Convert a list with one or more (candidate, data) tuples into a list with zero or more + (Message, (candidate, packet, conversion)) tuples using a generator. + + # 22/06/11 boudewijn: no longer checks for duplicates. duplicate checking is pointless + # because new duplicates may be introduced because of the caching mechanism. + # + # Duplicate packets are removed. This will result in drops when two we receive the exact same + # binary packet from multiple nodes. While this is usually not a problem, packets are usually + # signed and hence unique, in rare cases this may result in invalid drops. + + Packets from invalid sources are removed. The is_valid_destination_address is used to + determine if the address that the candidate points to is valid. + + Packets associated with an unknown community are removed. Packets from a known community + encoded in an unknown conversion, are also removed. 
+ + The results can be used to easily create a dictionary batch using + > batch = dict(_convert_packets_into_batch(packets)) + """ + assert isinstance(packets, (tuple, list)) + assert len(packets) > 0 + assert all(isinstance(packet, tuple) for packet in packets) + assert all(len(packet) == 2 for packet in packets) + assert all(isinstance(packet[0], Candidate) for packet in packets) + assert all(isinstance(packet[1], str) for packet in packets) + + for candidate, packet in packets: + # find associated community + try: + community = self.get_community(packet[2:22]) + except KeyError: + logger.warning("drop a %d byte packet (received packet for unknown community) from %s", len(packet), candidate) + self._statistics.dict_inc(self._statistics.drop, "_convert_packets_into_batch:unknown community") + self._statistics.drop_count += 1 + continue + + # find associated conversion + try: + conversion = community.get_conversion_for_packet(packet) + except KeyError: + logger.warning("drop a %d byte packet (received packet for unknown conversion) from %s", len(packet), candidate) + self._statistics.dict_inc(self._statistics.drop, "_convert_packets_into_batch:unknown conversion") + self._statistics.drop_count += 1 + continue + + try: + # convert binary data into the meta message + yield conversion.decode_meta_message(packet), candidate, packet, conversion + + except DropPacket as exception: + logger.warning("drop a %d byte packet (%s) from %s", len(packet), exception, candidate) + self._statistics.dict_inc(self._statistics.drop, "_convert_packets_into_batch:decode_meta_message:%s" % exception) + self._statistics.drop_count += 1 + + def _convert_batch_into_messages(self, batch): + if __debug__: + from .conversion import Conversion + assert isinstance(batch, (list, set)) + assert len(batch) > 0 + assert all(isinstance(x, tuple) for x in batch) + assert all(len(x) == 3 for x in batch) + + for candidate, packet, conversion in batch: + assert isinstance(candidate, Candidate) + assert isinstance(packet, str) + assert isinstance(conversion, Conversion) + + try: + # convert binary data to internal Message + yield conversion.decode_message(candidate, packet) + + except DropPacket as exception: + logger.warning("drop a %d byte packet (%s) from %s", len(packet), exception, candidate) + self._statistics.dict_inc(self._statistics.drop, "_convert_batch_into_messages:%s" % exception) + self._statistics.drop_count += 1 + + except DelayPacket as delay: + logger.debug("delay a %d byte packet (%s) from %s", len(packet), delay, candidate) + if delay.create_request(candidate, packet): + self._statistics.delay_send += 1 + self._statistics.dict_inc(self._statistics.delay, "_convert_batch_into_messages:%s" % delay) + self._statistics.delay_count += 1 + + def _store(self, messages): + """ + Store a message in the database. + + Messages with the Last- or Full-SyncDistribution policies need to be stored in the database + to allow them to propagate to other members. + + Messages with the LastSyncDistribution policy may also cause an older message to be removed + from the database. + + Messages created by a member that we have marked with must_store will also be stored in the + database, and hence forwarded to others. + + @param message: The unstored message with the SyncDistribution policy. 
+ @type message: Message.Implementation + """ + assert isinstance(messages, list) + assert len(messages) > 0 + assert all(isinstance(message, Message.Implementation) for message in messages) + assert all(message.community == messages[0].community for message in messages) + assert all(message.meta == messages[0].meta for message in messages) + assert all(isinstance(message.distribution, SyncDistribution.Implementation) for message in messages) + # ensure no duplicate messages are present, this MUST HAVE been checked before calling this + # method! + assert len(messages) == len(set((message.authentication.member.database_id, message.distribution.global_time) for message in messages)), messages[0].name + + meta = messages[0].meta + logger.debug("attempting to store %d %s messages", len(messages), meta.name) + is_double_member_authentication = isinstance(meta.authentication, DoubleMemberAuthentication) + highest_global_time = 0 + + # update_sync_range = set() + for message in messages: + # the signature must be set + assert isinstance(message.authentication, (MemberAuthentication.Implementation, DoubleMemberAuthentication.Implementation)), message.authentication + assert message.authentication.is_signed + assert not message.packet[-10:] == "\x00" * 10, message.packet[-10:].encode("HEX") + # we must have the identity message as well + assert message.authentication.encoding == "bin" or message.authentication.member.has_identity(message.community), [message, message.community, message.authentication.member.database_id] + + logger.debug("%s %d@%d", message.name, message.authentication.member.database_id, message.distribution.global_time) + + # add packet to database + self._database.execute(u"INSERT INTO sync (community, member, global_time, meta_message, packet) VALUES (?, ?, ?, ?, ?)", + (message.community.database_id, + message.authentication.member.database_id, + message.distribution.global_time, + message.database_id, + buffer(message.packet))) + # update_sync_range.add(message.distribution.global_time) + if __debug__: + # must have stored one entry + assert self._database.changes == 1 + # when sequence numbers are enabled, we must have exactly + # message.distribution.sequence_number messages in the database + if isinstance(message.distribution, FullSyncDistribution) and message.distribution.enable_sequence_number: + count_ = self._database.execute(u"SELECT COUNT(*) FROM sync WHERE meta_message = ? 
AND member = ?", (message.database_id, message.authentication.member.database_id)).next() + assert count_ == message.distribution.sequence_number, [count_, message.distribution.sequence_number] + + # ensure that we can reference this packet + message.packet_id = self._database.last_insert_rowid + logger.debug("stored message %s in database at row %d", message.name, message.packet_id) + + if is_double_member_authentication: + member1 = message.authentication.members[0].database_id + member2 = message.authentication.members[1].database_id + self._database.execute(u"INSERT INTO double_signed_sync (sync, member1, member2) VALUES (?, ?, ?)", + (message.packet_id, member1, member2) if member1 < member2 else (message.packet_id, member2, member1)) + assert self._database.changes == 1 + + # update global time + highest_global_time = max(highest_global_time, message.distribution.global_time) + + if isinstance(meta.distribution, LastSyncDistribution): + # delete packets that have become obsolete + items = set() + if is_double_member_authentication: + order = lambda member1, member2: (member1, member2) if member1 < member2 else (member2, member1) + for member1, member2 in set(order(message.authentication.members[0].database_id, message.authentication.members[1].database_id) for message in messages): + assert member1 < member2, [member1, member2] + all_items = list(self._database.execute(u""" +SELECT sync.id, sync.global_time +FROM sync +JOIN double_signed_sync ON double_signed_sync.sync = sync.id +WHERE sync.meta_message = ? AND double_signed_sync.member1 = ? AND double_signed_sync.member2 = ? +ORDER BY sync.global_time, sync.packet""", (meta.database_id, member1, member2))) + if len(all_items) > meta.distribution.history_size: + items.update(all_items[:len(all_items) - meta.distribution.history_size]) + + else: + for member_database_id in set(message.authentication.member.database_id for message in messages): + all_items = list(self._database.execute(u""" +SELECT id, global_time +FROM sync +WHERE meta_message = ? AND member = ? +ORDER BY global_time""", (meta.database_id, member_database_id))) + if len(all_items) > meta.distribution.history_size: + items.update(all_items[:len(all_items) - meta.distribution.history_size]) + + if items: + self._database.executemany(u"DELETE FROM sync WHERE id = ?", [(syncid,) for syncid, _ in items]) + assert len(items) == self._database.changes + logger.debug("deleted %d messages", self._database.changes) + + if is_double_member_authentication: + self._database.executemany(u"DELETE FROM double_signed_sync WHERE sync = ?", [(syncid,) for syncid, _ in items]) + assert len(items) == self._database.changes + + # update_sync_range.update(global_time for _, _, global_time in items) + + # 12/10/11 Boudewijn: verify that we do not have to many packets in the database + if __debug__: + if not is_double_member_authentication: + for message in messages: + history_size, = self._database.execute(u"SELECT COUNT(*) FROM sync WHERE meta_message = ? 
AND member = ?", (message.database_id, message.authentication.member.database_id)).next() + assert history_size <= message.distribution.history_size, [count, message.distribution.history_size, message.authentication.member.database_id] + + # update the global time + meta.community.update_global_time(highest_global_time) + + meta.community.dispersy_store(messages) + + # if update_sync_range: + # notify that global times have changed + # meta.community.update_sync_range(meta, update_sync_range) + + @property + def bootstrap_candidates(self): + return self._bootstrap_candidates.itervalues() + + def estimate_lan_and_wan_addresses(self, sock_addr, lan_address, wan_address): + """ + We received a message from SOCK_ADDR claiming to have LAN_ADDRESS and WAN_ADDRESS, returns + the estimated LAN and WAN address for this node. + + The returned LAN address is either ("0.0.0.0", 0) or it is not our LAN address while passing + is_valid_address. Similarly, the returned WAN address is either ("0.0.0.0", 0) or it is not + our WAN address while passing is_valid_address. + """ + if self._lan_address == lan_address or not self.is_valid_address(lan_address): + if lan_address != sock_addr: + logger.debug("estimate a different LAN address %s:%d -> %s:%d", lan_address[0], lan_address[1], sock_addr[0], sock_addr[1]) + lan_address = sock_addr + if self._wan_address == wan_address or not self.is_valid_address(wan_address): + if wan_address != sock_addr: + logger.debug("estimate a different WAN address %s:%d -> %s:%d", wan_address[0], wan_address[1], sock_addr[0], sock_addr[1]) + wan_address = sock_addr + + if sock_addr[0] == self._wan_address[0]: + # we have the same WAN address, we are probably behind the same NAT + if lan_address != sock_addr: + logger.debug("estimate a different LAN address %s:%d -> %s:%d", lan_address[0], lan_address[1], sock_addr[0], sock_addr[1]) + lan_address = sock_addr + + elif self.is_valid_address(sock_addr): + # we have a different WAN address and the sock address is WAN, we are probably behind a different NAT + if wan_address != sock_addr: + logger.debug("estimate a different WAN address %s:%d -> %s:%d", wan_address[0], wan_address[1], sock_addr[0], sock_addr[1]) + wan_address = sock_addr + + elif self.is_valid_address(wan_address): + # we have a different WAN address and the sock address is not WAN, we are probably on the same computer + pass + + else: + # we are unable to determine the WAN address, we are probably behind the same NAT + wan_address = ("0.0.0.0", 0) + + assert self._lan_address != lan_address, [self.lan_address, lan_address] + assert lan_address == ("0.0.0.0", 0) or self.is_valid_address(lan_address), [self._lan_address, lan_address] + assert self._wan_address != wan_address, [self._wan_address, wan_address] + assert wan_address == ("0.0.0.0", 0) or self.is_valid_address(wan_address), [self._wan_address, wan_address] + + return lan_address, wan_address + + def take_step(self, community, allow_sync): + if community.cid in self._communities: + candidate = community.dispersy_get_walk_candidate() + if candidate: + assert community.my_member.private_key + logger.debug("%s %s taking step towards %s", community.cid.encode("HEX"), community.get_classification(), candidate) + community.create_introduction_request(candidate, allow_sync) + return True + else: + logger.debug("%s %s no candidate to take step", community.cid.encode("HEX"), community.get_classification()) + return False + + def handle_missing_messages(self, messages, *classes): + assert all(isinstance(message, 
Message.Implementation) for message in messages) + assert all(issubclass(cls, MissingSomethingCache) for cls in classes) + for message in messages: + for cls in classes: + cache = self._request_cache.pop(cls.message_to_identifier(message), cls) + if cache: + logger.debug("found request cache for %s", message) + for response_func, response_args in cache.callbacks: + response_func(message, *response_args) + + def create_introduction_request(self, community, destination, allow_sync, forward=True): + assert isinstance(destination, WalkCandidate), [type(destination), destination] + + cache = IntroductionRequestCache(community, destination) + destination.walk(time(), cache.timeout_delay) + community.add_candidate(destination) + + # temporary cache object + identifier = self._request_cache.claim(cache) + + # decide if the requested node should introduce us to someone else + # advice = random() < 0.5 or len(community.candidates) <= 5 + advice = True + + # obtain sync range + if not allow_sync or isinstance(destination, BootstrapCandidate): + # do not request a sync when we connecting to a bootstrap candidate + sync = None + + else: + # flush any sync-able items left in the cache before we create a sync + flush_list = [(meta, tup) for meta, tup in self._batch_cache.iteritems() if meta.community == community and isinstance(meta.distribution, SyncDistribution)] + flush_list.sort(key=lambda tup: tup[0].batch.priority, reverse=True) + for meta, (task_identifier, timestamp, batch) in flush_list: + logger.debug("flush cached %dx %s messages (id: %s)", len(batch), meta.name, task_identifier) + self._callback.unregister(task_identifier) + self._on_batch_cache_timeout(meta, timestamp, batch) + + sync = community.dispersy_claim_sync_bloom_filter(cache) + if __debug__: + assert sync is None or isinstance(sync, tuple), sync + if not sync is None: + assert len(sync) == 5, sync + time_low, time_high, modulo, offset, bloom_filter = sync + assert isinstance(time_low, (int, long)), time_low + assert isinstance(time_high, (int, long)), time_high + assert isinstance(modulo, int), modulo + assert isinstance(offset, int), offset + assert isinstance(bloom_filter, BloomFilter), bloom_filter + + # verify that the bloom filter is correct + try: + packets = [str(packet) for packet, in self._database.execute(u""" +SELECT sync.packet +FROM sync +JOIN meta_message ON meta_message.id = sync.meta_message +WHERE sync.community = ? AND meta_message.priority > 32 AND sync.undone = 0 AND global_time BETWEEN ? AND ? AND (sync.global_time + ?) % ? = 0""", + (community.database_id, time_low, community.global_time if time_high == 0 else time_high, offset, modulo))] + except OverflowError: + logger.error("time_low: %d", time_low) + logger.error("time_high: %d", time_high) + logger.error("2**63 - 1: %d", 2 ** 63 - 1) + logger.exception("the sqlite3 python module can not handle values 2**63 or larger. 
limit time_low and time_high to 2**63-1") + assert False + + # BLOOM_FILTER must be the same after transmission + test_bloom_filter = BloomFilter(bloom_filter.bytes, bloom_filter.functions, prefix=bloom_filter.prefix) + assert bloom_filter.bytes == test_bloom_filter.bytes, "problem with the long <-> binary conversion" + assert list(bloom_filter.not_filter((packet,) for packet in packets)) == [], "does not have all correct bits set before transmission" + assert list(test_bloom_filter.not_filter((packet,) for packet in packets)) == [], "does not have all correct bits set after transmission" + + # BLOOM_FILTER must have been correctly filled + test_bloom_filter.clear() + test_bloom_filter.add_keys(packets) + if not bloom_filter.bytes == test_bloom_filter.bytes: + if bloom_filter.get_bits_checked() < test_bloom_filter.get_bits_checked(): + logger.error("%d bits in: %s", bloom_filter.get_bits_checked(), bloom_filter.bytes.encode("HEX")) + logger.error("%d bits in: %s", test_bloom_filter.get_bits_checked(), test_bloom_filter.bytes.encode("HEX")) + assert False, "does not match the given range [%d:%d] %%%d+%d packets:%d" % (time_low, time_high, modulo, offset, len(packets)) + + if destination.get_destination_address(self._wan_address) != destination.sock_addr: + logger.warning("destination address %s should (in theory) be the sock_addr %s", destination.get_destination_address(self._wan_address), destination) + + meta_request = community.get_meta_message(u"dispersy-introduction-request") + request = meta_request.impl(authentication=(community.my_member,), + distribution=(community.global_time,), + destination=(destination,), + payload=(destination.get_destination_address(self._wan_address), self._lan_address, self._wan_address, advice, self._connection_type, sync, identifier)) + + if forward: + if sync: + time_low, time_high, modulo, offset, _ = sync + logger.debug("%s %s sending introduction request to %s [%d:%d] %%%d+%d", community.cid.encode("HEX"), type(community), destination, time_low, time_high, modulo, offset) + else: + logger.debug("%s %s sending introduction request to %s", community.cid.encode("HEX"), type(community), destination) + + self._statistics.walk_attempt += 1 + if isinstance(destination, BootstrapCandidate): + self._statistics.walk_bootstrap_attempt += 1 + if request.payload.advice: + self._statistics.walk_advice_outgoing_request += 1 + self._statistics.dict_inc(self._statistics.outgoing_introduction_request, destination.sock_addr) + + self._forward([request]) + + return request + + def check_introduction_request(self, messages): + """ + We received a dispersy-introduction-request message. + """ + for message in messages: + # 25/01/12 Boudewijn: during all DAS2 NAT node314 often sends requests to herself. This + # results in more candidates (all pointing to herself) being added to the candidate + # list. This converges to only sending requests to herself. To prevent this we will + # drop all requests that have an outstanding identifier. This is not a perfect + # solution, but the chance that two nodes select the same identifier and send requests + # to each other is relatively small. + # 30/10/12 Niels: additionally check if both our lan_addresses are the same. They should + # be if we're sending it to ourself. Not checking wan_address as that is subject to change.
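+ # (editor's note, a back-of-envelope estimate assuming the identifier is a + # 16-bit value, which is an assumption not stated here: with k outstanding + # requests the chance that an honest peer independently picks one of our + # identifiers is roughly k / 2**16, about 0.08% for k = 50, so this drop + # rule rarely discards legitimate requests.)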
+ if self._request_cache.has(message.payload.identifier, IntroductionRequestCache) and self._lan_address == message.payload.source_lan_address: + logger.debug("dropping dispersy-introduction-request, this identifier is already in use.") + yield DropMessage(message, "Duplicate identifier from %s (most likely received from ourself)" % str(message.candidate)) + continue + + logger.debug("accepting dispersy-introduction-request from %s", message.candidate) + yield message + + def on_introduction_request(self, messages): + community = messages[0].community + meta_introduction_response = community.get_meta_message(u"dispersy-introduction-response") + meta_puncture_request = community.get_meta_message(u"dispersy-puncture-request") + responses = [] + requests = [] + now = time() + self._statistics.walk_advice_incoming_request += len(messages) + + # + # make all candidates available for introduction + # + for message in messages: + candidate = community.get_walkcandidate(message) + message._candidate = candidate + if not candidate: + continue + + payload = message.payload + + # apply vote to determine our WAN address + self.wan_address_vote(payload.destination_address, candidate) + + # until we implement a proper 3-way handshake we are going to assume that the creator of + # this message is associated to this candidate + candidate.associate(message.authentication.member) + + # update sender candidate + source_lan_address, source_wan_address = self.estimate_lan_and_wan_addresses(candidate.sock_addr, payload.source_lan_address, payload.source_wan_address) + candidate.update(candidate.tunnel, source_lan_address, source_wan_address, payload.connection_type) + candidate.stumble(now) + community.add_candidate(candidate) + + community.filter_duplicate_candidate(candidate) + logger.debug("received introduction request from %s", candidate) + + # + # process the walker part of the request + # + + for message in messages: + payload = message.payload + candidate = message.candidate + if not candidate: + continue + + if payload.advice: + introduced = community.dispersy_get_introduce_candidate(candidate) + if introduced == None: + logger.debug("no candidates available to introduce") + else: + introduced = None + + if introduced: + logger.debug("telling %s that %s exists %s", candidate, introduced, type(community)) + self._statistics.walk_advice_outgoing_response += 1 + + # create introduction response + responses.append(meta_introduction_response.impl(authentication=(community.my_member,), distribution=(community.global_time,), destination=(candidate,), payload=(candidate.get_destination_address(self._wan_address), self._lan_address, self._wan_address, introduced.lan_address, introduced.wan_address, self._connection_type, introduced.tunnel, payload.identifier))) + + # create puncture request + requests.append(meta_puncture_request.impl(distribution=(community.global_time,), destination=(introduced,), payload=(source_lan_address, source_wan_address, payload.identifier))) + + else: + logger.debug("responding to %s without an introduction %s", candidate, type(community)) + + none = ("0.0.0.0", 0) + responses.append(meta_introduction_response.impl(authentication=(community.my_member,), distribution=(community.global_time,), destination=(candidate,), payload=(candidate.get_destination_address(self._wan_address), self._lan_address, self._wan_address, none, none, self._connection_type, False, payload.identifier))) + + if responses: + self._forward(responses) + if requests: + self._forward(requests) + + # + # process 
the bloom filter part of the request + # + + # obtain all available messages for this community + meta_messages = [(meta.distribution.priority, -meta.distribution.synchronization_direction_value, meta) for meta in community.get_meta_messages() if isinstance(meta.distribution, SyncDistribution) and meta.distribution.priority > 32] + meta_messages.sort(reverse=True) + + sub_selects = [] + for _, _, meta in meta_messages: + sub_selects.append(u""" + SELECT * FROM + (SELECT sync.packet FROM sync + WHERE sync.meta_message = ? AND sync.undone = 0 AND sync.global_time BETWEEN ? AND ? AND (sync.global_time + ?) %% ? = 0 + ORDER BY sync.global_time %s)""" % (meta.distribution.synchronization_direction,)) + + sql = "".join((u"SELECT * FROM (", " UNION ALL ".join(sub_selects), ")")) + logger.debug(sql) + + for message in messages: + payload = message.payload + + if payload.sync: + # we limit the response by byte_limit bytes + byte_limit = community.dispersy_sync_response_limit + time_high = payload.time_high if payload.has_time_high else community.global_time + + # 07/05/12 Boudewijn: for an unknown reason values larger than 2^63-1 cause + # overflow exceptions in the sqlite3 wrapper + # 26/02/13 Boudewijn: time_low and time_high must now be given once for every + # sub_selects, taking into account that time_low may not be below the + # inactive_threshold (if given) + sql_arguments = [] + for _, _, meta in meta_messages: + sql_arguments.extend((meta.database_id, + min(max(payload.time_low, community.global_time - meta.distribution.pruning.inactive_threshold + 1), 2 ** 63 - 1) if isinstance(meta.distribution.pruning, GlobalTimePruning) else min(payload.time_low, 2 ** 63 - 1), + min(time_high, 2 ** 63 - 1), + long(payload.offset), + long(payload.modulo))) + logger.debug("%s", sql_arguments) + + packets = [] + generator = ((str(packet),) for packet, in self._database.execute(sql, sql_arguments)) + + for packet, in payload.bloom_filter.not_filter(generator): + logger.debug("found missing (%d bytes) %s for %s", len(packet), sha1(packet).digest().encode("HEX"), message.candidate) + + packets.append(packet) + byte_limit -= len(packet) + if byte_limit <= 0: + logger.debug("bandwidth throttle") + break + + if packets: + logger.debug("syncing %d packets (%d bytes) to %s", len(packets), sum(len(packet) for packet in packets), message.candidate) + self._statistics.dict_inc(self._statistics.outgoing, u"-sync-", len(packets)) + self._endpoint.send([message.candidate], packets) + + def check_introduction_response(self, messages): + for message in messages: + if not self._request_cache.has(message.payload.identifier, IntroductionRequestCache): + self._statistics.walk_invalid_response_identifier += 1 + yield DropMessage(message, "invalid response identifier") + continue + + # check introduced LAN address, if given + if not message.payload.lan_introduction_address == ("0.0.0.0", 0): + if not self.is_valid_address(message.payload.lan_introduction_address): + yield DropMessage(message, "invalid LAN introduction address [is_valid_address]") + continue + + # check introduced WAN address, if given + if not message.payload.wan_introduction_address == ("0.0.0.0", 0): + if not self.is_valid_address(message.payload.wan_introduction_address): + yield DropMessage(message, "invalid WAN introduction address [is_valid_address]") + continue + + if message.payload.wan_introduction_address == self._wan_address: + yield DropMessage(message, "invalid WAN introduction address [introduced to myself]") + continue + + # if WAN ip-addresses 
match, check if the LAN address is not the same + if message.payload.wan_introduction_address[0] == self._wan_address[0] and message.payload.lan_introduction_address == self._lan_address: + yield DropMessage(message, "invalid LAN introduction address [introduced to myself]") + continue + + # if we do not know the WAN address, make sure that the LAN address is not the same + elif not message.payload.lan_introduction_address == ("0.0.0.0", 0): + if message.payload.lan_introduction_address == self._lan_address: + yield DropMessage(message, "invalid LAN introduction address [introduced to myself]") + continue + + yield message + + def on_introduction_response(self, messages): + community = messages[0].community + now = time() + + for message in messages: + payload = message.payload + + # modify either the senders LAN or WAN address based on how we perceive that node + source_lan_address, source_wan_address = self.estimate_lan_and_wan_addresses(message.candidate.sock_addr, payload.source_lan_address, payload.source_wan_address) + + if isinstance(message.candidate, WalkCandidate): + candidate = message.candidate + candidate.update(candidate.tunnel, source_lan_address, source_wan_address, payload.connection_type) + else: + candidate = community.create_candidate(message.candidate.sock_addr, message.candidate.tunnel, source_lan_address, source_wan_address, payload.connection_type) + + # until we implement a proper 3-way handshake we are going to assume that the creator of + # this message is associated to this candidate + candidate.associate(message.authentication.member) + candidate.walk_response() + community.filter_duplicate_candidate(candidate) + logger.debug("introduction response from %s", candidate) + + # apply vote to determine our WAN address + self.wan_address_vote(payload.destination_address, candidate) + + # increment statistics only the first time + self._statistics.walk_success += 1 + if isinstance(candidate, BootstrapCandidate): + self._statistics.walk_bootstrap_success += 1 + self._statistics.dict_inc(self._statistics.incoming_introduction_response, candidate.sock_addr) + + # get cache object linked to this request and stop timeout from occurring + cache = self._request_cache.pop(payload.identifier, IntroductionRequestCache) + + # handle the introduction + lan_introduction_address = payload.lan_introduction_address + wan_introduction_address = payload.wan_introduction_address + if not (lan_introduction_address == ("0.0.0.0", 0) or wan_introduction_address == ("0.0.0.0", 0) or + lan_introduction_address in self._bootstrap_candidates or wan_introduction_address in self._bootstrap_candidates): + assert self.is_valid_address(lan_introduction_address), lan_introduction_address + assert self.is_valid_address(wan_introduction_address), wan_introduction_address + + # get or create the introduced candidate + self._statistics.walk_advice_incoming_response += 1 + sock_introduction_addr = lan_introduction_address if wan_introduction_address[0] == self._wan_address[0] else wan_introduction_address + introduce = community.get_candidate(sock_introduction_addr, replace=False, lan_address=lan_introduction_address) + if introduce is None: + # create candidate but set its state to inactive to ensure that it will not be + # used. 
note that we call candidate.intro to allow the candidate to be returned + # by get_walk_candidate and yield_candidates + self._statistics.walk_advice_incoming_response_new += 1 + introduce = community.create_candidate(sock_introduction_addr, payload.tunnel, lan_introduction_address, wan_introduction_address, u"unknown") + introduce.inactive(now) + + # reset the 'I have been introduced' timer + community.add_candidate(introduce) + introduce.intro(now) + community.filter_duplicate_candidate(introduce) + logger.debug("received introduction to %s from %s", introduce, candidate) + + cache.response_candidate = introduce + + # update statistics + if self._statistics.received_introductions != None: + self._statistics.received_introductions[candidate.sock_addr][introduce.sock_addr] += 1 + + # TEMP: see which peers we get returned by the trackers + if self._statistics.bootstrap_candidates != None and isinstance(message.candidate, BootstrapCandidate): + self._statistics.bootstrap_candidates[introduce.sock_addr] = self._statistics.bootstrap_candidates.get(introduce.sock_addr, 0) + 1 + + else: + # update statistics + if self._statistics.received_introductions != None: + self._statistics.received_introductions[candidate.sock_addr][wan_introduction_address] += 1 + + # TEMP: see which peers we get returned by the trackers + if self._statistics.bootstrap_candidates != None and isinstance(message.candidate, BootstrapCandidate): + self._statistics.bootstrap_candidates["none"] = self._statistics.bootstrap_candidates.get("none", 0) + 1 + + def check_puncture_request(self, messages): + for message in messages: + if message.payload.lan_walker_address == message.candidate.sock_addr: + yield DropMessage(message, "invalid LAN walker address [puncture herself]") + continue + + if message.payload.wan_walker_address == message.candidate.sock_addr: + yield DropMessage(message, "invalid WAN walker address [puncture herself]") + continue + + if not self.is_valid_address(message.payload.lan_walker_address): + yield DropMessage(message, "invalid LAN walker address [is_valid_address]") + continue + + if not self.is_valid_address(message.payload.wan_walker_address): + yield DropMessage(message, "invalid WAN walker address [is_valid_address]") + continue + + if message.payload.wan_walker_address == self._wan_address: + yield DropMessage(message, "invalid WAN walker address [puncture myself]") + continue + + if message.payload.wan_walker_address[0] == self._wan_address[0] and message.payload.lan_walker_address == self._lan_address: + yield DropMessage(message, "invalid LAN walker address [puncture myself]") + continue + + yield message + + def on_puncture_request(self, messages): + community = messages[0].community + meta_puncture = community.get_meta_message(u"dispersy-puncture") + punctures = [] + for message in messages: + lan_walker_address = message.payload.lan_walker_address + wan_walker_address = message.payload.wan_walker_address + assert self.is_valid_address(lan_walker_address), lan_walker_address + assert self.is_valid_address(wan_walker_address), wan_walker_address + + # we are asked to send a message to a -possibly- unknown peer get the actual candidate + # or create a dummy candidate + sock_addr = lan_walker_address if wan_walker_address[0] == self._wan_address[0] else wan_walker_address + candidate = community.get_candidate(sock_addr, replace=False, lan_address=lan_walker_address) + if candidate is None: + # assume that tunnel is disabled + tunnel = False + candidate = Candidate(sock_addr, tunnel) + + 
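+ # (editor's note) the puncture below is the usual UDP hole punch: we, the + # introduced peer, fire a datagram at the walker so that our own NAT creates + # a mapping which the walker's follow-up packets can then traverse.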
punctures.append(meta_puncture.impl(authentication=(community.my_member,), distribution=(community.global_time,), destination=(candidate,), payload=(self._lan_address, self._wan_address, message.payload.identifier))) + logger.debug("%s asked us to send a puncture to %s", message.candidate, candidate) + + self._forward(punctures) + + def check_puncture(self, messages): + for message in messages: + if not self._request_cache.has(message.payload.identifier, IntroductionRequestCache): + yield DropMessage(message, "invalid response identifier") + continue + + yield message + + def on_puncture(self, messages): + community = messages[0].community + now = time() + + for message in messages: + # get cache object linked to this request but does NOT stop timeout from occurring + cache = self._request_cache.get(message.payload.identifier, IntroductionRequestCache) + + # when the sender is behind a symmetric NAT and we are not, we will not be able to get + # through using the port that the helper node gave us (symmetric NAT will give a + # different port for each destination address). + + # we can match this source address (message.candidate.sock_addr) to the candidate and + # modify the LAN or WAN address that has been proposed. + sock_addr = message.candidate.sock_addr + lan_address, wan_address = self.estimate_lan_and_wan_addresses(sock_addr, message.payload.source_lan_address, message.payload.source_wan_address) + + if not (lan_address == ("0.0.0.0", 0) or wan_address == ("0.0.0.0", 0)): + assert self.is_valid_address(lan_address), lan_address + assert self.is_valid_address(wan_address), wan_address + + # get or create the introduced candidate + candidate = community.get_candidate(sock_addr, replace=True, lan_address=lan_address) + if candidate is None: + # create candidate but set its state to inactive to ensure that it will not be + # used. note that we call candidate.intro to allow the candidate to be returned + # by get_walk_candidate + candidate = community.create_candidate(sock_addr, message.candidate.tunnel, lan_address, wan_address, u"unknown") + candidate.inactive(now) + + else: + # update candidate + candidate.update(message.candidate.tunnel, lan_address, wan_address, u"unknown") + + # reset the 'I have been introduced' timer + community.add_candidate(candidate) + candidate.intro(now) + logger.debug("received introduction to %s", candidate) + + cache.puncture_candidate = candidate + + def store_update_forward(self, messages, store, update, forward): + """ + Usually we need to do three things when we have valid messages: (1) store them in our local + database, (2) process the messages locally by calling the handle_callback method, and (3) + forward the messages to other nodes in the community. This method is a shorthand for doing + those three tasks. + + To reduce the disk activity, namely syncing the database to disk, we will perform the + database commit not after the (1) store operation but after the (2) update operation. This + will ensure that any database changes from handling the message are also synced to disk. It + is important to note that the sync will occur before the (3) forward operation to ensure + that no remote nodes will obtain data that we have not safely synced ourselves. + + For performance reasons messages are processed in batches, where each batch contains only + messages from the same community and the same meta message instance. This method, or more + specifically the methods that handle the actual storage, updating, and forwarding, assume + this clustering.
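+ + Example (editor's sketch): after creating a message a community typically calls + > self._dispersy.store_update_forward([message], store=True, update=True, forward=True) + while debugging code may pass forward=False to keep the message local, as create_identity + below does.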
+ + @param messages: A list with the messages that need to be stored, updated, and forwarded. + All messages need to be from the same community and meta message instance. + @type messages: [Message.Implementation] + + @param store: When True the messages are stored (as defined by their message distribution + policy) in the local dispersy database. This parameter should (almost always) be True, its + inclusion is mostly to allow certain debugging scenarios. + @type store: bool + + @param update: When True the messages are passed to their handle_callback methods. This + parameter should (almost always) be True, its inclusion is mostly to allow certain + debugging scenarios. + @type update: bool + + @param forward: When True the messages are forwarded (as defined by their message + destination policy) to other nodes in the community. This parameter should (almost always) + be True, its inclusion is mostly to allow certain debugging scenarios. + @type forward: bool + """ + assert isinstance(messages, list) + assert len(messages) > 0 + assert all(isinstance(message, Message.Implementation) for message in messages) + assert all(message.community == messages[0].community for message in messages) + assert all(message.meta == messages[0].meta for message in messages) + assert isinstance(store, bool) + assert isinstance(update, bool) + assert isinstance(forward, bool) + + logger.debug("%d %s messages (%s %s %s)", len(messages), messages[0].name, store, update, forward) + + store = store and isinstance(messages[0].meta.distribution, SyncDistribution) + if store: + self._store(messages) + + if update: + try: + messages[0].handle_callback(messages) + except (SystemExit, KeyboardInterrupt, GeneratorExit, AssertionError): + raise + except: + logger.exception("exception during handle_callback for %s", messages[0].name) + return False + + # 07/10/11 Boudewijn: we will only commit if the message was created by ourselves. + # Otherwise we can safely skip the commit overhead, since, if a crash occurs, we will be + # able to regain the data eventually + if store: + my_messages = sum(message.authentication.member == message.community.my_member for message in messages) + if my_messages: + logger.debug("commit user generated message") + self._database.commit() + + self._statistics.created_count += my_messages + self._statistics.dict_inc(self._statistics.created, messages[0].meta.name, my_messages) + + if forward: + return self._forward(messages) + + return True + + def _forward(self, messages): + """ + Queue a sequence of messages to be sent to other members. + + First all messages that use the SyncDistribution policy are stored to the database to allow + them to propagate when a dispersy-sync message is received. + + Second all messages are sent depending on their destination policy: + + - CandidateDestination causes a message to be sent to the addresses in + message.destination.candidates. + + - CommunityDestination causes a message to be sent to one or more addresses to be picked + from the database candidate table. + + @param messages: A sequence with one or more messages.
+ @type messages: [Message.Implementation] + """ + assert isinstance(messages, (tuple, list)) + assert len(messages) > 0 + assert all(isinstance(message, Message.Implementation) for message in messages) + assert all(message.community == messages[0].community for message in messages) + assert all(message.meta == messages[0].meta for message in messages) + + result = False + meta = messages[0].meta + if isinstance(meta.destination, CommunityDestination): + # CommunityDestination.node_count is allowed to be zero + if meta.destination.node_count > 0: + result = all(self._send(list(islice(meta.community.dispersy_yield_verified_candidates(), meta.destination.node_count)), [message]) for message in messages) + + elif isinstance(meta.destination, CandidateDestination): + # CandidateDestination.candidates may be empty + result = all(self._send(message.destination.candidates, [message]) for message in messages) + + else: + raise NotImplementedError(meta.destination) + + return result + + def _send(self, candidates, messages, debug=False): + """ + Send a list of messages to a list of candidates. If no candidates are specified or endpoint reported + a failure this method will return False. + + @param candidates: A sequence with one or more candidates. + @type candidates: [Candidate] + + @param messages: A sequence with one or more messages. + @type messages: [Message.Implementation] + """ + assert isinstance(candidates, (tuple, list, set)), type(candidates) + # 04/03/13 boudewijn: CANDIDATES should contain candidates, never None + # candidates = [candidate for candidate in candidates if candidate] + assert all(isinstance(candidate, Candidate) for candidate in candidates) + assert isinstance(messages, (tuple, list)) + assert len(messages) > 0 + assert all(isinstance(message, Message.Implementation) for message in messages) + + messages_send = False + if len(candidates) and len(messages): + packets = [message.packet for message in messages] + messages_send = self._endpoint.send(candidates, packets) + + if messages_send: + for message in messages: + self._statistics.dict_inc(self._statistics.outgoing, message.meta.name, len(candidates)) + + return messages_send + + def declare_malicious_member(self, member, packets): + """ + Provide one or more signed messages that prove that the creator is malicious. + + The messages are stored separately as proof that MEMBER is malicious, furthermore, all other + messages that MEMBER created are removed from the dispersy database (limited to one + community) to prevent further spreading of its data. + + Furthermore, whenever data is received that is signed by a malicious member, the incoming + data is ignored and the proof is given to the sender to allow her to prevent her from + forwarding any more data. + + Finally, the community is notified. The community can choose what to do, however, it is + important to note that messages from the malicious member are no longer propagated. Hence, + unless all traces from the malicious member are removed, no global consensus can ever be + achieved. + + @param member: The malicious member. + @type member: Member + + @param packets: One or more packets proving that the member is malicious. All packets must + be associated to the same community. 
+ @type packets: [Packet] + """ + if __debug__: + assert isinstance(member, Member) + assert not member.must_blacklist, "must not already be blacklisted" + assert isinstance(packets, list) + assert len(packets) > 0 + assert all(isinstance(packet, Packet) for packet in packets) + assert all(packet.meta == packets[0].meta for packet in packets) + + logger.debug("proof based on %d packets", len(packets)) + + # notify the community + community = packets[0].community + community.dispersy_malicious_member_detected(member, packets) + + # set the member blacklisted tag + member.must_blacklist = True + + # store the proof + self._database.executemany(u"INSERT INTO malicious_proof (community, member, packet) VALUES (?, ?, ?)", + ((community.database_id, member.database_id, buffer(packet.packet)) for packet in packets)) + + # remove all messages created by the malicious member + self._database.execute(u"DELETE FROM sync WHERE community = ? AND member = ?", + (community.database_id, member.database_id)) + + # TODO: if we have an address for the malicious member, we can also remove her from the + # candidate table + + def send_malicious_proof(self, community, member, candidate): + """ + If we have proof that MEMBER is malicious in COMMUNITY, usually in the form of one or more + signed messages, then send this proof to CANDIDATE. + + @param community: The community where member was malicious. + @type community: Community + + @param member: The malicious member. + @type member: Member + + @param candidate: The address where we want the proof to be sent. + @type candidate: Candidate + """ + if __debug__: + from .community import Community + assert isinstance(community, Community) + assert isinstance(member, Member) + assert member.must_blacklist, "must be blacklisted" + assert isinstance(candidate, Candidate) + + packets = [str(packet) for packet, in self._database.execute(u"SELECT packet FROM malicious_proof WHERE community = ? AND member = ?", + (community.database_id, member.database_id))] + logger.debug("found %d malicious proof packets, sending to %s", len(packets), candidate) + + if packets: + self._statistics.dict_inc(self._statistics.outgoing, u"-malicious-proof", len(packets)) + self._endpoint.send([candidate], packets) + + def create_missing_message(self, community, candidate, member, global_time, response_func=None, response_args=(), timeout=10.0): + # ensure that the identifier is 'triggered' somewhere, i.e.
using + # handle_missing_messages(messages, MissingMessageCache) + + sendRequest = False + + identifier = MissingMessageCache.properties_to_identifier(community, member, global_time) + cache = self._request_cache.get(identifier, MissingMessageCache) + if not cache: + logger.debug("%s", identifier) + cache = MissingMessageCache(timeout) + self._request_cache.set(identifier, cache) + + meta = community.get_meta_message(u"dispersy-missing-message") + request = meta.impl(distribution=(community.global_time,), destination=(candidate,), payload=(member, [global_time])) + self._forward([request]) + + sendRequest = True + + if response_func: + cache.callbacks.append((response_func, response_args)) + + return sendRequest + + def on_missing_message(self, messages): + responses = [] # (candidate, packet) tuples + for message in messages: + candidate = message.candidate + community_database_id = message.community.database_id + member_database_id = message.payload.member.database_id + for global_time in message.payload.global_times: + try: + packet, = self._database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community_database_id, member_database_id, global_time)).next() + except StopIteration: + pass + else: + responses.append((candidate, str(packet))) + + for candidate, responses in groupby(responses, key=lambda tup: tup[0]): + # responses is an iterator, for __debug__ we need a list + responses = list(responses) + self._statistics.dict_inc(self._statistics.outgoing, u"-missing-message", len(responses)) + self._endpoint.send([candidate], [packet for _, packet in responses]) + + def create_missing_last_message(self, community, candidate, member, message, count_, response_func=None, response_args=(), timeout=10.0): + if __debug__: + from .community import Community + assert isinstance(community, Community) + assert isinstance(candidate, Candidate) + assert isinstance(member, Member) + assert isinstance(message, Message) + assert isinstance(count_, int) + assert response_func is None or callable(response_func) + assert isinstance(response_args, tuple) + assert isinstance(timeout, float) + assert timeout > 0.0 + + sendRequest = False + + identifier = MissingLastMessageCache.properties_to_identifier(community, member, message) + cache = self._request_cache.get(identifier, MissingLastMessageCache) + if not cache: + cache = MissingLastMessageCache(timeout) + self._request_cache.set(identifier, cache) + + meta = community.get_meta_message(u"dispersy-missing-last-message") + request = meta.impl(distribution=(community.global_time,), destination=(candidate,), payload=(member, message, count_)) + self._forward([request]) + sendRequest = True + + cache.callbacks.append((response_func, response_args)) + return sendRequest + + def on_missing_last_message(self, messages): + for message in messages: + payload = message.payload + packets = [str(packet) for packet, in list(self._database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND meta_message = ? ORDER BY global_time DESC LIMIT ?", + (message.community.database_id, payload.member.database_id, payload.message.database_id, payload.count)))] + self._statistics.dict_inc(self._statistics.outgoing, u"-missing-last-message", len(packets)) + self._endpoint.send([message.candidate], packets) + + def is_valid_address(self, address): + """ + Returns True when ADDRESS is valid. + + ADDRESS must be supplied as a (HOST string, PORT integer) tuple. 
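+ + Illustrative results under the criteria listed below (editor's sketch): + > is_valid_address(("192.0.2.1", 6421))    -> True + > is_valid_address(("0.0.0.0", 6421))     -> False + > is_valid_address(("192.0.2.255", 6421))  -> False  (host ends in .255) + > is_valid_address(("192.0.2.1", 0))      -> False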
+
+ An address is valid when it meets the following criteria:
+ - HOST must be non-empty
+ - HOST must not be '0.0.0.0'
+ - PORT must be > 0
+ - HOST must be 'A.B.C.D' where A, B, and C are numbers higher than or equal to 0 and lower
+ than or equal to 255, and where D is higher than 0 and lower than 255
+ """
+ assert isinstance(address, tuple), type(address)
+ assert len(address) == 2, len(address)
+ assert isinstance(address[0], str), type(address[0])
+ assert isinstance(address[1], int), type(address[1])
+
+ if address[0] == "":
+ return False
+
+ if address[0] == "0.0.0.0":
+ return False
+
+ if address[1] <= 0:
+ return False
+
+ try:
+ binary = inet_aton(address[0])
+ except socket_error:
+ return False
+
+ # ending with .0
+# Niels: is now allowed, subnet mask magic can actually allow for this
+# if binary[3] == "\x00":
+# return False
+
+ # ending with .255
+ if binary[3] == "\xff":
+ return False
+
+ return True
+
+ def create_identity(self, community, sign_with_master=False, store=True, update=True):
+ """
+ Create a dispersy-identity message for self.my_member.
+
+ The dispersy-identity message contains the public key of a community member. In the future
+ other data can be included in this message, however, it must consist of data that does not
+ change over time as this message is only transferred on demand, and not during the sync
+ phase.
+
+ @param community: The community for which the dispersy-identity message will be created.
+ @type community: Community
+
+ @param store: When True the messages are stored (as defined by their message distribution
+ policy) in the local dispersy database. This parameter should (almost always) be True, its
+ inclusion is mostly to allow certain debugging scenarios.
+ @type store: bool
+ """
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(store, bool)
+ meta = community.get_meta_message(u"dispersy-identity")
+
+ # 13/03/12 Boudewijn: currently create_identity is either called when joining or creating a
+ # community. when creating a community self._global_time should be 1, since the master
+ # member dispersy-identity message has just been created. when joining a community
+ # self._global_time should be 0, since no messages have been either received or created.
+ #
+ # as a security feature we force that the global time on dispersy-identity messages is
+ # always 2 or higher (except for master members who should get global time 1)
+ global_time = community.claim_global_time()
+ while global_time < 2:
+ global_time = community.claim_global_time()
+
+ message = meta.impl(authentication=(community.master_member if sign_with_master else community.my_member,),
+ distribution=(global_time,))
+ self.store_update_forward([message], store, update, False)
+ return message
+
+ def on_identity(self, messages):
+ """
+ We received a dispersy-identity message.
+ """
+ for message in messages:
+ # get cache object linked to this request and stop timeout from occurring
+ identifier = MissingMemberCache.message_to_identifier(message)
+ cache = self._request_cache.pop(identifier, MissingMemberCache)
+ if cache:
+ for func, args in cache.callbacks:
+ func(message, *args)
+
+ def create_missing_identity(self, community, candidate, dummy_member, response_func=None, response_args=(), timeout=4.5, forward=True):
+ """
+ Create a dispersy-missing-identity message.
+
+ To verify a message signature we need the corresponding public key from the member who made
+ the signature. When we are missing a public key, we can request a dispersy-identity message
+ which contains this public key.
+
+ # @return True if actual request is made
+ """
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(candidate, Candidate)
+ assert isinstance(dummy_member, DummyMember)
+ assert response_func is None or callable(response_func)
+ assert isinstance(response_args, tuple)
+ assert isinstance(timeout, float)
+ assert isinstance(forward, bool)
+
+ sendRequest = False
+
+ identifier = MissingMemberCache.properties_to_identifier(community, dummy_member)
+ cache = self._request_cache.get(identifier, MissingMemberCache)
+ if not cache:
+ cache = MissingMemberCache(timeout)
+ self._request_cache.set(identifier, cache)
+
+ logger.debug("%s sending missing-identity %s", candidate, dummy_member.mid.encode("HEX"))
+ meta = community.get_meta_message(u"dispersy-missing-identity")
+ request = meta.impl(distribution=(community.global_time,), destination=(candidate,), payload=(dummy_member.mid,))
+ self._forward([request])
+
+ sendRequest = True
+
+ cache.callbacks.append((response_func, response_args))
+ return sendRequest
+
+ def on_missing_identity(self, messages):
+ """
+ We received dispersy-missing-identity messages.
+
+ The message contains the mid of a member. The sender would like to obtain one or more
+ associated dispersy-identity messages.
+
+ @see: create_missing_identity
+
+ @param messages: The dispersy-missing-identity messages.
+ @type messages: [Message.Implementation]
+ """
+ meta = messages[0].community.get_meta_message(u"dispersy-identity")
+ for message in messages:
+ # we are assuming that no more than 10 members have the same sha1 digest.
+ sql = u"SELECT packet FROM sync JOIN member ON member.id = sync.member WHERE sync.community = ? AND sync.meta_message = ? AND member.mid = ? LIMIT 10"
+ packets = [str(packet) for packet, in self._database.execute(sql, (message.community.database_id, meta.database_id, buffer(message.payload.mid)))]
+ if packets:
+ logger.debug("responding with %d identity messages", len(packets))
+ self._statistics.dict_inc(self._statistics.outgoing, u"-dispersy-identity", len(packets))
+ self._endpoint.send([message.candidate], packets)
+
+ else:
+ assert not message.payload.mid == message.community.my_member.mid, "we should always have our own dispersy-identity"
+ logger.warning("could not find any missing members. no response is sent [%s, mid:%s, cid:%s]", message.payload.mid.encode("HEX"), message.community.my_member.mid.encode("HEX"), message.community.cid.encode("HEX"))
+
+ def create_signature_request(self, community, candidate, message, response_func, response_args=(), timeout=10.0, forward=True):
+ """
+ Create a dispersy-signature-request message.
+
+ The dispersy-signature-request message contains a sub-message that is to be signed by
+ another member. The sub-message must use the DoubleMemberAuthentication policy in order to
+ store the two members and their signatures.
+
+ If the other member decides to add their signature she will send back a
+ dispersy-signature-response message. This message contains a (possibly) modified version of
+ the sub-message.
+
+ Receiving the dispersy-signature-response message results in a call to RESPONSE_FUNC. The
+ first parameter for this call is the SignatureRequestCache instance returned by
+ create_signature_request, the second parameter is the proposed message that was sent back,
+ the third parameter is a boolean indicating whether MESSAGE was modified.
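+
+ A minimal response handler might look as follows (illustrative sketch; the handler
+ name and the surrounding variables are hypothetical):
+
+ >>> def on_response(cache, response, modified):
+ ... # response is None on timeout; accept any completed proposal
+ ... return response is not None
+ >>> dispersy.create_signature_request(community, candidate, message, on_response)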
+
+ RESPONSE_FUNC must return a boolean value indicating whether the proposed message (the
+ second parameter) is accepted. Once we accept all signature responses we will add our own
+ signature and the last proposed message is stored, updated, and forwarded.
+
+ If not all members send a reply within timeout seconds, one final call to response_func is
+ made with the second parameter set to None.
+
+ @param community: The community for which the dispersy-signature-request message will be
+ created.
+ @type community: Community
+
+ @param candidate: Destination candidate.
+ @type candidate: Candidate
+
+ @param message: The message that needs the signature.
+ @type message: Message.Implementation
+
+ @param response_func: The method that is called when a signature or a timeout is received.
+ @type response_func: callable method
+
+ @param response_args: Optional arguments added when calling response_func.
+ @type response_args: tuple
+
+ @param timeout: How long before a timeout is generated.
+ @type timeout: float
+
+ @param forward: When True the messages are forwarded (as defined by their message
+ destination policy) to other nodes in the community. This parameter should (almost always)
+ be True, its inclusion is mostly to allow certain debugging scenarios.
+ @type forward: bool
+ """
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(candidate, Candidate)
+ assert isinstance(message, Message.Implementation)
+ assert isinstance(message.authentication, DoubleMemberAuthentication.Implementation)
+ assert hasattr(response_func, "__call__")
+ assert isinstance(response_args, tuple)
+ assert isinstance(timeout, float)
+ assert isinstance(forward, bool)
+
+ # the members that need to sign
+ members = [member for signature, member in message.authentication.signed_members if not (signature or member.private_key)]
+ assert len(members) == 1
+
+ # temporary cache object
+ cache = SignatureRequestCache(members, response_func, response_args, timeout)
+ identifier = self._request_cache.claim(cache)
+
+ # the dispersy-signature-request message that will hold the
+ # message that should obtain more signatures
+ meta = community.get_meta_message(u"dispersy-signature-request")
+ cache.request = meta.impl(distribution=(community.global_time,),
+ destination=(candidate,),
+ payload=(identifier, message))
+
+ logger.debug("asking %s", [member.mid.encode("HEX") for member in members])
+ self._forward([cache.request])
+ return cache
+
+ def check_signature_request(self, messages):
+ assert isinstance(messages[0].meta.authentication, NoAuthentication)
+ for message in messages:
+ # we can not timeline.check this message because it uses the NoAuthentication policy
+
+ # submsg contains the double signed message (that currently contains -no- signatures)
+ submsg = message.payload.message
+
+ has_private_member = False
+ try:
+ for is_signed, member in submsg.authentication.signed_members:
+ # security: do NOT allow accidentally signing with the master member.
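+ # (a message signed with the master member key would carry community-wide
+ # authority, so this must never happen by accident; the request is dropped below)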
+ if member == message.community.master_member:
+ raise DropMessage(message, "You may never ask for a master member signature")
+
+ # is this signature missing, and could we provide it
+ if not is_signed and member.private_key:
+ has_private_member = True
+ break
+ except DropMessage as exception:
+ yield exception
+ continue
+
+ # we must be one of the members that needs to sign
+ if not has_private_member:
+ yield DropMessage(message, "Nothing to sign")
+ continue
+
+ # we can not timeline.check the submessage because it uses the DoubleMemberAuthentication policy
+ # the message that we are signing must be valid according to our timeline
+ # if not message.community.timeline.check(submsg):
+ # raise DropMessage("Does not fit timeline")
+
+ # allow message
+ yield message
+
+ def on_signature_request(self, messages):
+ """
+ We received a dispersy-signature-request message.
+
+ This message contains a sub-message (message.payload.message) that the message creator would
+ like to have us sign. We can choose for ourselves if we want to add our signature to the
+ sub-message or not.
+
+ Once we have determined that we could provide a signature and that the sub-message is valid,
+ from a timeline perspective, we will ask the community to say yes or no to adding our
+ signature. This question is asked by calling the
+ sub-message.authentication.allow_signature_func method.
+
+ We will only add our signature if the allow_signature_func method returns the same, or a
+ modified sub-message. If so, a dispersy-signature-response message is sent to the creator
+ of the message, the first one in the authentication list.
+
+ If we can add multiple signatures, i.e. we have the private keys for both the message
+ creator and the second member, the allow_signature_func is called only once but multiple
+ signatures will be appended.
+
+ @see: create_signature_request
+
+ @param messages: The dispersy-signature-request messages.
+ @type messages: [Message.Implementation] + """ + meta = messages[0].community.get_meta_message(u"dispersy-signature-response") + responses = [] + for message in messages: + assert isinstance(message, Message.Implementation), type(message) + assert isinstance(message.payload.message, Message.Implementation), type(message.payload.message) + assert isinstance(message.payload.message.authentication, DoubleMemberAuthentication.Implementation), type(message.payload.message.authentication) + + # the community must allow this signature + submsg = message.payload.message.authentication.allow_signature_func(message.payload.message) + assert submsg is None or isinstance(submsg, Message.Implementation), type(submsg) + if submsg: + responses.append(meta.impl(distribution=(message.community.global_time,), + destination=(message.candidate,), + payload=(message.payload.identifier, submsg))) + + if responses: + self._forward(responses) + + def check_signature_response(self, messages): + unique = set() + + for message in messages: + if message.payload.identifier in unique: + yield DropMessage(message, "duplicate identifier in batch") + continue + + cache = self._request_cache.get(message.payload.identifier, SignatureRequestCache) + if not cache: + yield DropMessage(message, "invalid response identifier") + continue + + old_submsg = cache.request.payload.message + new_submsg = message.payload.message + + if not old_submsg.meta == new_submsg.meta: + yield DropMessage(message, "meta message may not change") + continue + + if not old_submsg.authentication.member == new_submsg.authentication.member: + yield DropMessage(message, "first member may not change") + continue + + if not old_submsg.distribution.global_time == new_submsg.distribution.global_time: + yield DropMessage(message, "global time may not change") + continue + + unique.add(message.payload.identifier) + yield message + + def on_signature_response(self, messages): + """ + Handle one or more dispersy-signature-response messages. + + We sent out a dispersy-signature-request, through the create_signature_request method, and + have now received a dispersy-signature-response in reply. If the signature is valid, we + will call response_func with sub-message, where sub-message is the message parameter given + to the create_signature_request method. + + Note that response_func is also called when the sub-message does not yet contain all the + signatures. This can be checked using sub-message.authentication.is_signed. + """ + for message in messages: + # get cache object linked to this request and stop timeout from occurring + cache = self._request_cache.pop(message.payload.identifier, SignatureRequestCache) + + old_submsg = cache.request.payload.message + new_submsg = message.payload.message + + old_body = old_submsg.packet[:len(old_submsg.packet) - sum([member.signature_length for member in old_submsg.authentication.members])] + new_body = new_submsg.packet[:len(new_submsg.packet) - sum([member.signature_length for member in new_submsg.authentication.members])] + + result = cache.response_func(cache, new_submsg, old_body != new_body, *cache.response_args) + assert isinstance(result, bool), "RESPONSE_FUNC must return a boolean value! 
True to accept the proposed message, False to reject %s %s" % (type(cache), str(cache.response_func))
+ if result:
+ # add our own signatures and we can handle the message
+ for signature, member in new_submsg.authentication.signed_members:
+ if not signature and member.private_key:
+ new_submsg.authentication.set_signature(member, member.sign(new_body))
+
+ assert new_submsg.authentication.is_signed
+ self.store_update_forward([new_submsg], True, True, True)
+
+ def create_missing_sequence(self, community, candidate, member, message, missing_low, missing_high, response_func=None, response_args=(), timeout=10.0):
+ # ensure that the identifier is 'triggered' somewhere, i.e. using
+ # handle_missing_messages(messages, MissingSequenceCache)
+
+ sendRequest = False
+
+ # the MissingSequenceCache allows us to match the missing_high to the response_func
+ identifier = MissingSequenceCache.properties_to_identifier(community, member, message, missing_high)
+ cache = self._request_cache.get(identifier, MissingSequenceCache)
+ if not cache:
+ cache = MissingSequenceCache(timeout)
+ self._request_cache.set(identifier, cache)
+
+ if response_func:
+ cache.callbacks.append((response_func, response_args))
+
+ # the MissingSequenceOverviewCache ensures that we do not request duplicate ranges
+ identifier = MissingSequenceOverviewCache.properties_to_identifier(community, member, message)
+ overview = self._request_cache.get(identifier, MissingSequenceOverviewCache)
+ if not overview:
+ overview = MissingSequenceOverviewCache(timeout)
+ self._request_cache.set(identifier, overview)
+
+ if overview.missing_high == 0 or missing_high > overview.missing_high:
+ missing_low = max(overview.missing_high, missing_low)
+ overview.missing_high = missing_high
+
+ logger.debug("%s sending missing-sequence %s %s [%d:%d]", candidate, member.mid.encode("HEX"), message.name, missing_low, missing_high)
+ meta = community.get_meta_message(u"dispersy-missing-sequence")
+ request = meta.impl(distribution=(community.global_time,), destination=(candidate,), payload=(member, message, missing_low, missing_high))
+ self._forward([request])
+
+ sendRequest = True
+
+ return sendRequest
+
+ def on_missing_sequence(self, messages):
+ """
+ We received a dispersy-missing-sequence message.
+
+ The message contains a member and a range of sequence numbers. We will send the messages,
+ up to a certain limit, in this range back to the sender.
+
+ To limit the amount of bandwidth used we will not send back more data after a certain amount
+ has been sent. This magic number is subject to change.
+
+ Sometimes peers will request overlapping sequence numbers. Only unique messages will be
+ given back (per batch). Also, if multiple sequence number ranges are requested, these
+ ranges are merged into one large range, and all sequence numbers it contains are given
+ back.
+
+ @param messages: dispersy-missing-sequence messages.
+ @type messages: [Message.Implementation]
+ """
+ community = messages[0].community
+ sources = defaultdict(lambda: defaultdict(set))
+
+ logger.debug("received %d missing-sequence messages for community %d", len(messages), community.database_id)
+
+ # we know that there are buggy clients out there that give numerous overlapping requests.
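+ # (hypothetical example: ranges [1:100] and [50:150] requested by the same candidate
+ # for the same member/message collapse into one set covering [1:150], so every
+ # sequence number is fetched from the database at most once)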
+ # we will filter these to perform as few queries on the database as possible
+ for message in messages:
+ member_id = message.payload.member.database_id
+ message_id = message.payload.message.database_id
+ logger.debug("%s requests member:%d message_id:%d range:[%d:%d]", message.candidate, member_id, message_id, message.payload.missing_low, message.payload.missing_high)
+ for sequence in xrange(message.payload.missing_low, message.payload.missing_high + 1):
+ if sequence in sources[message.candidate][(member_id, message_id)]:
+ logger.debug("ignoring duplicate request for %d:%d:%d from %s", member_id, message_id, sequence, message.candidate)
+ sources[message.candidate][(member_id, message_id)].update(xrange(message.payload.missing_low, message.payload.missing_high + 1))
+
+ for candidate, requests in sources.iteritems():
+ assert isinstance(candidate, Candidate), type(candidate)
+
+ # we limit the response by byte_limit bytes per incoming candidate
+ byte_limit = community.dispersy_missing_sequence_response_limit
+
+ # it is much easier to count packets... hence, to optimize we translate the byte_limit
+ # into a packet limit. we will assume a 128 byte packet size (security packets are
+ # generally small)
+ packet_limit = max(1, int(byte_limit / 128))
+ logger.debug("will allow at most... byte_limit:%d packet_limit:%d for %s", byte_limit, packet_limit, candidate)
+
+ packets = []
+ for (member_id, message_id), sequences in requests.iteritems():
+ if not sequences:
+ # empty set will fail min(...) and max(...)
+ continue
+ lowest, highest = min(sequences), max(sequences)
+
+ # limiter
+ highest = min(lowest + packet_limit, highest)
+
+ logger.debug("fetching member:%d message:%d %d packets from database for %s", member_id, message_id, highest - lowest + 1, candidate)
+ for packet, in self._database.execute(u"SELECT packet FROM sync WHERE member = ? AND meta_message = ? ORDER BY global_time LIMIT ? OFFSET ?",
+ (member_id, message_id, highest - lowest + 1, lowest - 1)):
+ packet = str(packet)
+ packets.append(packet)
+
+ packet_limit -= 1
+ byte_limit -= len(packet)
+ if byte_limit <= 0:
+ logger.debug("Bandwidth throttle. byte_limit:%d packet_limit:%d", byte_limit, packet_limit)
+ break
+
+ if byte_limit <= 0 or packet_limit <= 0:
+ logger.debug("Bandwidth throttle. 
byte_limit:%d packet_limit:%d", byte_limit, packet_limit) + break + + if __debug__: + # ensure we are sending the correct sequence numbers back + for packet in packets: + msg = self.convert_packet_to_message(packet, community) + assert msg + assert min(requests[(msg.authentication.member.database_id, msg.database_id)]) <= msg.distribution.sequence_number, ["giving back a seq-number that is smaller than the lowest request", msg.distribution.sequence_number, min(requests[(msg.authentication.member.database_id, msg.database_id)]), max(requests[(msg.authentication.member.database_id, msg.database_id)])] + assert msg.distribution.sequence_number <= max(requests[(msg.authentication.member.database_id, msg.database_id)]), ["giving back a seq-number that is larger than the highest request", msg.distribution.sequence_number, min(requests[(msg.authentication.member.database_id, msg.database_id)]), max(requests[(msg.authentication.member.database_id, msg.database_id)])] + logger.debug("syncing %d bytes, member:%d message:%d sequence:%d explicit:%s to %s", len(packet), msg.authentication.member.database_id, msg.database_id, msg.distribution.sequence_number, "T" if msg.distribution.sequence_number in requests[(msg.authentication.member.database_id, msg.database_id)] else "F", candidate) + + self._statistics.dict_inc(self._statistics.outgoing, u"-sequence-", len(packets)) + self._endpoint.send([candidate], packets) + + def create_missing_proof(self, community, candidate, message, response_func=None, response_args=(), timeout=10.0): + # ensure that the identifier is 'triggered' somewhere, i.e. using + # handle_missing_messages(messages, MissingProofCache) + + sendRequest = False + identifier = MissingProofCache.properties_to_identifier(community) + cache = self._request_cache.get(identifier, MissingProofCache) + if not cache: + logger.debug("%s", identifier) + cache = MissingProofCache(timeout) + self._request_cache.set(identifier, cache) + + key = (message.meta, message.authentication.member) + if not key in cache.duplicates: + cache.duplicates.append(key) + + meta = community.get_meta_message(u"dispersy-missing-proof") + request = meta.impl(distribution=(community.global_time,), destination=(candidate,), payload=(message.authentication.member, message.distribution.global_time)) + self._forward([request]) + sendRequest = True + + if response_func: + cache.callbacks.append((response_func, response_args)) + return sendRequest + + def on_missing_proof(self, messages): + community = messages[0].community + for message in messages: + try: + packet, = self._database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ? LIMIT 1", + (community.database_id, message.payload.member.database_id, message.payload.global_time)).next() + + except StopIteration: + logger.warning("someone asked for proof for a message that we do not have") + + else: + packet = str(packet) + msg = self.convert_packet_to_message(packet, community, verify=False) + allowed, proofs = community.timeline.check(msg) + if allowed and proofs: + logger.debug("we found %d packets containing proof for %s", len(proofs), message.candidate) + self._statistics.dict_inc(self._statistics.outgoing, u"-proof-", len(proofs)) + self._endpoint.send([message.candidate], [proof.packet for proof in proofs]) + + else: + logger.debug("unable to give %s missing proof. allowed:%s. 
proofs:%d packets", message.candidate, allowed, len(proofs))
+
+ def create_authorize(self, community, permission_triplets, sign_with_master=False, store=True, update=True, forward=True):
+ """
+ Grant permissions to members in a community.
+
+ This method will generate a message that grants the permissions in permission_triplets.
+ Each item in permission_triplets contains (Member, Message, permission) where permission is
+ either u'permit', u'authorize', or u'revoke'.
+
+ By default, community.my_member is doing the authorization. This means that
+ community.my_member must have the authorize permission for each of the permissions that she
+ is authorizing.
+
+ >>> # Authorize Bob to use Permit payload for 'some-message'
+ >>> from Payload import Permit
+ >>> bob = dispersy.get_member(bob_public_key)
+ >>> msg = self.get_meta_message(u"some-message")
+ >>> self.create_authorize(community, [(bob, msg, u'permit')])
+
+ @param community: The community where the permissions must be applied.
+ @type community: Community
+
+ @param permission_triplets: The permissions that are granted. Must be a list or tuple
+ containing (Member, Message, permission) tuples.
+ @type permission_triplets: [(Member, Message, string)]
+
+ @param sign_with_master: When True community.master_member is used to sign the authorize
+ message. Otherwise community.my_member is used.
+ @type sign_with_master: bool
+
+ @param store: When True the messages are stored (as defined by their message distribution
+ policy) in the local dispersy database. This parameter should (almost always) be True, its
+ inclusion is mostly to allow certain debugging scenarios.
+ @type store: bool
+
+ @param update: When True the messages are passed to their handle_callback methods. This
+ parameter should (almost always) be True, its inclusion is mostly to allow certain
+ debugging scenarios.
+ @type update: bool
+
+ @param forward: When True the messages are forwarded (as defined by their message
+ destination policy) to other nodes in the community. This parameter should (almost always)
+ be True, its inclusion is mostly to allow certain debugging scenarios.
+ @type forward: bool
+ """
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(permission_triplets, (tuple, list))
+ for triplet in permission_triplets:
+ assert isinstance(triplet, tuple)
+ assert len(triplet) == 3
+ assert isinstance(triplet[0], Member)
+ assert isinstance(triplet[1], Message)
+ assert isinstance(triplet[2], unicode)
+ assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo")
+
+ meta = community.get_meta_message(u"dispersy-authorize")
+ message = meta.impl(authentication=((community.master_member if sign_with_master else community.my_member),),
+ distribution=(community.claim_global_time(), self._claim_master_member_sequence_number(community, meta) if sign_with_master else meta.distribution.claim_sequence_number()),
+ payload=(permission_triplets,))
+
+ self.store_update_forward([message], store, update, forward)
+ return message
+
+ # def check_authorize(self, messages):
+ # check = message.community.timeline.check
+
+ # for message in messages:
+ # allowed, proofs = check(message)
+ # if allowed:
+
+ # ensure that the author has the authorize permission
+ # authorize_allowed, authorize_proofs = check(messageauthor, global_time, [(message, u"authorize") for _, message, __ in permission_triplets])
+ # if not authorize_allowed:
+ # yield DelayMessageByProof(message)
+
+ # yield message
+ # else:
+ # yield DelayMessageByProof(message)
+
+ def on_authorize(self, messages, initializing=False):
+ """
+ Process a dispersy-authorize message.
+
+ This method is called to process a dispersy-authorize message. This message is either
+ received from a remote source or locally generated.
+
+ @param messages: The received messages.
+ @type messages: [Message.Implementation]
+
+ @raise DropMessage: When unable to verify that this message is valid.
+ @todo: We should raise a DelayMessageByProof to ensure that we request the proof for this
+ message immediately.
+ """
+ for message in messages:
+ logger.debug("%s", message)
+ message.community.timeline.authorize(message.authentication.member, message.distribution.global_time, message.payload.permission_triplets, message)
+
+ # this might be a response to a dispersy-missing-proof or dispersy-missing-sequence
+ self.handle_missing_messages(messages, MissingProofCache, MissingSequenceCache)
+
+ def create_revoke(self, community, permission_triplets, sign_with_master=False, store=True, update=True, forward=True):
+ """
+ Revoke permissions from members in a community.
+
+ This method will generate a message that revokes the permissions in permission_triplets.
+ Each item in permission_triplets contains (Member, Message, permission) where permission is
+ either u'permit', u'authorize', or u'revoke'.
+
+ By default, community.my_member is doing the revoking. This means that community.my_member
+ must have the revoke permission for each of the permissions that she is revoking.
+
+ >>> # Revoke the right of Bob to use Permit payload for 'some-message'
+ >>> from Payload import Permit
+ >>> bob = dispersy.get_member(bob_public_key)
+ >>> msg = self.get_meta_message(u"some-message")
+ >>> self.create_revoke(community, [(bob, msg, u'permit')])
+
+ @param community: The community where the permissions must be applied.
+ @type community: Community
+
+ @param permission_triplets: The permissions that are revoked. Must be a list or tuple
+ containing (Member, Message, permission) tuples.
+ @type permission_triplets: [(Member, Message, string)]
+
+ @param sign_with_master: When True community.master_member is used to sign the revoke
+ message. Otherwise community.my_member is used.
+ @type sign_with_master: bool
+
+ @param store: When True the messages are stored (as defined by their message distribution
+ policy) in the local dispersy database. This parameter should (almost always) be True, its
+ inclusion is mostly to allow certain debugging scenarios.
+ @type store: bool
+
+ @param update: When True the messages are passed to their handle_callback methods. This
+ parameter should (almost always) be True, its inclusion is mostly to allow certain
+ debugging scenarios.
+ @type update: bool
+
+ @param forward: When True the messages are forwarded (as defined by their message
+ destination policy) to other nodes in the community. This parameter should (almost always)
+ be True, its inclusion is mostly to allow certain debugging scenarios.
+ @type forward: bool
+ """
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(permission_triplets, (tuple, list))
+ for triplet in permission_triplets:
+ assert isinstance(triplet, tuple)
+ assert len(triplet) == 3
+ assert isinstance(triplet[0], Member)
+ assert isinstance(triplet[1], Message)
+ assert isinstance(triplet[2], unicode)
+ assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo")
+
+ meta = community.get_meta_message(u"dispersy-revoke")
+ message = meta.impl(authentication=((community.master_member if sign_with_master else community.my_member),),
+ distribution=(community.claim_global_time(), self._claim_master_member_sequence_number(community, meta) if sign_with_master else meta.distribution.claim_sequence_number()),
+ payload=(permission_triplets,))
+
+ self.store_update_forward([message], store, update, forward)
+ return message
+
+ def on_revoke(self, messages, initializing=False):
+ """
+ Process a dispersy-revoke message.
+
+ This method is called to process a dispersy-revoke message. This message is either received
+ from an external source or locally generated.
+
+ @param messages: The received messages.
+ @type messages: [Message.Implementation]
+
+ @raise DropMessage: When unable to verify that this message is valid.
+ @todo: We should raise a DelayMessageByProof to ensure that we request the proof for this
+ message immediately.
+ """
+ for message in messages:
+ message.community.timeline.revoke(message.authentication.member, message.distribution.global_time, message.payload.permission_triplets, message)
+
+ # this might be a response to a dispersy-missing-sequence
+ self.handle_missing_messages(messages, MissingSequenceCache)
+
+ def create_undo(self, community, message, sign_with_master=False, store=True, update=True, forward=True):
+ """
+ Create a dispersy-undo-own or dispersy-undo-other message to undo MESSAGE.
+
+ A dispersy-undo-own message is created when MESSAGE.authentication.member is
+ COMMUNITY.my_member and SIGN_WITH_MASTER is False. Otherwise a dispersy-undo-other message
+ is created.
+
+ As a safeguard, when MESSAGE is already marked as undone in the database, the associated
+ dispersy-undo-own or dispersy-undo-other message is returned instead of creating a new one.
+ None is returned when MESSAGE is already marked as undone and neither of these messages can
+ be found.
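+
+ >>> # Undo a message we created earlier (illustrative; msg is any undoable
+ >>> # Message.Implementation created by community.my_member)
+ >>> undo_msg = dispersy.create_undo(community, msg)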
+ """ + if __debug__: + from .community import Community + assert isinstance(community, Community) + assert isinstance(message, Message.Implementation) + assert isinstance(sign_with_master, bool) + assert isinstance(store, bool) + assert isinstance(update, bool) + assert isinstance(forward, bool) + assert message.undo_callback, "message does not allow undo" + assert not message.name in (u"dispersy-undo-own", u"dispersy-undo-other", u"dispersy-authorize", u"dispersy-revoke"), "Currently we do NOT support undoing any of these, as it has consequences for other messages" + + # creating a second dispersy-undo for the same message is malicious behavior (it can cause + # infinate data traffic). nodes that notice this behavior must blacklist the offending + # node. hence we ensure that we did not send an undo before + try: + undone, = self._database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, message.authentication.member.database_id, message.distribution.global_time)).next() + + except StopIteration: + assert False, "The message that we want to undo does not exist. Programming error" + return None + + else: + if undone: + logger.error("you are attempting to undo the same message twice. this should never be attempted as it is considered malicious behavior") + + # already undone. refuse to undo again but return the previous undo message + undo_own_meta = community.get_meta_message(u"dispersy-undo-own") + undo_other_meta = community.get_meta_message(u"dispersy-undo-other") + for packet_id, message_id, packet in self._database.execute(u"SELECT id, meta_message, packet FROM sync WHERE community = ? AND member = ? AND meta_message IN (?, ?)", + (community.database_id, message.authentication.member.database_id, undo_own_meta.database_id, undo_other_meta.database_id)): + msg = Packet(undo_own_meta if undo_own_meta.database_id == message_id else undo_other_meta, str(packet), packet_id).load_message() + if message.distribution.global_time == msg.payload.global_time: + return msg + + # could not find the undo message that caused the sync.undone to be True. the + # undone was probably caused by changing permissions + return None + + else: + # create the undo message + meta = community.get_meta_message(u"dispersy-undo-own" if community.my_member == message.authentication.member and not sign_with_master else u"dispersy-undo-other") + msg = meta.impl(authentication=((community.master_member if sign_with_master else community.my_member),), + distribution=(community.claim_global_time(), self._claim_master_member_sequence_number(community, meta) if sign_with_master else meta.distribution.claim_sequence_number()), + payload=(message.authentication.member, message.distribution.global_time, message)) + + if __debug__: + assert msg.distribution.global_time > message.distribution.global_time + allowed, _ = community.timeline.check(msg) + assert allowed, "create_undo was called without having the permission to undo" + + self.store_update_forward([msg], store, update, forward) + return msg + + def check_undo(self, messages): + # Note: previously all MESSAGES have been checked to ensure that the sequence numbers are + # correct. this check takes into account the messages in the batch. hence, if one of these + # messages is dropped or delayed it can invalidate the sequence numbers of the other + # messages in this batch! 
+ + assert all(message.name in (u"dispersy-undo-own", u"dispersy-undo-other") for message in messages) + community = messages[0].community + + dependencies = {} + + for message in messages: + if message.payload.packet is None: + # message.resume can be many things. for example: another undo message (when delayed by + # missing sequence) or a message (when delayed by missing message). + if (message.resume and + message.resume.community.database_id == community.database_id and + message.resume.authentication.member.database_id == message.payload.member.database_id and + message.resume.distribution.global_time == message.payload.global_time): + logger.debug("using resume cache") + message.payload.packet = message.resume + + else: + # obtain the packet that we are attempting to undo + try: + packet_id, message_name, packet_data = self._database.execute(u"SELECT sync.id, meta_message.name, sync.packet FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? AND sync.member = ? AND sync.global_time = ?", + (community.database_id, message.payload.member.database_id, message.payload.global_time)).next() + except StopIteration: + delay = DelayMessageByMissingMessage(message, message.payload.member, message.payload.global_time) + dependencies[message.authentication.member.public_key] = (message.distribution.sequence_number, delay) + yield delay + continue + + logger.debug("using packet from database") + message.payload.packet = Packet(community.get_meta_message(message_name), str(packet_data), packet_id) + + # ensure that the message in the payload allows undo + if not message.payload.packet.meta.undo_callback: + drop = DropMessage(message, "message does not allow undo") + dependencies[message.authentication.member.public_key] = (message.distribution.sequence_number, drop) + yield drop + continue + + # check the timeline + allowed, _ = message.community.timeline.check(message) + if not allowed: + delay = DelayMessageByProof(message) + dependencies[message.authentication.member.public_key] = (message.distribution.sequence_number, delay) + yield delay + continue + + # check batch dependencies + dependency = dependencies.get(message.authentication.member.public_key) + if dependency: + sequence_number, consequence = dependency + assert sequence_number < message.distribution.sequence_number, [sequence_number, message.distribution.sequence_number] + # MESSAGE gets the same consequence as the previous message + logger.debug("apply same consequence on later message (%s on #%d applies to #%d)", consequence, sequence_number, message.distribution.sequence_number) + yield consequence.duplicate(message) + continue + + try: + undone, = self._database.execute(u"SELECT undone FROM sync WHERE id = ?", (message.payload.packet.packet_id,)).next() + except StopIteration: + assert False, "The conversion ensures that the packet exists in the DB. Hence this should never occur" + undone = 0 + + if undone and message.name == u"dispersy-undo-own": + # the dispersy-undo-own message is a curious beast. Anyone is allowed to create one + # (regardless of the community settings) and everyone is responsible to propagate + # these messages. A malicious member could create an infinite number of + # dispersy-undo-own messages and thereby take down a community. + # + # to prevent this, we allow only one dispersy-undo-own message per message. When we + # detect a second message, the member is declared to be malicious and blacklisted. + # The proof of being malicious is forwarded to other nodes. 
The malicious node is
+ # now limited to creating only one dispersy-undo-own message per message that she
+ # creates. And that can be limited by revoking her right to create messages.
+
+ # search for the second offending dispersy-undo message
+ member = message.authentication.member
+ undo_own_meta = community.get_meta_message(u"dispersy-undo-own")
+ for packet_id, packet in self._database.execute(u"SELECT id, packet FROM sync WHERE community = ? AND member = ? AND meta_message = ?",
+ (community.database_id, member.database_id, undo_own_meta.database_id)):
+ msg = Packet(undo_own_meta, str(packet), packet_id).load_message()
+ if message.payload.global_time == msg.payload.global_time:
+ logger.warning("detected malicious behavior")
+ self.declare_malicious_member(member, [msg, message])
+
+ # the sender apparently does not have the offending dispersy-undo message, let's give it to her
+ self._statistics.dict_inc(self._statistics.outgoing, msg.name)
+ self._endpoint.send([message.candidate], [msg.packet])
+
+ if member == community.my_member:
+ logger.error("fatal error. apparently we are malicious")
+
+ yield DropMessage(message, "the message proves that the member is malicious")
+ break
+
+ else:
+ # did not break, hence, the message is not malicious. more than one member
+ # undid this message
+ yield message
+
+ # continue. either the message was malicious or it has already been yielded
+ continue
+
+ yield message
+
+ def on_undo(self, messages):
+ """
+ Undo a single message.
+ """
+ assert all(message.name in (u"dispersy-undo-own", u"dispersy-undo-other") for message in messages)
+
+ self._database.executemany(u"UPDATE sync SET undone = ? WHERE community = ? AND member = ? AND global_time = ?",
+ ((message.packet_id, message.community.database_id, message.payload.member.database_id, message.payload.global_time) for message in messages))
+ for meta, iterator in groupby(messages, key=lambda x: x.payload.packet.meta):
+ sub_messages = list(iterator)
+ meta.undo_callback([(message.payload.member, message.payload.global_time, message.payload.packet) for message in sub_messages])
+
+ # notify that global times have changed
+ # meta.community.update_sync_range(meta, [message.payload.global_time for message in sub_messages])
+
+ # this might be a response to a dispersy-missing-sequence
+ self.handle_missing_messages(messages, MissingSequenceCache)
+
+ def create_destroy_community(self, community, degree, sign_with_master=False, store=True, update=True, forward=True):
+ if __debug__:
+ from .community import Community
+ assert isinstance(community, Community)
+ assert isinstance(degree, unicode)
+ assert degree in (u"soft-kill", u"hard-kill")
+
+ meta = community.get_meta_message(u"dispersy-destroy-community")
+ message = meta.impl(authentication=((community.master_member if sign_with_master else community.my_member),),
+ distribution=(community.claim_global_time(),),
+ payload=(degree,))
+
+ # in this special case we need to forward the message before processing it locally.
+ # otherwise the candidate table will have been cleaned and we won't have any destination
+ # addresses.
+ self._forward([message])
+
+ # now store and update without forwarding. forwarding now will result in new entries in our
+ # candidate table that we just cleaned.
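+ # (hence forward=False in the store_update_forward call below)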
+ self.store_update_forward([message], store, update, False)
+ return message
+
+ def on_destroy_community(self, messages):
+ if __debug__:
+ from .community import Community
+
+ # epidemic spread of the destroy message
+ self._forward(messages)
+
+ for message in messages:
+ assert message.name == u"dispersy-destroy-community"
+ logger.debug("%s", message)
+
+ community = message.community
+
+ try:
+ # let the community code clean up first.
+ new_classification = community.dispersy_cleanup_community(message)
+ except Exception:
+ continue
+ assert issubclass(new_classification, Community)
+
+ # community cleanup is done. Now we will clean up the dispersy database.
+
+ if message.payload.is_soft_kill:
+ # soft-kill: The community is frozen. Dispersy will retain the data it has obtained.
+ # However, no messages beyond the global-time of the dispersy-destroy-community message
+ # will be accepted. Responses to dispersy-sync messages will be sent as normal.
+ raise NotImplementedError()
+
+ elif message.payload.is_hard_kill:
+ # hard-kill: The community is destroyed. Dispersy will throw away everything except the
+ # dispersy-destroy-community message and the authorize chain that is required to verify
+ # this message. The community should also remove all its data and clean up as much as
+ # possible.
+
+ # todo: this should be made more efficient. not all dispersy-destroy-community messages
+ # need to be kept. Just the ones in the chain to authorize the message that has just
+ # been received.
+
+ identity_message_id = community.get_meta_message(u"dispersy-identity").database_id
+ packet_ids = set()
+ identities = set()
+
+ # we should not remove our own dispersy-identity message
+ try:
+ packet_id, = self._database.execute(u"SELECT id FROM sync WHERE meta_message = ? AND member = ?", (identity_message_id, community.my_member.database_id)).next()
+ except StopIteration:
+ pass
+ else:
+ identities.add(community.my_member.public_key)
+ packet_ids.add(packet_id)
+
+ # obtain the permission chain
+ todo = [message]
+ while todo:
+ item = todo.pop()
+
+ if item.packet_id not in packet_ids:
+ packet_ids.add(item.packet_id)
+
+ # ensure that we keep the identity message
+ if item.authentication.member.public_key not in identities:
+ identities.add(item.authentication.member.public_key)
+ try:
+ packet_id, = self._database.execute(u"SELECT id FROM sync WHERE meta_message = ? AND member = ?",
+ (identity_message_id, item.authentication.member.database_id)).next()
+ except StopIteration:
+ pass
+ else:
+ packet_ids.add(packet_id)
+
+ # get proofs required for ITEM
+ _, proofs = community._timeline.check(item)
+ todo.extend(proofs)
+
+ # 1. clean up the double_signed_sync table.
+ self._database.execute(u"DELETE FROM double_signed_sync WHERE sync IN (SELECT id FROM sync JOIN double_signed_sync ON sync.id = double_signed_sync.sync WHERE sync.community = ?)", (community.database_id,))
+
+ # 2. clean up the sync table. everything except what we need to tell others this
+ # community is no longer available
+ self._database.execute(u"DELETE FROM sync WHERE community = ? AND id NOT IN (" + u", ".join(u"?" for _ in packet_ids) + ")", [community.database_id] + list(packet_ids))
+
+ # 3. clean up the malicious_proof table.
we need nothing here anymore + self._database.execute(u"DELETE FROM malicious_proof WHERE community = ?", (community.database_id,)) + + self.reclassify_community(community, new_classification) + + def create_dynamic_settings(self, community, policies, sign_with_master=False, store=True, update=True, forward=True): + meta = community.get_meta_message(u"dispersy-dynamic-settings") + message = meta.impl(authentication=((community.master_member if sign_with_master else community.my_member),), + distribution=(community.claim_global_time(), self._claim_master_member_sequence_number(community, meta) if sign_with_master else meta.distribution.claim_sequence_number()), + payload=(policies,)) + self.store_update_forward([message], store, update, forward) + return message + + def on_dynamic_settings(self, community, messages, initializing=False): + assert all(community == message.community for message in messages) + assert isinstance(initializing, bool) + timeline = community.timeline + global_time = community.global_time + changes = {} + + for message in messages: + logger.debug("received %s policy changes", len(message.payload.policies)) + for meta, policy in message.payload.policies: + # TODO currently choosing the range that changed in a naive way, only using the + # lowest global time value + if meta in changes: + range_ = changes[meta] + else: + range_ = [global_time, global_time] + changes[meta] = range_ + range_[0] = min(message.distribution.global_time + 1, range_[0]) + + # apply new policy setting + timeline.change_resolution_policy(meta, message.distribution.global_time, policy, message) + + if not initializing: + logger.debug("updating %d ranges", len(changes)) + execute = self._database.execute + executemany = self._database.executemany + for meta, range_ in changes.iteritems(): + logger.debug("%s [%d:]", meta.name, range_[0]) + undo = [] + redo = [] + + for packet_id, packet, undone in list(execute(u"SELECT id, packet, undone FROM sync WHERE meta_message = ? AND global_time BETWEEN ? 
AND ?", + (meta.database_id, range_[0], range_[1]))): + message = self.convert_packet_to_message(str(packet), community) + if message: + message.packet_id = packet_id + allowed, _ = timeline.check(message) + if allowed and undone: + logger.debug("redo message %s at time %d", message.name, message.distribution.global_time) + redo.append(message) + + elif not (allowed or undone): + logger.debug("undo message %s at time %d", message.name, message.distribution.global_time) + undo.append(message) + + elif __debug__: + logger.debug("no change for message %s at time %d", message.name, message.distribution.global_time) + + if undo: + executemany(u"UPDATE sync SET undone = 1 WHERE id = ?", ((message.packet_id,) for message in undo)) + assert self._database.changes == len(undo), (self._database.changes, len(undo)) + meta.undo_callback([(message.authentication.member, message.distribution.global_time, message) for message in undo]) + + # notify that global times have changed + # meta.community.update_sync_range(meta, [message.distribution.global_time for message in undo]) + + if redo: + executemany(u"UPDATE sync SET undone = 0 WHERE id = ?", ((message.packet_id,) for message in redo)) + assert self._database.changes == len(redo), (self._database.changes, len(redo)) + meta.handle_callback(redo) + + # notify that global times have changed + # meta.community.update_sync_range(meta, [message.distribution.global_time for message in redo]) + + # this might be a response to a dispersy-missing-proof or dispersy-missing-sequence + self.handle_missing_messages(messages, MissingProofCache, MissingSequenceCache) + + def sanity_check(self, community, test_identity=True, test_undo_other=True, test_binary=False, test_sequence_number=True, test_last_sync=True): + """ + Check everything we can about a community. + + Note that messages that are disabled, i.e. not included in community.get_meta_messages(), + will NOT be checked. 
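+
+ Illustrative call (hypothetical usage; sanity_check raises ValueError when an
+ inconsistency is found):
+
+ >>> dispersy.sanity_check(community, test_binary=False)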
+ + - the dispersy-identity for my member must be in the database + - the dispersy-identity must be in the database for each member that has one or more messages in the database + - all packets in the database must be valid + - check sequence numbers for FullSyncDistribution + - check history size for LastSyncDistribution + """ + def select(sql, bindings): + assert isinstance(sql, unicode) + assert isinstance(bindings, tuple) + limit = 1000 + for offset in (i * limit for i in count()): + rows = list(self._database.execute(sql, bindings + (limit, offset))) + if rows: + for row in rows: + yield row + else: + break + + logger.debug("%s start sanity check [database-id:%d]", community.cid.encode("HEX"), community.database_id) + enabled_messages = set(meta.database_id for meta in community.get_meta_messages()) + + if test_identity: + try: + meta_identity = community.get_meta_message(u"dispersy-identity") + except KeyError: + # identity is not enabled + pass + else: + # + # ensure that the dispersy-identity for my member must be in the database + # + try: + member_id, = self._database.execute(u"SELECT id FROM member WHERE mid = ?", (buffer(community.my_member.mid),)).next() + except StopIteration: + raise ValueError("unable to find the public key for my member") + + if not member_id == community.my_member.database_id: + raise ValueError("my member's database id is invalid", member_id, community.my_member.database_id) + + try: + self._database.execute(u"SELECT 1 FROM private_key WHERE member = ?", (member_id,)).next() + except StopIteration: + raise ValueError("unable to find the private key for my member") + + try: + self._database.execute(u"SELECT 1 FROM sync WHERE member = ? AND meta_message = ?", (member_id, meta_identity.database_id)).next() + except StopIteration: + raise ValueError("unable to find the dispersy-identity message for my member") + + logger.debug("my identity is OK") + + # + # the dispersy-identity must be in the database for each member that has one or more + # messages in the database + # + A = set(id_ for id_, in self._database.execute(u"SELECT member FROM sync WHERE community = ? GROUP BY member", (community.database_id,))) + B = set(id_ for id_, in self._database.execute(u"SELECT member FROM sync WHERE meta_message = ?", (meta_identity.database_id,))) + if not len(A) == len(B): + raise ValueError("inconsistent dispersy-identity messages.", A.difference(B)) + + if test_undo_other: + try: + meta_undo_other = community.get_meta_message(u"dispersy-undo-other") + except KeyError: + # undo-other is not enabled + pass + else: + + # + # ensure that we have proof for every dispersy-undo-other message + # + # TODO we are not taking into account that undo messages can be undone + for undo_packet_id, undo_packet_global_time, undo_packet in select(u"SELECT id, global_time, packet FROM sync WHERE community = ? AND meta_message = ? ORDER BY id LIMIT ? OFFSET ?", (community.database_id, meta_undo_other.database_id)): + undo_packet = str(undo_packet) + undo_message = self.convert_packet_to_message(undo_packet, community, verify=False) + + # 10/10/12 Boudewijn: the check_callback is required to obtain the + # message.payload.packet + for _ in undo_message.check_callback([undo_message]): + pass + + # get the message that undo_message refers to + try: + packet, undone = self._database.execute(u"SELECT packet, undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", (community.database_id, undo_message.payload.member.database_id, undo_message.payload.global_time)).next() + except StopIteration: + raise ValueError("found dispersy-undo-other but not the message that it refers to") + packet = str(packet) + message = self.convert_packet_to_message(packet, community, verify=False) + + if not undone: + raise ValueError("found dispersy-undo-other but the message that it refers to is not undone") + + if message.undo_callback is None: + raise ValueError("found dispersy-undo-other but the message that it refers to does not have an undo_callback") + + # get the proof that undo_message is valid + allowed, proofs = community.timeline.check(undo_message) + + if not allowed: + raise ValueError("found dispersy-undo-other that, according to the timeline, is not allowed") + + if not proofs: + raise ValueError("found dispersy-undo-other that, according to the timeline, has no proof") + + logger.debug("dispersy-undo-other packet %d@%d referring %s %d@%d is OK", undo_packet_id, undo_packet_global_time, undo_message.payload.packet.name, undo_message.payload.member.database_id, undo_message.payload.global_time) + + if test_binary: + # + # ensure all packets in the database are valid and that the binary packets are consistent + # with the information stored in the database + # + for packet_id, member_id, global_time, meta_message_id, packet in select(u"SELECT id, member, global_time, meta_message, packet FROM sync WHERE community = ? ORDER BY id LIMIT ? OFFSET ?", (community.database_id,)): + if meta_message_id in enabled_messages: + packet = str(packet) + message = self.convert_packet_to_message(packet, community, verify=True) + + if not message: + raise ValueError("unable to convert packet ", packet_id, "@", global_time, " to message") + + if not member_id == message.authentication.member.database_id: + raise ValueError("inconsistent member in packet ", packet_id, "@", global_time) + + if not message.authentication.member.public_key: + raise ValueError("missing public key for member ", member_id, " in packet ", packet_id, "@", global_time) + + if not global_time == message.distribution.global_time: + raise ValueError("inconsistent global time in packet ", packet_id, "@", global_time) + + if not meta_message_id == message.database_id: + raise ValueError("inconsistent meta message in packet ", packet_id, "@", global_time) + + if not packet == message.packet: + raise ValueError("inconsistent binary in packet ", packet_id, "@", global_time) + + logger.debug("packet %d@%d is OK", packet_id, global_time) + + if test_sequence_number: + for meta in community.get_meta_messages(): + # + # ensure that we have all sequence numbers for FullSyncDistribution packets + # + if isinstance(meta.distribution, FullSyncDistribution) and meta.distribution.enable_sequence_number: + counter = 0 + counter_member_id = 0 + exception = None + for packet_id, member_id, packet in select(u"SELECT id, member, packet FROM sync WHERE meta_message = ? ORDER BY member, global_time LIMIT ? 
OFFSET ?", (meta.database_id,)): + packet = str(packet) + message = self.convert_packet_to_message(packet, community, verify=False) + assert message + + if member_id != counter_member_id: + counter_member_id = member_id + counter = 1 + if exception: + break + + if not counter == message.distribution.sequence_number: + logger.error("%s for member %d has sequence number %d expected %d\n%s", meta.name, member_id, message.distribution.sequence_number, counter, packet.encode("HEX")) + exception = ValueError("inconsistent sequence numbers in packet ", packet_id) + + counter += 1 + + if exception: + raise exception + + if test_last_sync: + for meta in community.get_meta_messages(): + # + # ensure that we have only history-size messages per member + # + if isinstance(meta.distribution, LastSyncDistribution): + if isinstance(meta.authentication, MemberAuthentication): + counter = 0 + counter_member_id = 0 + for packet_id, member_id, packet in select(u"SELECT id, member, packet FROM sync WHERE meta_message = ? ORDER BY member ASC, global_time DESC LIMIT ? OFFSET ?", (meta.database_id,)): + message = self.convert_packet_to_message(str(packet), community, verify=False) + assert message + + if member_id == counter_member_id: + counter += 1 + else: + counter_member_id = member_id + counter = 1 + + if counter > meta.distribution.history_size: + raise ValueError("pruned packet ", packet_id, " still in database") + + logger.debug("LastSyncDistribution for %s is OK", meta.name) + + else: + assert isinstance(meta.authentication, DoubleMemberAuthentication) + for packet_id, member_id, packet in select(u"SELECT id, member, packet FROM sync WHERE meta_message = ? ORDER BY member ASC, global_time DESC LIMIT ? OFFSET ?", (meta.database_id,)): + message = self.convert_packet_to_message(str(packet), community, verify=False) + assert message + + try: + member1, member2 = self._database.execute(u"SELECT member1, member2 FROM double_signed_sync WHERE sync = ?", (packet_id,)).next() + except StopIteration: + raise ValueError("found double signed message without an entry in the double_signed_sync table") + + if not member1 < member2: + raise ValueError("member1 (", member1, ") must always be smaller than member2 (", member2, ")") + + if not (member1 == member_id or member2 == member_id): + raise ValueError("member1 (", member1, ") or member2 (", member2, ") must be the message creator (", member_id, ")") + + logger.debug("LastSyncDistribution for %s is OK", meta.name) + + logger.debug("%s success", community.cid.encode("HEX")) + + def _generic_timeline_check(self, messages): + meta = messages[0].meta + if isinstance(meta.authentication, NoAuthentication): + # we can not timeline.check this message because it uses the NoAuthentication policy + for message in messages: + yield message + + else: + for message in messages: + allowed, proofs = meta.community.timeline.check(message) + if allowed: + yield message + else: + yield DelayMessageByProof(message) + + def _claim_master_member_sequence_number(self, community, meta): + """ + Tries to guess the most recent sequence number used by the master member for META in + COMMUNITY. + + This is a risky method because sequence numbers must be unique, however, we can not + guarantee that two peers do not claim a sequence number for the master member at around the + same time. Unfortunately we can not overcome this problem in a distributed fashion. + + Also note that calling this method twice will give identital values. 
Ensure that the + message is updated locally before claiming another value to ensure different sequence + numbers are used. + """ + assert isinstance(meta.distribution, FullSyncDistribution), "currently only FullSyncDistribution allows sequence numbers" + sequence_number, = self._database.execute(u"SELECT COUNT(*) FROM sync WHERE member = ? AND sync.meta_message = ?", + (community.master_member.database_id, meta.database_id)).next() + return sequence_number + 1 + + def _watchdog(self): + """ + Periodically called to commit database changes to disk. + """ + while True: + try: + # Arno, 2012-07-12: apswtrace detects 7 s commits with yield 5 min, so reduce + yield 60.0 + + # flush changes to disk every 1 minutes + self._database.commit() + + except Exception as exception: + # OperationalError: database is locked + logger.exception("%s", exception) + + def _commit_now(self): + """ + Flush changes to disk. + """ + self._database.commit() + + def start(self): + """ + Starts Dispersy. + + This method is thread safe. + + 1. starts callback + 2. opens database + 3. opens endpoint + """ + + assert not self._callback.is_running, "Must be called before callback.start()" + + def start(): + assert self._callback.is_current_thread, "Must be called from the callback thread" + self._database.open() + self._endpoint.open(self) + self._endpoint_ready() + + # start + logger.info("starting the Dispersy core...") + self._callback.start() + self._callback.call(start) + logger.info("Dispersy core ready (database: %s, port:%d)", self._database.file_path, self._endpoint.get_address()[1]) + return True + + def stop(self, timeout=10.0): + """ + Stops Dispersy. + + This method is thread safe. + + 1. unload all communities + in reverse define_auto_load order, starting with all undefined communities + 2. closes endpoint + 3. closes database + 4. stops callback + """ + assert self._callback.is_running, "Must be called before the callback.stop()" + assert isinstance(timeout, float), type(timeout) + assert 0.0 <= timeout, timeout + + def unload_communities(communities): + for community in communities: + if community.cid in self._communities: + community.unload_community() + + def ordered_unload_communities(): + # unload communities that are not defined + unload_communities([community + for community + in self._communities.itervalues() + if not community.get_classification() in self._auto_load_communities]) + + # unload communities in reverse auto load order + for classification in reversed(self._auto_load_communities): + unload_communities([community + for community + in self._communities.itervalues() + if community.get_classification() == classification]) + + def stop(): + # unload all communities + ordered_unload_communities() + + # stop endpoint + self._endpoint.close(timeout) + + # Murphy tells us that endpoint just added tasks that caused new communities to load + while True: + # because this task has a very low priority, yielding 0.0 will wait until other + # tasks have finished + if timeout > 0.0: + yield 0.0 + + if not (self._batch_cache or self._communities): + break + + logger.debug("Murphy was right! There are %d batches left. 
There are %d communities left", len(self._batch_cache), len(self._communities)) + + # force remove incoming messages + for task_identifier, _, _ in self._batch_cache.itervalues(): + self._callback.unregister(task_identifier) + self._batch_cache.clear() + + # unload all communities + ordered_unload_communities() + + # stop the database + self._database.close() + + + # output statistics before we stop + if logger.isEnabledFor(logging.DEBUG): + self._statistics.update() + logger.debug("\n%s", pformat(self._statistics.get_dict(), width=120)) + + logger.info("stopping the Dispersy core...") + self._callback.call(stop, priority= -512) + self._callback.stop(timeout) + logger.info("Dispersy core stopped") + return True + + def _candidate_walker(self): + """ + Periodically select a candidate and take a step in the network. + """ + walker_communities = self._walker_commmunities + + steps = 0 + start = time() + + # delay will never be less than 0.1, hence we can accommodate 50 communities before the + # interval between each step becomes larger than 5.0 seconds + optimaldelay = max(0.1, 5.0 / len(walker_communities)) + logger.debug("there are %d walker enabled communities. pausing %ss (on average) between each step", len(walker_communities), optimaldelay) + + if __debug__: + RESETS = 0 + STEPS = 0 + START = start + DELAY = 0.0 + for community in walker_communities: + community.__MOST_RECENT_WALK = 0.0 + + for community in walker_communities: + community.__most_recent_sync = 0.0 + + while True: + community = walker_communities.pop(0) + walker_communities.append(community) + + actualtime = time() + allow_sync = community.dispersy_enable_bloom_filter_sync and actualtime - community.__most_recent_sync > 4.5 + logger.debug("previous sync was %.1f seconds ago %s", actualtime - community.__most_recent_sync, "" if allow_sync else "(no sync this cycle)") + if allow_sync: + community.__most_recent_sync = actualtime + + if __debug__: + NOW = time() + OPTIMALSTEPS = (NOW - START) / optimaldelay + STEPDIFF = NOW - community.__MOST_RECENT_WALK + community.__MOST_RECENT_WALK = NOW + logger.debug("%s taking step every %.2fs in %d communities. steps: %d/%d ~%.2f. diff: %.1f. resets: %d", + community.cid.encode("HEX"), DELAY, len(walker_communities), steps, int(OPTIMALSTEPS), (-1.0 if OPTIMALSTEPS == 0.0 else (STEPS / OPTIMALSTEPS)), STEPDIFF, RESETS) + STEPS += 1 + + # walk + assert community.dispersy_enable_candidate_walker + assert community.dispersy_enable_candidate_walker_responses + try: + community.dispersy_take_step(allow_sync) + steps += 1 + except Exception: + logger.exception("%s causes an exception during dispersy_take_step", community.cid.encode("HEX")) + + optimaltime = start + steps * optimaldelay + actualtime = time() + + if optimaltime + 5.0 < actualtime: + # way out of sync! reset start time + logger.warning("can not keep up! resetting walker start time!") + start = actualtime + steps = 0 + self._statistics.walk_reset += 1 + if __debug__: + DELAY = 0.0 + RESETS += 1 + + else: + if __debug__: + DELAY = max(0.0, optimaltime - actualtime) + yield max(0.0, optimaltime - actualtime) + + def _stats_candidates(self): + """ + Periodically logs the number of walk and stumble candidates for all communities. + + Enable this output by enabling INFO logging for a logger named "dispersy-stats-candidates". + + Exception: all PreviewChannelCommunity are filter out of the results. 
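+
+        For example, an application can enable it with (illustrative):
+
+            logging.getLogger("dispersy-stats-candidates").setLevel(logging.INFO)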
+ """ + logger = logging.getLogger("dispersy-stats-candidates") + while logger.isEnabledFor(logging.INFO): + yield 5.0 + logger.info("--- %s:%d (%s:%d) %s", self.lan_address[0], self.lan_address[1], self.wan_address[0], self.wan_address[1], self.connection_type) + for community in sorted(self._communities.itervalues(), key=lambda community: community.cid): + if community.get_classification() == u"PreviewChannelCommunity": + continue + + candidates = sorted(community.dispersy_yield_verified_candidates()) + logger.info(" %s %20s with %d%s candidates[:5] %s", + community.cid.encode("HEX"), community.get_classification(), len(candidates), + "" if community.dispersy_enable_candidate_walker else "*", ", ".join(str(candidate) for candidate in candidates[:5])) + + def _stats_detailed_candidates(self): + """ + Periodically logs a detailed list of all candidates (walk, stumble, intro, none) for all communities. + + Enable this output by enabling INFO logging for a logger named "dispersy-stats-detailed-candidates". + + Exception: all PreviewChannelCommunity are filter out of the results. + """ + logger = logging.getLogger("dispersy-stats-detailed-candidates") + while logger.isEnabledFor(logging.INFO): + yield 5.0 + now = time() + logger.info("--- %s:%d (%s:%d) %s", self.lan_address[0], self.lan_address[1], self.wan_address[0], self.wan_address[1], self.connection_type) + logger.info("walk-attempt %d; success %d; invalid %d; boot-attempt %d; boot-success %d; reset %d", + self._statistics.walk_attempt, + self._statistics.walk_success, + self._statistics.walk_invalid_response_identifier, + self._statistics.walk_bootstrap_attempt, + self._statistics.walk_bootstrap_success, + self._statistics.walk_reset) + logger.info("walk-advice-out-request %d; in-response %d; in-new %d; in-request %d; out-response %d", + self._statistics.walk_advice_outgoing_request, + self._statistics.walk_advice_incoming_response, + self._statistics.walk_advice_incoming_response_new, + self._statistics.walk_advice_incoming_request, + self._statistics.walk_advice_outgoing_response) + + for community in sorted(self._communities.itervalues(), key=lambda community: community.cid): + if community.get_classification() == u"PreviewChannelCommunity": + continue + + categories = {u"walk": [], u"stumble": [], u"intro": [], u"none":[]} + for candidate in community.candidates.itervalues(): + if isinstance(candidate, WalkCandidate): + categories[candidate.get_category(now)].append(candidate) + + logger.info("--- %s %s ---", community.cid.encode("HEX"), community.get_classification()) + logger.info("--- [%2d:%2d:%2d:%2d]", len(categories[u"walk"]), len(categories[u"stumble"]), len(categories[u"intro"]), len(self._bootstrap_candidates)) + + for category, candidates in categories.iteritems(): + aged = [(candidate.age(now), candidate) for candidate in candidates] + for age, candidate in sorted(aged): + logger.info("%4ds %s%s%s %-7s %-13s %s", + min(age, 9999), + "O" if candidate.is_obsolete(now) else " ", + "E" if candidate.is_eligible_for_walk(now) else " ", + "B" if isinstance(candidate, BootstrapCandidate) else " ", + category, + candidate.connection_type, + candidate) diff -Nru tribler-6.2.0/Tribler/dispersy/dispersydatabase.py tribler-6.2.0/Tribler/dispersy/dispersydatabase.py --- tribler-6.2.0/Tribler/dispersy/dispersydatabase.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/dispersydatabase.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,554 @@ +""" +This module provides an interface to the Dispersy database. 
+ +@author: Boudewijn Schoon +@organization: Technical University Delft +@contact: dispersy@frayja.com +""" + +import logging +logger = logging.getLogger(__name__) + +from itertools import groupby + +from .database import Database +from .distribution import FullSyncDistribution + +import sys +if "--apswtrace" in getattr(sys, "argv", []): + from .database import APSWDatabase as Database + + +LATEST_VERSION = 16 + +schema = u""" +CREATE TABLE member( + id INTEGER PRIMARY KEY AUTOINCREMENT, + mid BLOB, -- member identifier (sha1 of public_key) + public_key BLOB, -- member public key + tags TEXT DEFAULT '', -- comma separated tags: store, ignore, and blacklist + UNIQUE(public_key)); +CREATE INDEX member_mid_index ON member(mid); + +CREATE TABLE private_key( + member INTEGER PRIMARY KEY REFERENCES member(id), + private_key BLOB); + +CREATE TABLE community( + id INTEGER PRIMARY KEY AUTOINCREMENT, + master INTEGER REFERENCES member(id), -- master member (permission tree root) + member INTEGER REFERENCES member(id), -- my member (used to sign messages) + classification TEXT, -- community type, typically the class name + auto_load BOOL DEFAULT 1, -- when 1 this community is loaded whenever a packet for it is received + database_version INTEGER DEFAULT """ + str(LATEST_VERSION) + """, + UNIQUE(master)); + +CREATE TABLE meta_message( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + name TEXT, + cluster INTEGER DEFAULT 0, + priority INTEGER DEFAULT 128, + direction INTEGER DEFAULT 1, -- direction used when synching (1 for ASC, -1 for DESC) + UNIQUE(community, name)); + +--CREATE TABLE reference_member_sync( +-- member INTEGER REFERENCES member(id), +-- sync INTEGER REFERENCES sync(id), +-- UNIQUE(member, sync)); + +CREATE TABLE double_signed_sync( + sync INTEGER REFERENCES sync(id), + member1 INTEGER REFERENCES member(id), + member2 INTEGER REFERENCES member(id)); +CREATE INDEX double_signed_sync_index_0 ON double_signed_sync(member1, member2); + +CREATE TABLE sync( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + member INTEGER REFERENCES member(id), -- the creator of the message + global_time INTEGER, + meta_message INTEGER REFERENCES meta_message(id), + undone INTEGER DEFAULT 0, + packet BLOB, + UNIQUE(community, member, global_time)); +CREATE INDEX sync_meta_message_undone_global_time_index ON sync(meta_message, undone, global_time); +CREATE INDEX sync_meta_message_member ON sync(meta_message, member); + +CREATE TABLE malicious_proof( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + member INTEGER REFERENCES name(id), + packet BLOB); + +CREATE TABLE option(key TEXT PRIMARY KEY, value BLOB); +INSERT INTO option(key, value) VALUES('database_version', '""" + str(LATEST_VERSION) + """'); +""" + + +class DispersyDatabase(Database): + if __debug__: + __doc__ = schema + + def check_database(self, database_version): + assert isinstance(database_version, unicode) + assert database_version.isdigit() + assert int(database_version) >= 0 + database_version = int(database_version) + + if database_version == 0: + # setup new database with current database_version + self.executescript(schema) + self.commit() + + else: + # upgrade an older version + + # upgrade from version 1 to version 2 + if database_version < 2: + self.executescript(u""" +ALTER TABLE sync ADD COLUMN priority INTEGER DEFAULT 128; +UPDATE option SET value = '2' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade 
from version 2 to version 3 + if database_version < 3: + self.executescript(u""" +CREATE TABLE malicious_proof( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + user INTEGER REFERENCES name(id), + packet BLOB); +ALTER TABLE sync ADD COLUMN undone BOOL DEFAULT 0; +UPDATE tag SET value = 'blacklist' WHERE key = 4; +UPDATE option SET value = '3' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade from version 3 to version 4 + if database_version < 4: + self.executescript(u""" +-- create new tables + +CREATE TABLE member( + id INTEGER PRIMARY KEY AUTOINCREMENT, + mid BLOB, + public_key BLOB, + tags TEXT DEFAULT '', + UNIQUE(public_key)); +CREATE INDEX member_mid_index ON member(mid); + +CREATE TABLE identity( + community INTEGER REFERENCES community(id), + member INTEGER REFERENCES member(id), + host TEXT DEFAULT '', + port INTEGER DEFAULT -1, + PRIMARY KEY(community, member)); + +CREATE TABLE private_key( + member INTEGER PRIMARY KEY REFERENCES member(id), + private_key BLOB); + +CREATE TABLE new_community( + id INTEGER PRIMARY KEY AUTOINCREMENT, + master INTEGER REFERENCES member(id), + member INTEGER REFERENCES member(id), + classification TEXT, + auto_load BOOL DEFAULT 1, + UNIQUE(master)); + +CREATE TABLE new_reference_member_sync( + member INTEGER REFERENCES member(id), + sync INTEGER REFERENCES sync(id), + UNIQUE(member, sync)); + +CREATE TABLE meta_message( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + name TEXT, + cluster INTEGER DEFAULT 0, + priority INTEGER DEFAULT 128, + direction INTEGER DEFAULT 1, + UNIQUE(community, name)); + +CREATE TABLE new_sync( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + member INTEGER REFERENCES member(id), + global_time INTEGER, + meta_message INTEGER REFERENCES meta_message(id), + undone BOOL DEFAULT 0, + packet BLOB, + UNIQUE(community, member, global_time)); +CREATE INDEX sync_meta_message_index ON new_sync(meta_message); + +CREATE TABLE new_malicious_proof( + id INTEGER PRIMARY KEY AUTOINCREMENT, + community INTEGER REFERENCES community(id), + member INTEGER REFERENCES name(id), + packet BLOB); + +-- populate new tables + +-- no tags have ever been set outside debugging hence we do not upgrade those +INSERT INTO member (id, mid, public_key) SELECT id, mid, public_key FROM user; +INSERT INTO identity (community, member, host, port) SELECT community.id, user.id, user.host, user.port FROM community JOIN user; +INSERT INTO private_key (member, private_key) SELECT member.id, key.private_key FROM key JOIN member ON member.public_key = key.public_key; +INSERT INTO new_community (id, member, master, classification, auto_load) SELECT community.id, community.user, user.id, community.classification, community.auto_load FROM community JOIN user ON user.mid = community.cid; +INSERT INTO new_reference_member_sync (member, sync) SELECT user, sync FROM reference_user_sync; +INSERT INTO new_malicious_proof (id, community, member, packet) SELECT id, community, user, packet FROM malicious_proof ; +""") + + # copy all data from sync and name into new_sync and meta_message + meta_messages = {} + for id, community, name, user, global_time, synchronization_direction, distribution_sequence, destination_cluster, packet, priority, undone in list(self.execute(u"SELECT sync.id, sync.community, name.value, sync.user, sync.global_time, sync.synchronization_direction, sync.distribution_sequence, sync.destination_cluster, sync.packet, 
sync.priority, sync.undone FROM sync JOIN name ON name.id = sync.name")): + + # get or create meta_message id + key = (community, name) + if not key in meta_messages: + self.execute(u"INSERT INTO meta_message (community, name, cluster, priority, direction) VALUES (?, ?, ?, ?, ?)", + (community, name, destination_cluster, priority, -1 if synchronization_direction == 2 else 1)) + meta_messages[key] = self.last_insert_rowid + meta_message = meta_messages[key] + + self.execute(u"INSERT INTO new_sync (community, member, global_time, meta_message, undone, packet) VALUES (?, ?, ?, ?, ?, ?)", + (community, user, global_time, meta_message, undone, packet)) + + self.executescript(u""" +-- drop old tables and entries + +DROP TABLE community; +DROP TABLE key; +DROP TABLE malicious_proof; +DROP TABLE name; +DROP TABLE reference_user_sync; +DROP TABLE sync; +DROP TABLE tag; +DROP TABLE user; + +-- rename replacement tables + +ALTER TABLE new_community RENAME TO community; +ALTER TABLE new_reference_member_sync RENAME TO reference_member_sync; +ALTER TABLE new_sync RENAME TO sync; +ALTER TABLE new_malicious_proof RENAME TO malicious_proof; + +-- update database version +UPDATE option SET value = '4' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade from version 4 to version 5 + if database_version < 5: + self.executescript(u""" +DROP TABLE candidate; +UPDATE option SET value = '5' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade from version 5 to version 6 + if database_version < 6: + self.executescript(u""" +DROP TABLE identity; +UPDATE option SET value = '6' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade from version 6 to version 7 + if database_version < 7: + self.executescript(u""" +DROP INDEX sync_meta_message_index; +CREATE INDEX sync_meta_message_global_time_index ON sync(meta_message, global_time); +UPDATE option SET value = '7' WHERE key = 'database_version'; +""") + self.commit() + + # upgrade from version 7 to version 8 + if database_version < 8: + logger.debug("upgrade database %d -> %d", database_version, 8) + self.executescript(u""" +ALTER TABLE community ADD COLUMN database_version INTEGER DEFAULT 0; +UPDATE option SET value = '8' WHERE key = 'database_version'; +""") + logger.debug("upgrade database %d -> %d (done)", database_version, 8) + self.commit() + + # upgrade from version 8 to version 9 + if database_version < 9: + logger.debug("upgrade database %d -> %d", database_version, 9) + self.executescript(u""" +DROP INDEX IF EXISTS sync_meta_message_global_time_index; +CREATE INDEX IF NOT EXISTS sync_global_time_undone_meta_message_index ON sync(global_time, undone, meta_message); +UPDATE option SET value = '9' WHERE key = 'database_version'; +""") + logger.debug("upgrade database %d -> %d (done)", database_version, 9) + self.commit() + + # upgrade from version 9 to version 10 + if database_version < 10: + logger.debug("upgrade database %d -> %d", database_version, 10) + self.executescript(u""" +DELETE FROM option WHERE key = 'my_wan_ip'; +DELETE FROM option WHERE key = 'my_wan_port'; +UPDATE option SET value = '10' WHERE key = 'database_version'; +""") + self.commit() + logger.debug("upgrade database %d -> %d (done)", database_version, 10) + + # upgrade from version 10 to version 11 + if database_version < 11: + logger.debug("upgrade database %d -> %d", database_version, 11) + # unfortunately the default SCHEMA did not contain + # sync_global_time_undone_meta_message_index but was still using + # 
sync_meta_message_global_time_index in database version 10
+            self.executescript(u"""
+DROP INDEX IF EXISTS sync_meta_message_global_time_index;
+DROP INDEX IF EXISTS sync_global_time_undone_meta_message_index;
+CREATE INDEX sync_meta_message_undone_global_time_index ON sync(meta_message, undone, global_time);
+UPDATE option SET value = '11' WHERE key = 'database_version';
+""")
+            self.commit()
+            logger.debug("upgrade database %d -> %d (done)", database_version, 11)
+
+        # upgrade from version 11 to version 12
+        if database_version < 12:
+            # according to the profiler the dispersy/member.py:201(has_identity) has a
+            # disproportionally long runtime.  this is easily improved using the below index.
+            logger.debug("upgrade database %d -> %d", database_version, 12)
+            self.executescript(u"""
+CREATE INDEX sync_meta_message_member ON sync(meta_message, member);
+UPDATE option SET value = '12' WHERE key = 'database_version';
+""")
+            self.commit()
+            logger.debug("upgrade database %d -> %d (done)", database_version, 12)
+
+        # upgrade from version 12 to version 13
+        if database_version < 13:
+            logger.debug("upgrade database %d -> %d", database_version, 13)
+            # reference_member_sync is a very generic but also expensive way to store
+            # multi-signed messages.  by simplifying the multi-sign into purely double-sign we
+            # can use a less expensive (in terms of query time) table.  note: we simply drop the
+            # table, we assume that there is no data in there since no release has been made
+            # that uses the multi-sign feature
+            self.executescript(u"""
+DROP TABLE reference_member_sync;
+CREATE TABLE double_signed_sync(
+ sync INTEGER REFERENCES sync(id),
+ member1 INTEGER REFERENCES member(id),
+ member2 INTEGER REFERENCES member(id));
+CREATE INDEX double_signed_sync_index_0 ON double_signed_sync(member1, member2);
+UPDATE option SET value = '13' WHERE key = 'database_version';
+""")
+            self.commit()
+            logger.debug("upgrade database %d -> %d (done)", database_version, 13)
+
+        # upgrade from version 13 to version 16
+        if database_version < 16:
+            logger.debug("upgrade database %d -> %d", database_version, 16)
+            # only affects check_community_database
+            self.executescript(u"""UPDATE option SET value = '16' WHERE key = 'database_version';""")
+            self.commit()
+            logger.debug("upgrade database %d -> %d (done)", database_version, 16)
+
+        # upgrade from version 16 to version 17
+        if database_version < 17:
+            # there is no version 17 yet...
+            # logger.debug("upgrade database %d -> %d", database_version, 17)
+            # self.executescript(u"""UPDATE option SET value = '17' WHERE key = 'database_version';""")
+            # self.commit()
+            # logger.debug("upgrade database %d -> %d (done)", database_version, 17)
+            pass
+
+        return LATEST_VERSION
+
+    def check_community_database(self, community, database_version):
+        assert isinstance(database_version, int)
+        assert database_version >= 0
+
+        if database_version < 8:
+            logger.debug("upgrade community %d -> %d", database_version, 8)
+
+            # patch notes:
+            #
+            # - the undone column in the sync table is not a boolean anymore.  instead it points to
+            #   the row id of one of the associated dispersy-undo-own or dispersy-undo-other
+            #   messages
+            #
+            # - we know that Dispersy.create_undo has been called while the member did not have
+            #   permission to do so.  hence, invalid dispersy-undo-other messages have been stored
+            #   in the local database, causing problems with the sync.
these need to be removed + # + updates = [] + deletes = [] + redoes = [] + convert_packet_to_message = community.dispersy.convert_packet_to_message + undo_own_meta = community.get_meta_message(u"dispersy-undo-own") + undo_other_meta = community.get_meta_message(u"dispersy-undo-other") + + progress = 0 + count, = self.execute(u"SELECT COUNT(1) FROM sync WHERE meta_message = ? OR meta_message = ?", (undo_own_meta.database_id, undo_other_meta.database_id)).next() + logger.debug("upgrading %d undo messages", count) + if count > 50: + progress_handlers = [handler("Upgrading database", "Please wait while we upgrade the database", count) for handler in community.dispersy.get_progress_handlers()] + else: + progress_handlers = [] + + for packet_id, packet in list(self.execute(u"SELECT id, packet FROM sync WHERE meta_message = ?", (undo_own_meta.database_id,))): + message = convert_packet_to_message(str(packet), community, verify=False) + if message: + # 12/09/12 Boudewijn: the check_callback is required to obtain the + # message.payload.packet + for _ in message.check_callback([message]): + pass + updates.append((packet_id, message.payload.packet.packet_id)) + + progress += 1 + for handler in progress_handlers: + handler.Update(progress) + + for packet_id, packet in list(self.execute(u"SELECT id, packet FROM sync WHERE meta_message = ?", (undo_other_meta.database_id,))): + message = convert_packet_to_message(str(packet), community, verify=False) + if message: + # 12/09/12 Boudewijn: the check_callback is required to obtain the + # message.payload.packet + for _ in message.check_callback([message]): + pass + allowed, _ = community._timeline.check(message) + if allowed: + updates.append((packet_id, message.payload.packet.packet_id)) + + else: + deletes.append((packet_id,)) + msg = message.payload.packet.load_message() + redoes.append((msg.packet_id,)) + if msg.undo_callback: + try: + # try to redo the message... this may not always be possible now... + msg.undo_callback([(msg.authentication.member, msg.distribution.global_time, msg)], redo=True) + except Exception as exception: + logger.exception("%s", exception) + + progress += 1 + for handler in progress_handlers: + handler.Update(progress) + + for handler in progress_handlers: + handler.Update(progress, "Saving the results...") + + # note: UPDATE first, REDOES second, since UPDATES contains undo items that may have + # been invalid + self.executemany(u"UPDATE sync SET undone = ? WHERE id = ?", updates) + self.executemany(u"UPDATE sync SET undone = 0 WHERE id = ?", redoes) + self.executemany(u"DELETE FROM sync WHERE id = ?", deletes) + + self.execute(u"UPDATE community SET database_version = 8 WHERE id = ?", (community.database_id,)) + self.commit() + + for handler in progress_handlers: + handler.Destroy() + + if database_version < 16: + logger.debug("upgrade community %d -> %d", database_version, 16) + + # patch 14 -> 15 notes: + # + # because of a bug in handling messages with sequence numbers, it was possible for + # messages to be stored in the database with missing sequence numbers. I.e. numbers 1, + # 2, and 5 could be stored leaving 3 and 4 missing. + # + # this results in the problem that the message with sequence number 5 is believed to be + # a message with sequence number 3. resulting in an inconsistent database and an + # inability to correctly handle missing sequence messages and incoming messages with + # specific sequence numbers. 
+            #
+            # we will 'solve' this by removing all messages after a 'gap' occurred in the sequence
+            # numbers.  In our example it will result in the message with sequence number 5 being
+            # removed.
+            #
+            # we choose not to call any undo methods because both the timeline and the votes can
+            # handle the resulting multiple calls to the undo callback.
+            #
+            # patch 15 -> 16 notes:
+            #
+            # because of a bug in handling messages with sequence numbers, it was possible for
+            # messages to be stored in the database with conflicting global time values.  For
+            # example, M@6#1 and M@5#2 could be in the database.
+            #
+            # This could occur when a peer removed the Dispersy database but not the public/private
+            # key files, resulting in a fresh sequence number starting at 1.  Different peers would
+            # store different message combinations.  Incoming message checking incorrectly allowed
+            # this to happen, resulting in many peers consistently dropping messages.
+            #
+            # New rules will ensure all peers converge to the same database content.  However, we do
+            # need to remove the messages that have previously been (incorrectly) accepted.
+            #
+            # The rules are as follows:
+            # - seq(M_i), where i = 1, is the first message in the sequence
+            # - seq(M_j) = seq(M_i) + 1, where i = j - 1
+            # - gt(M_i) < gt(M_j), where i = j - 1
+            # i.e. sequence numbers start at one and increase by exactly one, while the global
+            # times along the sequence must strictly increase.
+
+            # all meta messages that use sequence numbers
+            metas = [meta for meta in community.get_meta_messages() if isinstance(meta.distribution, FullSyncDistribution) and meta.distribution.enable_sequence_number]
+            convert_packet_to_message = community.dispersy.convert_packet_to_message
+
+            progress = 0
+            count = 0
+            deletes = []
+            for meta in metas:
+                i, = next(self.execute(u"SELECT COUNT(*) FROM sync WHERE meta_message = ?", (meta.database_id,)))
+                count += i
+            logger.debug("checking %d sequence number enabled messages [%s]", count, community.cid.encode("HEX"))
+            if count > 50:
+                progress_handlers = [handler("Upgrading database", "Please wait while we upgrade the database", count) for handler in community.dispersy.get_progress_handlers()]
+            else:
+                progress_handlers = []
+
+            for meta in metas:
+                for member_id, iterator in groupby(list(self.execute(u"SELECT id, member, packet FROM sync WHERE meta_message = ? ORDER BY member, global_time", (meta.database_id,))), key=lambda tup: tup[1]):
+                    last_global_time = 0
+                    last_sequence_number = 0
+                    for packet_id, _, packet in iterator:
+
+                        message = convert_packet_to_message(str(packet), community, verify=False)
+                        assert message.authentication.member.database_id == member_id
+                        if (last_sequence_number + 1 == message.distribution.sequence_number and
+                                last_global_time < message.distribution.global_time):
+                            # message is OK
+                            last_sequence_number += 1
+                            last_global_time = message.distribution.global_time
+
+                        else:
+                            deletes.append((packet_id,))
+                            logger.debug("delete id:%d", packet_id)
+
+                        progress += 1
+                        for handler in progress_handlers:
+                            handler.Update(progress)
+
+            for handler in progress_handlers:
+                handler.Update(progress, "Saving the results...")
+
+            logger.debug("will delete %d packets from the database", len(deletes))
+            if deletes:
+                self.executemany(u"DELETE FROM sync WHERE id = ?", deletes)
+                assert len(deletes) == self.changes, [len(deletes), self.changes]
+
+            # we may have removed some undo-other or undo-own messages.  we must ensure that there
+            # are no messages in the database that point to these removed messages
+            updates = list(self.execute(u"""
+SELECT a.id
+FROM sync a
+LEFT JOIN sync b ON a.undone = b.id
+WHERE a.community = ?
AND a.undone > 0 AND b.id is NULL""", (community.database_id,)))
+            if updates:
+                self.executemany(u"UPDATE sync SET undone = 0 WHERE id = ?", updates)
+                assert len(updates) == self.changes, [len(updates), self.changes]
+
+            self.execute(u"UPDATE community SET database_version = 16 WHERE id = ?", (community.database_id,))
+            self.commit()
+
+            for handler in progress_handlers:
+                handler.Destroy()
+
+        return LATEST_VERSION
diff -Nru tribler-6.2.0/Tribler/dispersy/distribution.py tribler-6.2.0/Tribler/dispersy/distribution.py
--- tribler-6.2.0/Tribler/dispersy/distribution.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/distribution.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,329 @@
+from .meta import MetaObject
+
+"""
+Each Privilege can be distributed, usually through the transfer of a message, in different ways.
+These ways are defined by the DistributionMeta object that is associated with the Privilege.
+
+The DistributionMeta associated with the Privilege is used to create a Distribution object that is
+assigned to the Message.
+
+Example: A community has a permission called 'user-name'.  This Permission has the
+LastSyncDistributionMeta object assigned to it.  The LastSyncDistributionMeta object dictates some
+values such as the size and stepping used for the BloomFilter.
+
+Whenever the 'user-name' Permission is used, a LastSyncDistribution object is created.  The
+LastSyncDistribution object holds additional information for this specific message, such as the
+global_time.
+"""
+
+
+class Pruning(MetaObject):
+
+    class Implementation(MetaObject.Implementation):
+
+        def __init__(self, meta, distribution):
+            assert isinstance(distribution, SyncDistribution.Implementation), type(distribution)
+            super(Pruning.Implementation, self).__init__(meta)
+            self._distribution = distribution
+
+        def get_state(self):
+            if self.is_active():
+                return "active"
+            if self.is_inactive():
+                return "inactive"
+            if self.is_pruned():
+                return "pruned"
+            raise RuntimeError("Unable to obtain pruning state")
+
+        def is_active(self):
+            raise NotImplementedError("missing implementation")
+
+        def is_inactive(self):
+            raise NotImplementedError("missing implementation")
+
+        def is_pruned(self):
+            raise NotImplementedError("missing implementation")
+
+
+class NoPruning(Pruning):
+
+    class Implementation(Pruning.Implementation):
+
+        def is_active(self):
+            return True
+
+        def is_inactive(self):
+            return False
+
+        def is_pruned(self):
+            return False
+
+
+class GlobalTimePruning(Pruning):
+
+    class Implementation(Pruning.Implementation):
+
+        @property
+        def inactive_threshold(self):
+            return self._meta.inactive_threshold
+
+        @property
+        def prune_threshold(self):
+            return self._meta.prune_threshold
+
+        def is_active(self):
+            return self._distribution.community.global_time - self._distribution.global_time < self._meta.inactive_threshold
+
+        def is_inactive(self):
+            return self._meta.inactive_threshold <= self._distribution.community.global_time - self._distribution.global_time < self._meta.prune_threshold
+
+        def is_pruned(self):
+            return self._meta.prune_threshold <= self._distribution.community.global_time - self._distribution.global_time
+
+    def __init__(self, inactive, pruned):
+        """
+        Construct a new GlobalTimePruning object.
+
+        INACTIVE is the number at which the message goes from state active to inactive.
+        PRUNED is the number at which the message goes from state inactive to pruned.
+
+        A message has the following states:
+        - active:   current_global_time - message_global_time < INACTIVE
+        - inactive: INACTIVE <= current_global_time - message_global_time < PRUNED
+        - pruned:   PRUNED <= current_global_time - message_global_time
+
+        For example, with INACTIVE = 1000 and PRUNED = 2000 a message created at global time
+        5000 is active while the community global time is below 6000, inactive from 6000 up to
+        (but not including) 7000, and pruned from 7000 onward.
+        """
+        assert isinstance(inactive, int), type(inactive)
+        assert isinstance(pruned, int), type(pruned)
+        assert 0 < inactive < pruned, [inactive, pruned]
+        super(GlobalTimePruning, self).__init__()
+        self._inactive_threshold = inactive
+        self._prune_threshold = pruned
+
+    @property
+    def inactive_threshold(self):
+        return self._inactive_threshold
+
+    @property
+    def prune_threshold(self):
+        return self._prune_threshold
+
+
+class Distribution(MetaObject):
+
+    class Implementation(MetaObject.Implementation):
+
+        def __init__(self, meta, global_time):
+            assert isinstance(meta, Distribution)
+            assert isinstance(global_time, (int, long))
+            assert global_time > 0
+            super(Distribution.Implementation, self).__init__(meta)
+            # the last known global time + 1 (from the user who signed the
+            # message)
+            self._global_time = global_time
+
+        @property
+        def global_time(self):
+            return self._global_time
+
+    def setup(self, message):
+        """
+        Setup is called after the meta message is initially created.
+        """
+        if __debug__:
+            from .message import Message
+            assert isinstance(message, Message)
+
+
+class SyncDistribution(Distribution):
+
+    """
+    Allows gossiping and synchronization of messages throughout the community.
+
+    The PRIORITY value ranges [0:255] where 0 is the lowest priority and 255 the highest.  Any
+    messages that have a priority below 32 will not be synced.  These messages require a mechanism
+    to request missing messages whenever they are needed.
+
+    The PRIORITY was introduced when we found that the dispersy-identity messages are the majority
+    of gossiped messages while very few are actually required.  The dispersy-missing-identity
+    message is used to retrieve an identity whenever it is needed.
+    """
+
+    class Implementation(Distribution.Implementation):
+
+        def __init__(self, meta, global_time):
+            super(SyncDistribution.Implementation, self).__init__(meta, global_time)
+            self._pruning = meta.pruning.Implementation(meta.pruning, self)
+
+        @property
+        def community(self):
+            return self._meta._community
+
+        @property
+        def synchronization_direction(self):
+            return self._meta._synchronization_direction
+
+        @property
+        def synchronization_direction_id(self):
+            return self._meta._synchronization_direction_id
+
+        @property
+        def priority(self):
+            return self._meta._priority
+
+        @property
+        def database_id(self):
+            return self._meta._database_id
+
+        @property
+        def pruning(self):
+            return self._pruning
+
+    def __init__(self, synchronization_direction, priority, pruning=NoPruning()):
+        # note: messages with a high priority value are synced before those with a low priority
+        # value.
+        # note: the priority has precedence over the global_time based ordering.
+        # note: the default priority should be 127, use higher or lower values when needed.
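+        # e.g. (illustrative): a fully synchronised, sequence-numbered message type
+        #   FullSyncDistribution(u"ASC", 128, enable_sequence_number=True)
+        # or a last-message-only type that keeps one message per member:
+        #   LastSyncDistribution(u"DESC", 128, history_size=1)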
+        assert isinstance(synchronization_direction, unicode)
+        assert synchronization_direction in (u"ASC", u"DESC")
+        assert isinstance(priority, int)
+        assert 0 <= priority <= 255
+        assert isinstance(pruning, Pruning), type(pruning)
+        self._synchronization_direction = synchronization_direction
+        self._priority = priority
+        self._current_sequence_number = 0
+        self._pruning = pruning
+#        self._database_id = 0
+
+    @property
+    def community(self):
+        return self._community
+
+    @property
+    def synchronization_direction(self):
+        return self._synchronization_direction
+
+    @property
+    def synchronization_direction_value(self):
+        return -1 if self._synchronization_direction == u"DESC" else 1
+
+    @property
+    def priority(self):
+        return self._priority
+
+    @property
+    def pruning(self):
+        return self._pruning
+
+    # @property
+    # def database_id(self):
+    #     return self._database_id
+
+    def setup(self, message):
+        """
+        Setup is called after the meta message is initially created.
+
+        It is used to ensure that the priority and direction stored in the database match the
+        values defined in the code.
+        """
+        if __debug__:
+            from .message import Message
+            assert isinstance(message, Message)
+
+        # pruning requires information from the community
+        self._community = message.community
+
+        # use cache to avoid database queries
+        assert message.name in message.community.meta_message_cache
+        cache = message.community.meta_message_cache[message.name]
+        if not (cache["priority"] == self._priority and cache["direction"] == self.synchronization_direction_value):
+            message.community.dispersy.database.execute(u"UPDATE meta_message SET priority = ?, direction = ? WHERE id = ?",
+                                                        (self._priority, self.synchronization_direction_value, message.database_id))
+            assert message.community.dispersy.database.changes == 1
+
+
+class FullSyncDistribution(SyncDistribution):
+
+    """
+    Allows gossiping and synchronization of messages throughout the community.
+
+    Sequence numbers can be enabled or disabled per meta-message.  When disabled the sequence number
+    is always zero.  When enabled the claim_sequence_number method can be called to obtain the next
+    sequence number in sequence.
+
+    Currently there is one situation where disabling sequence numbers is required.  This is when the
+    message will be signed by multiple members.  In this case the sequence number is claimed but may
+    not be used (if the other members refuse to add their signature).  This causes a missing
+    sequence message.  This in turn could be solved by creating a placeholder message, however, this
+    is not currently, and may never be, implemented.
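+
+    Example (an illustrative sketch, borrowing the u"flood" message from the
+    tutorial; the distribution tuple carries the global time and the claimed
+    sequence number):
+
+        meta = community.get_meta_message(u"flood")
+        message = meta.impl(authentication=(community.my_member,),
+                            distribution=(community.claim_global_time(),
+                                          meta.distribution.claim_sequence_number()),
+                            payload=("hello",))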
+ """ + class Implementation(SyncDistribution.Implementation): + + def __init__(self, meta, global_time, sequence_number=0): + assert isinstance(sequence_number, (int, long)) + assert (meta._enable_sequence_number and sequence_number > 0) or (not meta._enable_sequence_number and sequence_number == 0), (meta._enable_sequence_number, sequence_number) + super(FullSyncDistribution.Implementation, self).__init__(meta, global_time) + self._sequence_number = sequence_number + + @property + def enable_sequence_number(self): + return self._meta._enable_sequence_number + + @property + def sequence_number(self): + return self._sequence_number + + def __init__(self, synchronization_direction, priority, enable_sequence_number, pruning=NoPruning()): + assert isinstance(enable_sequence_number, bool) + super(FullSyncDistribution, self).__init__(synchronization_direction, priority, pruning) + self._enable_sequence_number = enable_sequence_number + + @property + def enable_sequence_number(self): + return self._enable_sequence_number + + def setup(self, message): + super(FullSyncDistribution, self).setup(message) + if self._enable_sequence_number: + # obtain the most recent sequence number that we have used + self._current_sequence_number, = message.community.dispersy.database.execute(u"SELECT COUNT(1) FROM sync WHERE member = ? AND meta_message = ?", + (message.community.my_member.database_id, message.database_id)).next() + + def claim_sequence_number(self): + assert self._enable_sequence_number + self._current_sequence_number += 1 + return self._current_sequence_number + + +class LastSyncDistribution(SyncDistribution): + + class Implementation(SyncDistribution.Implementation): + + @property + def cluster(self): + return self._meta._cluster + + @property + def history_size(self): + return self._meta._history_size + + def __init__(self, synchronization_direction, priority, history_size, pruning=NoPruning()): + assert isinstance(history_size, int) + assert history_size > 0 + super(LastSyncDistribution, self).__init__(synchronization_direction, priority, pruning) + self._history_size = history_size + + @property + def history_size(self): + return self._history_size + + +class DirectDistribution(Distribution): + + class Implementation(Distribution.Implementation): + pass + + +class RelayDistribution(Distribution): + + class Implementation(Distribution.Implementation): + pass diff -Nru tribler-6.2.0/Tribler/dispersy/doc/filter_code.py tribler-6.2.0/Tribler/dispersy/doc/filter_code.py --- tribler-6.2.0/Tribler/dispersy/doc/filter_code.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/doc/filter_code.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,31 @@ +#!/usr/bin/env python + +import sys + +def extract(begin, end): + filter_ = True + for line in sys.stdin: + if line.startswith(begin): + filter_ = False + + elif line.startswith(end): + if not filter_: + print + filter_ = True + + elif not filter_: + print line.rstrip() + +def main(): + mapping = {"python":("#+BEGIN_SRC python", "#+END_SRC"), + "sh":("#+BEGIN_SRC sh", "#+END_SRC")} + + if len(sys.argv) >= 2 and sys.argv[1] in mapping: + begin, end = mapping[sys.argv[1]] + extract(begin, end) + + else: + print "Usage:", sys.argv[0], "[python|sh]" + +if __name__ == "__main__": + main() diff -Nru tribler-6.2.0/Tribler/dispersy/doc/tutorial-part1.org tribler-6.2.0/Tribler/dispersy/doc/tutorial-part1.org --- tribler-6.2.0/Tribler/dispersy/doc/tutorial-part1.org 1970-01-01 00:00:00.000000000 +0000 +++ 
tribler-6.2.0/Tribler/dispersy/doc/tutorial-part1.org 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,299 @@
+* Introduction
+Dispersy is a library that offers a simple interface to synchronise
+data between possibly millions of people, using a peer-to-peer
+overlay.
+
+This first tutorial will show how to create a basic overlay using
+Dispersy.  Our goal is to emulate an overlay containing multiple
+peers, where each peer will inject new data, i.e. messages, that will
+be distributed between all peers in the overlay.
+
+The code samples are written such that, when placed sequentially in a
+Python file, they will run the described emulation.  The code samples
+can be automatically extracted using [[filter_code.py][filter_code.py]]
+by running: =cat tutorial-part1.org | python filter_code.py python >
+tutorial-part1.py=.
+
+* Flood community
+#+BEGIN_SRC python
+import struct
+import time
+
+from dispersy.authentication import MemberAuthentication
+from dispersy.callback import Callback
+from dispersy.community import Community
+from dispersy.conversion import DefaultConversion, BinaryConversion
+from dispersy.destination import CommunityDestination
+from dispersy.dispersy import Dispersy
+from dispersy.distribution import FullSyncDistribution
+from dispersy.endpoint import StandaloneEndpoint
+from dispersy.message import Message, DropPacket, BatchConfiguration
+from dispersy.payload import Payload
+from dispersy.resolution import PublicResolution
+
+class FloodCommunity(Community):
+    def __init__(self, dispersy, master_member):
+        super(FloodCommunity, self).__init__(dispersy, master_member)
+        self.message_received = 0
+
+    def initiate_conversions(self):
+        return [DefaultConversion(self), FloodConversion(self)]
+#+END_SRC
+
+The FloodCommunity class contains the design of our emulation.  The
+constructor provides the community with the Dispersy instance, see
+[[../dispersy.py][dispersy.py]], and the =master_member=.  The master
+member is a Member or DummyMember instance, see [[../member.py][member.py]],
+and uniquely identifies this overlay.  We will cover this in the
+[[tutorial-part2.org][next tutorial]].
+
+Every message in Dispersy contains version information.  The
+conversion instances returned by =initiate_conversions= list the
+versions that the community is able to read.  By default, the last
+item in the list is used when we create new messages.
+
+=DefaultConversion= uses version zero; this conversion must be
+available to decode messages from the Dispersy trackers.
+=FloodConversion= uses version one, see [[#conversion][section conversion]], and allows
+us to encode and decode the messages that are created during the
+emulation.
+
+** Message definitions
+#+BEGIN_SRC python
+    def initiate_meta_messages(self):
+        return [Message(self,
+                        u"flood",
+                        MemberAuthentication(encoding="sha1"),
+                        PublicResolution(),
+                        FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128),
+                        CommunityDestination(node_count=22),
+                        FloodPayload(),
+                        self.check_flood,
+                        self.on_flood,
+                        batch=BatchConfiguration(3.0))]
+#+END_SRC
+
+Given the simple nature of this tutorial, most of the code consists of
+the design of the flood message and its handling.
+=initiate_meta_messages= returns a list of Message instances.  In our
+case, just a single message called "flood".  Each message is given
+policies that decide how this message will behave:
+
+- =MemberAuthentication= ensures that every flood message is signed
+  using the cryptographic key of the author.
Setting the encoding to
+  'sha1' adds the sha1 digest of the author's public key into every
+  message; Dispersy will automatically retrieve missing public keys.
+  See [[../authentication.py][authentication.py]] for more options.
+
+- =PublicResolution= ensures that everyone is allowed to create this
+  message.  See [[../resolution.py][resolution.py]] for more options.
+
+- =FullSyncDistribution= ensures that every message is distributed to
+  every peer in the overlay.  Note that each overlay is identified by
+  a 20 byte binary string, hence a virtually unlimited number of
+  distinct overlays can exist at any time.
+
+  When enabled, =enable_sequence_number= will include a sequence
+  number in a message.  The first message created by someone will have
+  number one.  Every subsequent message, created by the same person,
+  will have its sequence number incremented by one.  Dispersy will
+  process all messages in sequence order, ensuring that no messages
+  are missed.  Note that every message for every person in each
+  overlay has its own sequence.
+
+  The FullSyncDistribution policy uses bloom filters to efficiently
+  find messages that are missing (i.e. pull mechanism), resulting in
+  low bandwidth overhead.  =synchronization_direction= and =priority=
+  influence the way that the synchronisation is performed.  See
+  [[../distribution.py][distribution.py]] for more options.
+
+- =CommunityDestination= ensures that the message will be distributed
+  to everyone in the community.  =node_count= determines the number of
+  people that will receive the message when it is created (i.e. push
+  mechanism).  See [[../destination.py][destination.py]] for more options.
+
+- =FloodPayload= describes the community-specific payload.  In our
+  tutorial the payload is a generated text string.  The [[#payload][payload section]]
+  will explain how the payload is defined.
+
+- =check_flood= and =on_flood= are called when flood messages are
+  received.  The [[#message-handling][message handling]] section explains how messages are
+  made and processed.
+
+- And finally, =BatchConfiguration(3.0)= groups all incoming flood
+  messages that arrived within 3 seconds of each other, allowing us to
+  process them at the same time.
+
+** Message handling
+#+BEGIN_SRC python
+    def create_flood(self, count):
+        meta = self.get_meta_message(u"flood")
+        messages = [meta.impl(authentication=(self.my_member,),
+                              distribution=(self.claim_global_time(),),
+                              payload=("flood #%d" % i,))
+                    for i
+                    in xrange(count)]
+        self.dispersy.store_update_forward(messages, True, True, True)
+
+    def check_flood(self, messages):
+        for message in messages:
+            yield message
+
+    def on_flood(self, messages):
+        self.message_received += len(messages)
+        print "received %d messages (%d in batch)" % (self.message_received, len(messages))
+#+END_SRC
+
+Three things must be defined for each Dispersy message: creation,
+verification, and handling.
+
+The =create_flood= method first retrieves the Message instance that
+describes the flood message.  This is the instance that we returned in
+the [[#message-definitions][previous section]].  To get our actual
+message we need to /implement/ this meta message by providing it with
+the author, the current time, and the payload.
+
+- The author is =self.my_member=.  This is the Member instance
+  containing the cryptographic key that we use to identify ourselves.
+
+- The current time is incremented and returned by
+  =self.claim_global_time()=.  The global time of an overlay is
+  implemented as a Lamport clock (i.e.
a counter that is progressively
+  incremented as new messages are created and received).
+
+- Finally, the payload for our message is a simple text with an
+  increasing number for each message created.
+
+When one or more new messages are received (Dispersy ensures that no
+duplicate messages are ever passed to either =check_flood= or
+=on_flood=), they are first passed to =check_flood=.  When a message
+is invalid it can be (1) dropped by yielding a =DropMessage= instance,
+or (2) delayed by yielding a =DelayMessage= instance when it depends
+on something not yet available, or (3) accepted by yielding the
+message itself.  In our case all messages are accepted.
+
+All valid messages that are ready to be processed are passed to the
+=on_flood= method.  We will simply print the number of messages
+received.
+
+** Payload
+#+BEGIN_SRC python
+class FloodPayload(Payload):
+    class Implementation(Payload.Implementation):
+        def __init__(self, meta, data):
+            super(FloodPayload.Implementation, self).__init__(meta)
+            self.data = data
+#+END_SRC
+
+The FloodPayload class is part of the (meta) Message implementation,
+and hence it contains the overlay-specific payload settings that we
+want all flood messages to follow.  In this case there are no such
+settings.
+
+The FloodPayload.Implementation class describes what an actual message
+can contain, i.e. one message may contain a single data string.  When
+a message is received this data string is available at
+=message.payload.data=.
+
+** Conversion
+#+BEGIN_SRC python
+class FloodConversion(BinaryConversion):
+    def __init__(self, community):
+        super(FloodConversion, self).__init__(community, "\x01")
+        self.define_meta_message(chr(1), community.get_meta_message(u"flood"), self._encode_flood, self._decode_flood)
+
+    def _encode_flood(self, message):
+        return struct.pack("!L", len(message.payload.data)), message.payload.data
+
+    def _decode_flood(self, placeholder, offset, data):
+        if len(data) < offset + 4:
+            raise DropPacket("Insufficient packet size")
+        data_length, = struct.unpack_from("!L", data, offset)
+        offset += 4
+
+        if len(data) < offset + data_length:
+            raise DropPacket("Insufficient packet size")
+        data_payload = data[offset:offset + data_length]
+        offset += data_length
+
+        return offset, placeholder.meta.payload.implement(data_payload)
+#+END_SRC
+
+The FloodConversion class handles the conversion between the
+Message.Implementation instances used in the code and the binary
+string representation on the wire.
+
+TODO: explain ="\x01"= and =define_meta_message=
+
+The =_encode_flood= method must return a tuple containing one or more
+strings.  For our message, we add the length and value of the
+=payload.data= field.
+
+The =_decode_flood= method must return the new offset and a
+FloodPayload.Implementation instance.  =placeholder= contains
+everything that has been decoded so far, =data= contains the entire
+message as a string, and =offset= is the index of the first character
+in =data= where the payload starts.
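+
+The framing itself can be exercised outside Dispersy.  The sketch below
+(illustrative, using only the standard =struct= module) round-trips the
+same length-prefixed encoding that =_encode_flood= and =_decode_flood=
+implement:
+
+#+BEGIN_SRC python
+import struct
+
+def encode(data):
+    # prefix the payload with its length, as _encode_flood does
+    return struct.pack("!L", len(data)) + data
+
+def decode(buf, offset=0):
+    # read the length, then the payload, as _decode_flood does
+    if len(buf) < offset + 4:
+        raise ValueError("insufficient packet size")
+    data_length, = struct.unpack_from("!L", buf, offset)
+    offset += 4
+    if len(buf) < offset + data_length:
+        raise ValueError("insufficient packet size")
+    return offset + data_length, buf[offset:offset + data_length]
+
+packet = encode("flood #1")
+offset, payload = decode(packet)
+assert payload == "flood #1"
+#+END_SRC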
+
+* Putting it all together
+#+BEGIN_SRC python
+def join_flood_overlay(dispersy):
+    master_member = dispersy.get_temporary_member_from_id("-FLOOD-OVERLAY-HASH-")
+    my_member = dispersy.get_new_member()
+    return FloodCommunity.join_community(dispersy, master_member, my_member)
+
+def main():
+    callback = Callback()
+    endpoint = StandaloneEndpoint(10000)
+    dispersy = Dispersy(callback, endpoint, u".", u":memory:")
+    dispersy.start()
+    print "Dispersy is listening on port %d" % dispersy.lan_address[1]
+
+    community = callback.call(join_flood_overlay, (dispersy,))
+    callback.register(community.create_flood, (100,), delay=10.0)
+
+    try:
+        while callback.is_running:
+            time.sleep(5.0)
+
+            if community.message_received >= 10 * 100:
+                time.sleep(60.0)
+                break
+
+    except KeyboardInterrupt:
+        print "shutdown"
+
+    finally:
+        dispersy.stop()
+
+if __name__ == "__main__":
+    main()
+#+END_SRC
+
+Now that we have our community implemented, we must start Dispersy and join the overlay. To start Dispersy we need to give it a thread to run on and a UDP socket to listen on; these are provided by =Callback()= and =StandaloneEndpoint(...)=, respectively.
+
+We instruct Dispersy to use the current working directory to store any files, and to use a =:memory:= SQLite database. The following =dispersy.start()= will start the callback thread, bind to an available UDP port, and create the database.
+
+Next, =callback.call(...)= runs =join_flood_overlay= on the callback thread, where it creates the =master_member= that uniquely identifies this overlay, the =member= that identifies this peer, and the =community= itself. Finally, =callback.register(...)= schedules =create_flood= to run ten seconds later, thereby giving the peers in the overlay something to gossip about.
+
+#+BEGIN_SRC sh
+for (( PEER=1; PEER<10; PEER++ )); do
+    python -O tutorial-part1.py &
+done
+wait
+#+END_SRC
+
+With the above shell script we can run multiple peers at the same time. Once all expected (i.e. 10 * 100) messages have been received, the peers will stay online for a little while to distribute messages to other peers.
diff -Nru tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.org tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.org
--- tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.org 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.org 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,715 @@
+#+TITLE: Dispersy wire protocol\\version 1.3.1
+#+OPTIONS: toc:nil ^:nil author:nil
+#+LATEX_HEADER: \usepackage{enumitem}
+#+LATEX_HEADER: \setlist{nolistsep}
+
+# This document is written using orgmode.
+# Allowing easy text editing and export to various formats.
+
+* Introduction
+This document describes the Dispersy on-the-wire message protocol and its intended behavior.
+
+All values are big endian encoded.
+
+** 30/05/2012 version 1.0
+Initial public release.
+
+** ??/??/2012 version 1.1
+- added tunnel bit to dispersy-introduction-request
+- added tunnel bit to dispersy-introduction-response
+
+** 12/10/2012 version 1.3
+- added dispersy-signature-request message
+- added dispersy-signature-response message
+
+** 18/07/2013 version 1.3.1
+- fixes incorrect wire protocol documentation
+
+* <<dispersy-identity>> (#248)
+Contains the public key for a single member. This message is the response to a dispersy-missing-identity request.
+
+The dispersy-identity message is not disseminated through bloom filter synchronization. Furthermore, only the dispersy-identity message with the highest global time per member is used.
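+
+As an aside, every message in the tables that follow starts with the same fixed header: two version bytes, the 20 byte community identifier, and the message identifier, all big endian. A minimal sketch of reading it with Python's struct module; this is an illustration only, not the reference parser:
+
+#+BEGIN_SRC python
+import struct
+
+def parse_common_header(packet):
+    # dispersy version and community version: one byte each
+    dispersy_version, community_version = struct.unpack_from("!BB", packet, 0)
+    community_id = packet[2:22]  # 20 byte community identifier
+    message_id, = struct.unpack_from("!B", packet, 22)
+    return dispersy_version, community_version, community_id, message_id
+#+END_SRC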
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f8    | unsigned char      | message identifier   |
+| 2     |       | unsigned short     | public key length    |
+|       |       | char[]             | public key           |
+| 8     |       | unsigned long long | global time          |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+** possible future changes
+Historically the most recent IP address of a member was stored in the payload of its dispersy-identity message. This required the message to be stored and have a signature. Since this is no longer the case, the message can be simplified by replacing dispersy-identity with a non-signed response to a dispersy-missing-identity message.
+
+* <<dispersy-authorize>> (#234)
+Grants one or more permissions. This message can be the response to a dispersy-missing-proof request. (TODO: reference a document describing the permission system.)
+
+The dispersy-authorize message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-authorize message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-authorize messages out of order. A dispersy-authorize message can not be undone.
+
+|----+-------+-------+--------------------+------------------------|
+| +  | BYTES | VALUE | C-TYPE             | DESCRIPTION            |
+|----+-------+-------+--------------------+------------------------|
+|    | 1     | 00    | unsigned char      | dispersy version       |
+|    | 1     | 01    | unsigned char      | community version      |
+|    | 20    |       | char[]             | community identifier   |
+|    | 1     | f3    | unsigned char      | message identifier     |
+|    | 20    |       | char[]             | member identifier      |
+|    | 8     |       | unsigned long long | global time            |
+|    | 4     |       | unsigned long      | sequence number        |
+| +  | 2     |       | unsigned short     | public key length      |
+| +  |       |       | char[]             | public key             |
+| +  | 1     |       | unsigned char      | permission pair length |
+| ++ | 1     |       | unsigned char      | message identifier     |
+| ++ | 1     |       | unsigned char      | permission bits        |
+|    |       |       | char[]             | signature              |
+|----+-------+-------+--------------------+------------------------|
+
+The dispersy-authorize message payload contains repeating elements. One or more public key length, public key, permission pair length pairs may be given. Each of these pairs has one or more message identifier, permission bits pairs.
+
+The permission bits are defined as follows:
+- 0000.0001 grants the 'permit' permission
+- 0000.0010 grants the 'authorize' permission
+- 0000.0100 grants the 'revoke' permission
+- 0000.1000 grants the 'undo' permission
+
+** possible future changes
+Currently the permissions are granted on global times after the dispersy-authorize message was created. To improve flexibility a global time value can be included in this message that describes another global time from where the permission applies.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-revoke>> (#242)
+Revokes one or more permissions. This message can be the response to a dispersy-missing-proof request. (TODO: reference a document describing the permission system.)
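+
+The permission bits above (and the matching revoking bits of dispersy-revoke below) are plain bit flags that can be tested with masking. A minimal sketch; the constant names are illustrative only, the specification fixes just the bit positions:
+
+#+BEGIN_SRC python
+# illustrative names; the spec only defines the bit positions
+PERMIT, AUTHORIZE, REVOKE, UNDO = 0x01, 0x02, 0x04, 0x08
+
+def granted(permission_bits, permission):
+    return bool(permission_bits & permission)
+
+assert granted(0x03, PERMIT) and granted(0x03, AUTHORIZE)
+assert not granted(0x03, UNDO)
+#+END_SRC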
+
+The dispersy-revoke message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-revoke message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-revoke messages out of order. A dispersy-revoke message can not be undone.
+
+|----+-------+-------+--------------------+------------------------|
+| +  | BYTES | VALUE | C-TYPE             | DESCRIPTION            |
+|----+-------+-------+--------------------+------------------------|
+|    | 1     | 00    | unsigned char      | dispersy version       |
+|    | 1     | 01    | unsigned char      | community version      |
+|    | 20    |       | char[]             | community identifier   |
+|    | 1     | f2    | unsigned char      | message identifier     |
+|    | 20    |       | char[]             | member identifier      |
+|    | 8     |       | unsigned long long | global time            |
+|    | 4     |       | unsigned long      | sequence number        |
+| +  | 2     |       | unsigned short     | public key length      |
+| +  |       |       | char[]             | public key             |
+| +  | 1     |       | unsigned char      | permission pair length |
+| ++ | 1     |       | unsigned char      | message identifier     |
+| ++ | 1     |       | unsigned char      | permission bits        |
+|    |       |       | char[]             | signature              |
+|----+-------+-------+--------------------+------------------------|
+
+The dispersy-revoke message payload contains repeating elements. One or more public key length, public key, permission pair length pairs may be given. Each of these pairs has one or more message identifier, permission bits pairs.
+
+The permission bits are defined as follows:
+- 0000.0001 revokes the 'permit' permission
+- 0000.0010 revokes the 'authorize' permission
+- 0000.0100 revokes the 'revoke' permission
+- 0000.1000 revokes the 'undo' permission
+
+** possible future changes
+Currently the permissions are revoked on global times after the dispersy-revoke message was created. To improve flexibility a global time value can be included in this message that describes another global time from where the revocation applies.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-undo-own>> (#238)
+Marks an older message with an undone flag. This allows a member to undo her own previously created message. Undo messages can only be created for messages that have an undo defined for them.
+
+The dispersy-undo-own message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-undo-own message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-undo-own messages out of order. A dispersy-undo-own message can not be undone.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | ee    | unsigned char      | message identifier   |
+| 20    |       | char[]             | member identifier    |
+| 8     |       | unsigned long long | global time          |
+| 4     |       | unsigned long      | sequence number      |
+| 8     |       | unsigned long long | target global time   |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+The dispersy-undo-own message contains a target global time which, together with the community identifier and the member identifier, uniquely identifies the message that is being undone.
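+
+For illustration, a hedged sketch of how the fixed-size fields in the table above could be serialized with Python's struct module. The function name and the bare concatenation are assumptions made for this example; the trailing signature is omitted:
+
+#+BEGIN_SRC python
+import struct
+
+def pack_undo_own_payload(community_id, member_id, global_time,
+                          sequence_number, target_global_time):
+    # header: dispersy/community versions 00/01, 20 byte community
+    # identifier, message identifier 0xee
+    header = struct.pack("!BB20sB", 0, 1, community_id, 0xee)
+    body = struct.pack("!20sQLQ", member_id, global_time,
+                       sequence_number, target_global_time)
+    return header + body  # the signature is appended afterwards
+#+END_SRC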
+
+To impose a limit on the number of dispersy-undo-own messages that can be created, a dispersy-undo-own message may only be accepted when the message that it points to is available and no dispersy-undo-own has yet been created for it.
+
+** possible future changes
+The synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-undo-other>> (#237)
+Marks an older message with an undone flag. This allows a member to undo a message made by someone else. Undo messages can only be created for messages that have an undo defined for them.
+
+The dispersy-undo-other message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-undo-other message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-undo-other messages out of order. A dispersy-undo-other message can not be undone.
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+| 1     | 00    | unsigned char      | dispersy version         |
+| 1     | 01    | unsigned char      | community version        |
+| 20    |       | char[]             | community identifier     |
+| 1     | ed    | unsigned char      | message identifier       |
+| 20    |       | char[]             | member identifier        |
+| 8     |       | unsigned long long | global time              |
+| 4     |       | unsigned long      | sequence number          |
+| 2     |       | unsigned short     | target public key length |
+|       |       | char[]             | target public key        |
+| 8     |       | unsigned long long | target global time       |
+|       |       | char[]             | signature                |
+|-------+-------+--------------------+--------------------------|
+
+The dispersy-undo-other message contains a target public key and target global time which, together with the community identifier, uniquely identifies the message that is being undone.
+
+A dispersy-undo-other message may only be accepted when the message that it points to is available. In contrast to a dispersy-undo-own message, it is allowed to have multiple dispersy-undo-other messages targeting the same message. To impose a limit on the number of dispersy-undo-other messages that can be created, a member must have an undo permission for the target message.
+
+** possible future changes
+The synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-dynamic-settings>> (#236)
+Changes one or more message policies. When a message has two or more policies of a specific type defined, e.g. both PublicResolution and LinearResolution, the dispersy-dynamic-settings message switches between them.
+
+The dispersy-dynamic-settings message is disseminated through bloom filter synchronization in descending global time order with priority 191. Each dispersy-dynamic-settings message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-dynamic-settings messages out of order. A dispersy-dynamic-settings message can not be undone.
+
+|---+-------+-------+--------------------+---------------------------|
+| + | BYTES | VALUE | C-TYPE             | DESCRIPTION               |
+|---+-------+-------+--------------------+---------------------------|
+|   | 1     | 00    | unsigned char      | dispersy version          |
+|   | 1     | 01    | unsigned char      | community version         |
+|   | 20    |       | char[]             | community identifier      |
+|   | 1     | ec    | unsigned char      | message identifier        |
+|   | 20    |       | char[]             | member identifier         |
+|   | 8     |       | unsigned long long | global time               |
+|   | 4     |       | unsigned long      | sequence number           |
+| + | 1     |       | unsigned char      | target message identifier |
+| + | 1     | 72    | char               | target policy type        |
+| + | 1     |       | unsigned char      | target policy index       |
+|   |       |       | char[]             | signature                 |
+|---+-------+-------+--------------------+---------------------------|
+
+The target policy type is currently always HEX 72. This equates to the character 'r', i.e. resolution policy, which is currently the only policy type that supports dynamic settings. The target policy index indicates the index of the new policy in the list of predefined policies. The policy change is applied from the next global time after the global time given by the dispersy-dynamic-settings message.
+
+** possible future changes
+Currently it is only possible to switch between PublicResolution and LinearResolution policies. Switching between other policies should also be implemented.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-destroy-community>> (#244)
+Forces an overlay to go offline. An overlay can be either soft killed or hard killed.
+
+A soft killed overlay is frozen. All the currently available data will be kept; however, messages with a global time that is higher than the global time of the dispersy-destroy-community message will be refused. Responses to dispersy-introduction-request messages will be sent as normal. Currently soft killing an overlay is not supported.
+
+A hard killed overlay is destroyed. All messages will be removed, except the dispersy-destroy-community message and the authorize chain that is required to verify its validity.
+
+The dispersy-destroy-community message is disseminated through bloom filter synchronization in ascending global time order with priority 192. A dispersy-destroy-community message can not be undone. Hence it is very important to ensure that only trusted peers have the permission to create this message.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f4    | unsigned char      | message identifier   |
+| 20    |       | char[]             | member identifier    |
+| 8     |       | unsigned long long | global time          |
+| 1     |       | char               | degree (soft/hard)   |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+The kill degree can be either soft (HEX 73, i.e. character 's') or hard (HEX 68, i.e. character 'h').
+
+** possible future changes
+Implement the soft killed strategy.
+
+* <<dispersy-signature-request>> (#252)
+Requests a signature for an included message. The included message may be modified before adding the signature. May respond with a dispersy-signature-response message.
+
+The dispersy-signature-request message is not disseminated through bloom filter synchronization. Instead it is created whenever a double signed message is required.
+
+|-------+-------+----------------+----------------------|
+| BYTES | VALUE | C-TYPE         | DESCRIPTION          |
+|-------+-------+----------------+----------------------|
+| 1     | 00    | unsigned char  | dispersy version     |
+| 1     | 01    | unsigned char  | community version    |
+| 20    |       | char[]         | community identifier |
+| 1     | fc    | unsigned char  | message identifier   |
+| 2     |       | unsigned short | request identifier   |
+|       |       | char[]         | message              |
+|-------+-------+----------------+----------------------|
+
+The request identifier must be part of the dispersy-signature-response. The message must be a valid dispersy message except that both signatures must be set to null bytes.
+
+** version 1.2
+The dispersy-signature-request message was added.
+
+* <<dispersy-signature-response>> (#251)
+Response to a dispersy-signature-request message. The included message may have been modified from the message in the request.
+
+The dispersy-signature-response message is not disseminated through bloom filter synchronization. Instead it is created whenever a double signed message is required.
+
+|-------+-------+----------------+----------------------|
+| BYTES | VALUE | C-TYPE         | DESCRIPTION          |
+|-------+-------+----------------+----------------------|
+| 1     | 00    | unsigned char  | dispersy version     |
+| 1     | 01    | unsigned char  | community version    |
+| 20    |       | char[]         | community identifier |
+| 1     | fb    | unsigned char  | message identifier   |
+| 2     |       | unsigned short | response identifier  |
+|       |       | char[]         | message              |
+|-------+-------+----------------+----------------------|
+
+The response identifier must be equal to the request identifier of the dispersy-signature-request message. The message must be a valid dispersy message except that only the sender's signature is set while the receiver's signature must be set to null bytes.
+
+** version 1.2
+The dispersy-signature-response message was added.
+
+* <<dispersy-introduction-request>> (#246)
+The dispersy-introduction-request message is part of the semi-random walker. It asks the destination peer to introduce the source peer to a semi-random neighbor. Sending this request should result in a dispersy-introduction-response to the sender and a [[dispersy-puncture-request]] to the semi-random neighbor. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-introduction-request message is not disseminated through bloom filter synchronization. Instead it is periodically created to maintain a semi-random overlay.
+
+|---+-------+-------+--------------------+-----------------------------|
+| + | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+|---+-------+-------+--------------------+-----------------------------|
+|   | 1     | 00    | unsigned char      | dispersy version            |
+|   | 1     | 01    | unsigned char      | community version           |
+|   | 20    |       | char[]             | community identifier        |
+|   | 1     | f6    | unsigned char      | message identifier          |
+|   | 20    |       | char[]             | member identifier           |
+|   | 8     |       | unsigned long long | global time                 |
+|   | 6     |       | char[]             | destination address         |
+|   | 6     |       | char[]             | source LAN address          |
+|   | 6     |       | char[]             | source WAN address          |
+|   | 1     |       | unsigned char      | option bits                 |
+|   | 2     |       | unsigned short     | request identifier          |
+| + | 8     |       | unsigned long long | sync global time low        |
+| + | 8     |       | unsigned long long | sync global time high       |
+| + | 2     |       | unsigned short     | sync modulo                 |
+| + | 2     |       | unsigned short     | sync offset                 |
+| + | 1     |       | unsigned char      | sync bloom filter functions |
+| + | 2     |       | unsigned short     | sync bloom filter size      |
+| + | 1     |       | unsigned char      | sync bloom filter prefix    |
+| + |       |       | char[]             | sync bloom filter           |
+|   |       |       | char[]             | signature                   |
+|---+-------+-------+--------------------+-----------------------------|
+
+The option bits are defined as follows:
+- 0000.0001 request an introduction
+- 0000.0010 request contains optional sync bloom filter
+- 0000.0100 source is behind a tunnel
+- 0000.1000 source connection type
+- 1000.0000 source has a public address
+- 1100.0000 source is behind a symmetric NAT
+
+The dispersy-introduction-request message contains optional elements. When the 'request contains optional sync bloom filter' bit is set, all of the sync fields must be given. In this case the destination peer should respond with messages that are within the set defined by sync global time low, sync global time high, sync modulo, and sync offset and which are not in the sync bloom filter. However, the destination peer is allowed to limit the number of messages it responds with. Sync bloom filter size is given in bits and corresponds to the length of the sync bloom filter. Responses should take into account the message priority. Otherwise ordering is by either ascending or descending global time.
+
+** version 1.1
+The tunnel bit was introduced.
+
+** possible future changes
+There is no feature that requires cryptography on this message. Hence it may be removed to reduce message size and processing cost.
+
+There is not enough version information in this message. More should be added to allow the source and destination peers to determine the optimal wire protocol to use. Having a three-way handshake would allow consensus between peers on what version to use.
+
+Sometimes the source peer may want to receive fewer sync responses (i.e. to ensure low CPU usage); adding a max bandwidth value would allow the source peer to limit the returned packets.
+
+The walker should be changed into a three-way handshake to secure the protocol against IP spoofing attacks.
+
+* <<dispersy-introduction-response>> (#245)
+The dispersy-introduction-response message is part of the semi-random walker and should be given as a response when a dispersy-introduction-request is received. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-introduction-response message is not disseminated through bloom synchronization.
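+
+Returning briefly to the sync fields of dispersy-introduction-request above: before consulting the bloom filter, the destination peer only considers messages inside the advertised range. A hedged sketch of that range test, assuming modulo is at least one (the layout table for dispersy-introduction-response follows below):
+
+#+BEGIN_SRC python
+def in_sync_range(global_time, low, high, modulo, offset):
+    # a message is a candidate for the response when its global time
+    # falls in [low, high] and matches the modulo/offset selection
+    return low <= global_time <= high and global_time % modulo == offset
+
+assert in_sync_range(42, 1, 100, 5, 2)
+assert not in_sync_range(43, 1, 100, 5, 2)
+#+END_SRC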
+
+|-------+-------+--------------------+-----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION           |
+|-------+-------+--------------------+-----------------------|
+| 1     | 00    | unsigned char      | dispersy version      |
+| 1     | 01    | unsigned char      | community version     |
+| 20    |       | char[]             | community identifier  |
+| 1     | f5    | unsigned char      | message identifier    |
+| 20    |       | char[]             | member identifier     |
+| 8     |       | unsigned long long | global time           |
+| 6     |       | char[]             | destination address   |
+| 6     |       | char[]             | source LAN address    |
+| 6     |       | char[]             | source WAN address    |
+| 6     |       | char[]             | introduce LAN address |
+| 6     |       | char[]             | introduce WAN address |
+| 1     |       | unsigned char      | option bits           |
+| 2     |       | unsigned short     | response identifier   |
+|       |       | char[]             | signature             |
+|-------+-------+--------------------+-----------------------|
+
+The option bits are defined as follows:
+- 0000.0100 introduced address is behind a tunnel
+- 0000.1000 source connection type
+- 1000.0000 source has a public address
+- 1100.0000 source is behind a symmetric NAT
+
+When no neighbor is introduced the introduce LAN address and introduce WAN address will both be set to null. Otherwise they correspond to a neighbor that existed at least recently. A [[dispersy-puncture-request]] should have been sent to this neighbor for NAT puncturing purposes.
+
+The response identifier is set to the value given in the dispersy-introduction-request.
+
+** version 1.1
+The tunnel bit was introduced.
+
+** version 1.3.1
+Previously this document incorrectly claimed that the 0000.0100 tunnel bit indicated that the source connection is behind a tunnel. Instead this bit actually indicates that the introduced address is behind a tunnel. This is the desired behaviour and corresponds to both the documentation in the source code and what had been implemented.
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-puncture-request>> (#250)
+The [[dispersy-puncture-request]] is part of the semi-random walker. A dispersy-puncture should be sent when this message is received, for NAT puncturing purposes. (TODO: reference a document describing the semi-random walker.)
+
+The [[dispersy-puncture-request]] message is not disseminated through bloom synchronization.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | fa    | unsigned char      | message identifier   |
+| 8     |       | unsigned long long | global time          |
+| 6     |       | char[]             | target LAN address   |
+| 6     |       | char[]             | target WAN address   |
+| 2     |       | unsigned short     | response identifier  |
+|-------+-------+--------------------+----------------------|
+
+The target LAN address and target WAN address correspond to the source LAN address and source WAN address of the dispersy-introduction-request message that caused this [[dispersy-puncture-request]] to be sent. These values may have been modified to the best of the sender's knowledge.
+
+The response identifier is set to the value given in the dispersy-introduction-request and dispersy-introduction-response.
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-puncture>> (#249)
+The dispersy-puncture is part of the semi-random walker. It is the result of, but not a response to, a [[dispersy-puncture-request]] message. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-puncture message is not disseminated through bloom synchronization. Instead it is sent to the target LAN address or target WAN address given by the corresponding [[dispersy-puncture-request]] message.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f9    | unsigned char      | message identifier   |
+| 8     |       | unsigned long long | global time          |
+| 6     |       | char[]             | source LAN address   |
+| 6     |       | char[]             | source WAN address   |
+| 2     |       | unsigned short     | response identifier  |
+|-------+-------+--------------------+----------------------|
+
+The response identifier is set to the value given in the dispersy-introduction-request, dispersy-introduction-response, and [[dispersy-puncture-request]].
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-missing-identity>> (#247)
+Requests the public keys associated to a member identifier. Sending this request should result in one or more dispersy-identity message responses.
+
+The dispersy-missing-identity message is not disseminated through bloom filter synchronization. Instead it is created whenever a message is received for which no public key is available to perform the signature verification.
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+| 1     | 00    | unsigned char      | dispersy version         |
+| 1     | 01    | unsigned char      | community version        |
+| 20    |       | char[]             | community identifier     |
+| 1     | f7    | unsigned char      | message identifier       |
+| 8     |       | unsigned long long | global time              |
+| 20    |       | char[]             | target member identifier |
+|-------+-------+--------------------+--------------------------|
+
+** possible future changes
+See possible future changes described at the dispersy-identity message.
+
+* <<dispersy-missing-sequence>> (#254)
+Requests messages in a sequence number range. Sending this request should result in one or more message responses.
+
+The dispersy-missing-sequence message is not disseminated through bloom filter synchronization. Instead it is created whenever a message is received with a sequence number that leaves a sequence number gap.
+
+|-------+-------+--------------------+-----------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+|-------+-------+--------------------+-----------------------------|
+| 1     | 00    | unsigned char      | dispersy version            |
+| 1     | 01    | unsigned char      | community version           |
+| 20    |       | char[]             | community identifier        |
+| 1     | fe    | unsigned char      | message identifier          |
+| 8     |       | unsigned long long | global time                 |
+| 20    |       | char[]             | target member identifier    |
+| 1     |       | unsigned char      | target message identifier   |
+| 4     |       | unsigned long      | target sequence number low  |
+| 4     |       | unsigned long      | target sequence number high |
+|-------+-------+--------------------+-----------------------------|
+
+The messages sent in response should include sequence numbers starting at target sequence number low up to, and including, target sequence number high.
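+
+A minimal sketch of the bookkeeping that would trigger this request, assuming a hypothetical per-member =last_known= counter; names are illustrative only:
+
+#+BEGIN_SRC python
+def sequence_gap(last_known, received):
+    # returns the inclusive (low, high) range to request, or None
+    if received > last_known + 1:
+        return last_known + 1, received - 1
+    return None
+
+assert sequence_gap(3, 7) == (4, 6)
+assert sequence_gap(3, 4) is None
+#+END_SRC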
+
+The destination peer is allowed to limit the number of messages it responds with; however, the responses should always be ordered by sequence number.
+
+** possible future changes
+Sometimes the source peer may want to receive fewer responses (i.e. to ensure low CPU usage); adding a max bandwidth value would allow the source peer to limit the returned packets.
+
+* <<dispersy-missing-message>> (#239)
+Requests one or more specific messages identified by a community identifier, member identifier, and one or more global times. This request should result in one or more message responses.
+
+The dispersy-missing-message message is not disseminated through bloom filter synchronization. Instead it is created whenever one or more messages are missing.
+
+|---+-------+-------+--------------------+--------------------------|
+| + | BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|---+-------+-------+--------------------+--------------------------|
+|   | 1     | 00    | unsigned char      | dispersy version         |
+|   | 1     | 01    | unsigned char      | community version        |
+|   | 20    |       | char[]             | community identifier     |
+|   | 1     | ef    | unsigned char      | message identifier       |
+|   | 8     |       | unsigned long long | global time              |
+|   | 2     |       | unsigned short     | target public key length |
+|   |       |       | char[]             | target public key        |
+| + | 8     |       | unsigned long long | target global time       |
+|---+-------+-------+--------------------+--------------------------|
+
+The target global time in the dispersy-missing-message message payload is a repeating element. One or more global time values may be given. Each uniquely identifies a message.
+
+* <<dispersy-missing-last-message>> (#235)
+Requests the most recent messages of a specific type created by a given member. This request should result in one or more message responses.
+
+The dispersy-missing-last-message message is not disseminated through bloom filter synchronization. Instead it is created whenever one or more messages are missing.
+
+|-------+-------+--------------------+---------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION               |
+|-------+-------+--------------------+---------------------------|
+| 1     | 00    | unsigned char      | dispersy version          |
+| 1     | 01    | unsigned char      | community version         |
+| 20    |       | char[]             | community identifier      |
+| 1     | eb    | unsigned char      | message identifier        |
+| 8     |       | unsigned long long | global time               |
+| 2     |       | unsigned short     | target public key length  |
+|       |       | char[]             | target public key         |
+| 1     |       | unsigned char      | target message identifier |
+| 1     |       | unsigned char      | max count                 |
+|-------+-------+--------------------+---------------------------|
+
+* <<dispersy-missing-proof>> (#253)
+Requests one or more parents of a message in the permission tree. This request should result in one or more dispersy-authorize and/or dispersy-revoke messages. (TODO: reference a document describing the permission system.)
+
+The dispersy-missing-proof message is not disseminated through bloom filter synchronization. Instead it is created whenever one or more messages are received that are invalid according to our current permission tree.
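+
+Looking back at dispersy-missing-message above: the repeating target global time elements are simply concatenated 8 byte big endian values. A minimal encoding sketch, with an illustrative function name (the dispersy-missing-proof layout follows below):
+
+#+BEGIN_SRC python
+import struct
+
+def pack_target_global_times(global_times):
+    # each repeated element is one 8 byte big endian global time
+    return b"".join(struct.pack("!Q", gt) for gt in global_times)
+#+END_SRC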
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+| 1     | 00    | unsigned char      | dispersy version         |
+| 1     | 01    | unsigned char      | community version        |
+| 20    |       | char[]             | community identifier     |
+| 1     | fd    | unsigned char      | message identifier       |
+| 8     |       | unsigned long long | global time              |
+| 8     |       | unsigned long long | target global time       |
+| 2     |       | unsigned short     | target public key length |
+|       |       | char[]             | target public key        |
+|-------+-------+--------------------+--------------------------|
diff -Nru tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.x tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.x
--- tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.x 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_1.x 2013-08-07 13:06:57.000000000 +0000
@@ -0,0 +1,705 @@
+#+TITLE: Dispersy wire protocol\\version 1.3
+#+OPTIONS: toc:nil ^:nil author:nil
+#+LATEX_HEADER: \usepackage{enumitem}
+#+LATEX_HEADER: \setlist{nolistsep}
+
+# This document is written using orgmode.
+# Allowing easy text editing and export to various formats.
+
+* Introduction
+This document describes the Dispersy on-the-wire message protocol and its intended behavior.
+
+All values are big endian encoded.
+
+** 30/05/2012 version 1.0
+Initial public release.
+
+** ??/??/2012 version 1.1
+- added tunnel bit to dispersy-introduction-request
+- added tunnel bit to dispersy-introduction-response
+
+** 12/10/2012 version 1.3
+- added dispersy-signature-request message
+- added dispersy-signature-response message
+
+* <<dispersy-identity>> (#248)
+Contains the public key for a single member. This message is the response to a dispersy-missing-identity request.
+
+The dispersy-identity message is not disseminated through bloom filter synchronization. Furthermore, only the dispersy-identity message with the highest global time per member is used.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f8    | unsigned char      | message identifier   |
+| 2     |       | unsigned short     | public key length    |
+|       |       | char[]             | public key           |
+| 8     |       | unsigned long long | global time          |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+** possible future changes
+Historically the most recent IP address of a member was stored in the payload of its dispersy-identity message. This required the message to be stored and have a signature. Since this is no longer the case, the message can be simplified by replacing dispersy-identity with a non-signed response to a dispersy-missing-identity message.
+
+* <<dispersy-authorize>> (#234)
+Grants one or more permissions. This message can be the response to a dispersy-missing-proof request. (TODO: reference a document describing the permission system.)
+
+The dispersy-authorize message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-authorize message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-authorize messages out of order. A dispersy-authorize message can not be undone.
+
+|----+-------+-------+--------------------+------------------------|
+| +  | BYTES | VALUE | C-TYPE             | DESCRIPTION            |
+|----+-------+-------+--------------------+------------------------|
+|    | 1     | 00    | unsigned char      | dispersy version       |
+|    | 1     | 01    | unsigned char      | community version      |
+|    | 20    |       | char[]             | community identifier   |
+|    | 1     | f3    | unsigned char      | message identifier     |
+|    | 20    |       | char[]             | member identifier      |
+|    | 8     |       | unsigned long long | global time            |
+|    | 4     |       | unsigned long      | sequence number        |
+| +  | 2     |       | unsigned short     | public key length      |
+| +  |       |       | char[]             | public key             |
+| +  | 1     |       | unsigned char      | permission pair length |
+| ++ | 1     |       | unsigned char      | message identifier     |
+| ++ | 1     |       | unsigned char      | permission bits        |
+|    |       |       | char[]             | signature              |
+|----+-------+-------+--------------------+------------------------|
+
+The dispersy-authorize message payload contains repeating elements. One or more public key length, public key, permission pair length pairs may be given. Each of these pairs has one or more message identifier, permission bits pairs.
+
+The permission bits are defined as follows:
+- 0000.0001 grants the 'permit' permission
+- 0000.0010 grants the 'authorize' permission
+- 0000.0100 grants the 'revoke' permission
+- 0000.1000 grants the 'undo' permission
+
+** possible future changes
+Currently the permissions are granted on global times after the dispersy-authorize message was created. To improve flexibility a global time value can be included in this message that describes another global time from where the permission applies.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-revoke>> (#242)
+Revokes one or more permissions. This message can be the response to a dispersy-missing-proof request. (TODO: reference a document describing the permission system.)
+
+The dispersy-revoke message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-revoke message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-revoke messages out of order. A dispersy-revoke message can not be undone.
+
+|----+-------+-------+--------------------+------------------------|
+| +  | BYTES | VALUE | C-TYPE             | DESCRIPTION            |
+|----+-------+-------+--------------------+------------------------|
+|    | 1     | 00    | unsigned char      | dispersy version       |
+|    | 1     | 01    | unsigned char      | community version      |
+|    | 20    |       | char[]             | community identifier   |
+|    | 1     | f2    | unsigned char      | message identifier     |
+|    | 20    |       | char[]             | member identifier      |
+|    | 8     |       | unsigned long long | global time            |
+|    | 4     |       | unsigned long      | sequence number        |
+| +  | 2     |       | unsigned short     | public key length      |
+| +  |       |       | char[]             | public key             |
+| +  | 1     |       | unsigned char      | permission pair length |
+| ++ | 1     |       | unsigned char      | message identifier     |
+| ++ | 1     |       | unsigned char      | permission bits        |
+|    |       |       | char[]             | signature              |
+|----+-------+-------+--------------------+------------------------|
+
+The dispersy-revoke message payload contains repeating elements. One or more public key length, public key, permission pair length pairs may be given. Each of these pairs has one or more message identifier, permission bits pairs.
+
+The permission bits are defined as follows:
+- 0000.0001 revokes the 'permit' permission
+- 0000.0010 revokes the 'authorize' permission
+- 0000.0100 revokes the 'revoke' permission
+- 0000.1000 revokes the 'undo' permission
+
+** possible future changes
+Currently the permissions are revoked on global times after the dispersy-revoke message was created. To improve flexibility a global time value can be included in this message that describes another global time from where the revocation applies.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-undo-own>> (#238)
+Marks an older message with an undone flag. This allows a member to undo her own previously created message. Undo messages can only be created for messages that have an undo defined for them.
+
+The dispersy-undo-own message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-undo-own message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-undo-own messages out of order. A dispersy-undo-own message can not be undone.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | ee    | unsigned char      | message identifier   |
+| 20    |       | char[]             | member identifier    |
+| 8     |       | unsigned long long | global time          |
+| 4     |       | unsigned long      | sequence number      |
+| 8     |       | unsigned long long | target global time   |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+The dispersy-undo-own message contains a target global time which, together with the community identifier and the member identifier, uniquely identifies the message that is being undone.
+
+To impose a limit on the number of dispersy-undo-own messages that can be created, a dispersy-undo-own message may only be accepted when the message that it points to is available and no dispersy-undo-own has yet been created for it.
+
+** possible future changes
+The synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-undo-other>> (#237)
+Marks an older message with an undone flag. This allows a member to undo a message made by someone else. Undo messages can only be created for messages that have an undo defined for them.
+
+The dispersy-undo-other message is disseminated through bloom filter synchronization in ascending global time order with priority 128. Each dispersy-undo-other message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-undo-other messages out of order. A dispersy-undo-other message can not be undone.
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+| 1     | 00    | unsigned char      | dispersy version         |
+| 1     | 01    | unsigned char      | community version        |
+| 20    |       | char[]             | community identifier     |
+| 1     | ed    | unsigned char      | message identifier       |
+| 20    |       | char[]             | member identifier        |
+| 8     |       | unsigned long long | global time              |
+| 4     |       | unsigned long      | sequence number          |
+| 2     |       | unsigned short     | target public key length |
+|       |       | char[]             | target public key        |
+| 8     |       | unsigned long long | target global time       |
+|       |       | char[]             | signature                |
+|-------+-------+--------------------+--------------------------|
+
+The dispersy-undo-other message contains a target public key and target global time which, together with the community identifier, uniquely identifies the message that is being undone.
+
+A dispersy-undo-other message may only be accepted when the message that it points to is available. In contrast to a dispersy-undo-own message, it is allowed to have multiple dispersy-undo-other messages targeting the same message. To impose a limit on the number of dispersy-undo-other messages that can be created, a member must have an undo permission for the target message.
+
+** possible future changes
+The synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-dynamic-settings>> (#236)
+Changes one or more message policies. When a message has two or more policies of a specific type defined, e.g. both PublicResolution and LinearResolution, the dispersy-dynamic-settings message switches between them.
+
+The dispersy-dynamic-settings message is disseminated through bloom filter synchronization in descending global time order with priority 191. Each dispersy-dynamic-settings message has a sequence number that is unique per member, ensuring that members are unable to create dispersy-dynamic-settings messages out of order. A dispersy-dynamic-settings message can not be undone.
+
+|---+-------+-------+--------------------+---------------------------|
+| + | BYTES | VALUE | C-TYPE             | DESCRIPTION               |
+|---+-------+-------+--------------------+---------------------------|
+|   | 1     | 00    | unsigned char      | dispersy version          |
+|   | 1     | 01    | unsigned char      | community version         |
+|   | 20    |       | char[]             | community identifier      |
+|   | 1     | ec    | unsigned char      | message identifier        |
+|   | 20    |       | char[]             | member identifier         |
+|   | 8     |       | unsigned long long | global time               |
+|   | 4     |       | unsigned long      | sequence number           |
+| + | 1     |       | unsigned char      | target message identifier |
+| + | 1     | 72    | char               | target policy type        |
+| + | 1     |       | unsigned char      | target policy index       |
+|   |       |       | char[]             | signature                 |
+|---+-------+-------+--------------------+---------------------------|
+
+The target policy type is currently always HEX 72. This equates to the character 'r', i.e. resolution policy, which is currently the only policy type that supports dynamic settings. The target policy index indicates the index of the new policy in the list of predefined policies. The policy change is applied from the next global time after the global time given by the dispersy-dynamic-settings message.
+
+** possible future changes
+Currently it is only possible to switch between PublicResolution and LinearResolution policies. Switching between other policies should also be implemented.
+
+Furthermore, the synchronization ordering and priority may be removed. This feature adds complexity while not providing the intended result once the overlay has enough messages to require multiple bloom filter ranges.
+
+* <<dispersy-destroy-community>> (#244)
+Forces an overlay to go offline. An overlay can be either soft killed or hard killed.
+
+A soft killed overlay is frozen. All the currently available data will be kept; however, messages with a global time that is higher than the global time of the dispersy-destroy-community message will be refused. Responses to dispersy-introduction-request messages will be sent as normal. Currently soft killing an overlay is not supported.
+
+A hard killed overlay is destroyed. All messages will be removed, except the dispersy-destroy-community message and the authorize chain that is required to verify its validity.
+
+The dispersy-destroy-community message is disseminated through bloom filter synchronization in ascending global time order with priority 192. A dispersy-destroy-community message can not be undone. Hence it is very important to ensure that only trusted peers have the permission to create this message.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f4    | unsigned char      | message identifier   |
+| 20    |       | char[]             | member identifier    |
+| 8     |       | unsigned long long | global time          |
+| 1     |       | char               | degree (soft/hard)   |
+|       |       | char[]             | signature            |
+|-------+-------+--------------------+----------------------|
+
+The kill degree can be either soft (HEX 73, i.e. character 's') or hard (HEX 68, i.e. character 'h').
+
+** possible future changes
+Implement the soft killed strategy.
+
+* <<dispersy-signature-request>> (#252)
+Requests a signature for an included message. The included message may be modified before adding the signature. May respond with a dispersy-signature-response message.
+
+The dispersy-signature-request message is not disseminated through bloom filter synchronization. Instead it is created whenever a double signed message is required.
+
+|-------+-------+----------------+----------------------|
+| BYTES | VALUE | C-TYPE         | DESCRIPTION          |
+|-------+-------+----------------+----------------------|
+| 1     | 00    | unsigned char  | dispersy version     |
+| 1     | 01    | unsigned char  | community version    |
+| 20    |       | char[]         | community identifier |
+| 1     | fc    | unsigned char  | message identifier   |
+| 2     |       | unsigned short | request identifier   |
+|       |       | char[]         | message              |
+|-------+-------+----------------+----------------------|
+
+The request identifier must be part of the dispersy-signature-response. The message must be a valid dispersy message except that both signatures must be set to null bytes.
+
+** version 1.2
+The dispersy-signature-request message was added.
+
+* <<dispersy-signature-response>> (#251)
+Response to a dispersy-signature-request message. The included message may have been modified from the message in the request.
+
+The dispersy-signature-response message is not disseminated through bloom filter synchronization. Instead it is created whenever a double signed message is required.
+
+|-------+-------+----------------+----------------------|
+| BYTES | VALUE | C-TYPE         | DESCRIPTION          |
+|-------+-------+----------------+----------------------|
+| 1     | 00    | unsigned char  | dispersy version     |
+| 1     | 01    | unsigned char  | community version    |
+| 20    |       | char[]         | community identifier |
+| 1     | fb    | unsigned char  | message identifier   |
+| 2     |       | unsigned short | response identifier  |
+|       |       | char[]         | message              |
+|-------+-------+----------------+----------------------|
+
+The response identifier must be equal to the request identifier of the dispersy-signature-request message. The message must be a valid dispersy message except that only the sender's signature is set while the receiver's signature must be set to null bytes.
+
+** version 1.2
+The dispersy-signature-response message was added.
+
+* <<dispersy-introduction-request>> (#246)
+The dispersy-introduction-request message is part of the semi-random walker. It asks the destination peer to introduce the source peer to a semi-random neighbor. Sending this request should result in a dispersy-introduction-response to the sender and a [[dispersy-puncture-request]] to the semi-random neighbor. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-introduction-request message is not disseminated through bloom filter synchronization. Instead it is periodically created to maintain a semi-random overlay.
+
+|---+-------+-------+--------------------+-----------------------------|
+| + | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+|---+-------+-------+--------------------+-----------------------------|
+|   | 1     | 00    | unsigned char      | dispersy version            |
+|   | 1     | 01    | unsigned char      | community version           |
+|   | 20    |       | char[]             | community identifier        |
+|   | 1     | f6    | unsigned char      | message identifier          |
+|   | 20    |       | char[]             | member identifier           |
+|   | 8     |       | unsigned long long | global time                 |
+|   | 6     |       | char[]             | destination address         |
+|   | 6     |       | char[]             | source LAN address          |
+|   | 6     |       | char[]             | source WAN address          |
+|   | 1     |       | unsigned char      | option bits                 |
+|   | 2     |       | unsigned short     | request identifier          |
+| + | 8     |       | unsigned long long | sync global time low        |
+| + | 8     |       | unsigned long long | sync global time high       |
+| + | 2     |       | unsigned short     | sync modulo                 |
+| + | 2     |       | unsigned short     | sync offset                 |
+| + | 1     |       | unsigned char      | sync bloom filter functions |
+| + | 2     |       | unsigned short     | sync bloom filter size      |
+| + | 1     |       | unsigned char      | sync bloom filter prefix    |
+| + |       |       | char[]             | sync bloom filter           |
+|   |       |       | char[]             | signature                   |
+|---+-------+-------+--------------------+-----------------------------|
+
+The option bits are defined as follows:
+- 0000.0001 request an introduction
+- 0000.0010 request contains optional sync bloom filter
+- 0000.0100 source is behind a tunnel
+- 0000.1000 source connection type
+- 1000.0000 source has a public address
+- 1100.0000 source is behind a symmetric NAT
+
+The dispersy-introduction-request message contains optional elements. When the 'request contains optional sync bloom filter' bit is set, all of the sync fields must be given. In this case the destination peer should respond with messages that are within the set defined by sync global time low, sync global time high, sync modulo, and sync offset and which are not in the sync bloom filter. However, the destination peer is allowed to limit the number of messages it responds with. Sync bloom filter size is given in bits and corresponds to the length of the sync bloom filter. Responses should take into account the message priority.
+Otherwise ordering is by either ascending or descending global time.
+
+** version 1.1
+The tunnel bit was introduced.
+
+** possible future changes
+There is no feature that requires cryptography on this message. Hence it may be removed to reduce message size and processing cost.
+
+There is not enough version information in this message. More should be added to allow the source and destination peers to determine the optimal wire protocol to use. Having a three-way handshake would allow consensus between peers on what version to use.
+
+Sometimes the source peer may want to receive fewer sync responses (i.e. to ensure low CPU usage); adding a max bandwidth value would allow the source peer to limit the returned packets.
+
+The walker should be changed into a three-way handshake to secure the protocol against IP spoofing attacks.
+
+* <<dispersy-introduction-response>> (#245)
+The dispersy-introduction-response message is part of the semi-random walker and should be given as a response when a dispersy-introduction-request is received. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-introduction-response message is not disseminated through bloom synchronization.
+
+|-------+-------+--------------------+-----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION           |
+|-------+-------+--------------------+-----------------------|
+| 1     | 00    | unsigned char      | dispersy version      |
+| 1     | 01    | unsigned char      | community version     |
+| 20    |       | char[]             | community identifier  |
+| 1     | f5    | unsigned char      | message identifier    |
+| 20    |       | char[]             | member identifier     |
+| 8     |       | unsigned long long | global time           |
+| 6     |       | char[]             | destination address   |
+| 6     |       | char[]             | source LAN address    |
+| 6     |       | char[]             | source WAN address    |
+| 6     |       | char[]             | introduce LAN address |
+| 6     |       | char[]             | introduce WAN address |
+| 1     |       | unsigned char      | option bits           |
+| 2     |       | unsigned short     | response identifier   |
+|       |       | char[]             | signature             |
+|-------+-------+--------------------+-----------------------|
+
+The option bits are defined as follows:
+- 0000.0100 source is behind a tunnel
+- 0000.1000 source connection type
+- 1000.0000 source has a public address
+- 1100.0000 source is behind a symmetric NAT
+
+When no neighbor is introduced the introduce LAN address and introduce WAN address will both be set to null. Otherwise they correspond to a neighbor that existed at least recently. A [[dispersy-puncture-request]] should have been sent to this neighbor for NAT puncturing purposes.
+
+The response identifier is set to the value given in the dispersy-introduction-request.
+
+** version 1.1
+The tunnel bit was introduced.
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-puncture-request>> (#250)
+The [[dispersy-puncture-request]] is part of the semi-random walker. A dispersy-puncture should be sent when this message is received, for NAT puncturing purposes. (TODO: reference a document describing the semi-random walker.)
+
+The [[dispersy-puncture-request]] message is not disseminated through bloom synchronization.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | fa    | unsigned char      | message identifier   |
+| 8     |       | unsigned long long | global time          |
+| 6     |       | char[]             | target LAN address   |
+| 6     |       | char[]             | target WAN address   |
+| 2     |       | unsigned short     | response identifier  |
+|-------+-------+--------------------+----------------------|
+
+The target LAN address and target WAN address correspond to the source LAN address and source WAN address of the dispersy-introduction-request message that caused this [[dispersy-puncture-request]] to be sent. These values may have been modified to the best of the sender's knowledge.
+
+The response identifier is set to the value given in the dispersy-introduction-request and dispersy-introduction-response.
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-puncture>> (#249)
+The dispersy-puncture is part of the semi-random walker. It is the result of, but not a response to, a [[dispersy-puncture-request]] message. (TODO: reference a document describing the semi-random walker.)
+
+The dispersy-puncture message is not disseminated through bloom synchronization. Instead it is sent to the target LAN address or target WAN address given by the corresponding [[dispersy-puncture-request]] message.
+
+|-------+-------+--------------------+----------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+|-------+-------+--------------------+----------------------|
+| 1     | 00    | unsigned char      | dispersy version     |
+| 1     | 01    | unsigned char      | community version    |
+| 20    |       | char[]             | community identifier |
+| 1     | f9    | unsigned char      | message identifier   |
+| 8     |       | unsigned long long | global time          |
+| 6     |       | char[]             | source LAN address   |
+| 6     |       | char[]             | source WAN address   |
+| 2     |       | unsigned short     | response identifier  |
+|-------+-------+--------------------+----------------------|
+
+The response identifier is set to the value given in the dispersy-introduction-request, dispersy-introduction-response, and [[dispersy-puncture-request]].
+
+** possible future changes
+See possible future changes described at the dispersy-introduction-request message.
+
+* <<dispersy-missing-identity>> (#247)
+Requests the public keys associated to a member identifier. Sending this request should result in one or more dispersy-identity message responses.
+
+The dispersy-missing-identity message is not disseminated through bloom filter synchronization. Instead it is created whenever a message is received for which no public key is available to perform the signature verification.
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+| 1     | 00    | unsigned char      | dispersy version         |
+| 1     | 01    | unsigned char      | community version        |
+| 20    |       | char[]             | community identifier     |
+| 1     | f7    | unsigned char      | message identifier       |
+| 8     |       | unsigned long long | global time              |
+| 20    |       | char[]             | target member identifier |
+|-------+-------+--------------------+--------------------------|
+
+** possible future changes
+See possible future changes described at the dispersy-identity message.
+
+* <<dispersy-missing-sequence>> (#254)
+
+** possible future changes
+See possible future changes described at the dispersy-identity
+message.
+
+* <<>> (#254)
+Requests messages in a sequence number range. Sending this request
+should result in one or more message responses.
+
+The dispersy-missing-sequence message is not disseminated through
+bloom filter synchronization. Instead it is created whenever a
+message is received with a sequence number that leaves a sequence
+number gap.
+
+|-------+-------+--------------------+-----------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+|-------+-------+--------------------+-----------------------------|
+|     1 |    00 | unsigned char      | dispersy version            |
+|     1 |    01 | unsigned char      | community version           |
+|    20 |       | char[]             | community identifier        |
+|     1 |    fe | unsigned char      | message identifier          |
+|     8 |       | unsigned long long | global time                 |
+|    20 |       | char[]             | target member identifier    |
+|     1 |       | unsigned char      | target message identifier   |
+|     4 |       | unsigned long      | target sequence number low  |
+|     4 |       | unsigned long      | target sequence number high |
+|-------+-------+--------------------+-----------------------------|
+
+The messages sent in response should include sequence numbers starting
+at target sequence number low up to, and including, target sequence
+number high.
+
+The destination peer is allowed to limit the number of messages it
+responds with; however, the responses should always be ordered by
+sequence number.
+
+** possible future changes
+Sometimes the source peer may want to receive fewer responses (i.e. to
+ensure low CPU usage); adding a max bandwidth value would allow the
+number of returned packets to be limited.
+
+* <<>> (#239)
+Requests one or more specific messages identified by a community
+identifier, member identifier, and one or more global times. This
+request should result in one or more message responses.
+
+The dispersy-missing-message message is not disseminated through bloom
+filter synchronization. Instead it is created whenever one or more
+messages are missing.
+
+|---+-------+-------+--------------------+--------------------------|
+|   | BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|---+-------+-------+--------------------+--------------------------|
+|   |     1 |    00 | unsigned char      | dispersy version         |
+|   |     1 |    01 | unsigned char      | community version        |
+|   |    20 |       | char[]             | community identifier     |
+|   |     1 |    ef | unsigned char      | message identifier       |
+|   |     8 |       | unsigned long long | global time              |
+|   |     2 |       | unsigned short     | target public key length |
+|   |       |       | char[]             | target public key        |
+| + |     8 |       | unsigned long long | target global time       |
+|---+-------+-------+--------------------+--------------------------|
+
+The target global time in the dispersy-missing-message message payload
+is a repeating element (marked with a + in the table). One or more
+global time values may be given. Each uniquely identifies a message.
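+
+Because the target global time is a repeating element, the remainder
+of the payload after the target public key consists of one or more
+8-byte values. A minimal Python sketch of decoding that tail could
+look as follows; network byte order and the function name are
+assumptions made for illustration:
+
+#+BEGIN_SRC python
+import struct
+
+def unpack_missing_message_tail(payload, offset):
+    # 'offset' points at the 2-byte target public key length field.
+    (key_length,) = struct.unpack_from(">H", payload, offset)
+    offset += 2
+    public_key = payload[offset:offset + key_length]
+    offset += key_length
+    # Every remaining 8-byte group is one target global time.
+    global_times = []
+    while offset + 8 <= len(payload):
+        global_times.append(struct.unpack_from(">Q", payload, offset)[0])
+        offset += 8
+    return public_key, global_times
+#+END_SRC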
+
+* <<>> (#235)
+Requests one or more of the most recent messages of a specific type
+created by a member, identified by a community identifier, member
+identifier, and message identifier. This request should result in one
+or more message responses.
+
+The dispersy-missing-last-message message is not disseminated through
+bloom filter synchronization. Instead it is created whenever one or
+more messages are missing.
+
+|-------+-------+--------------------+---------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION               |
+|-------+-------+--------------------+---------------------------|
+|     1 |    00 | unsigned char      | dispersy version          |
+|     1 |    01 | unsigned char      | community version         |
+|    20 |       | char[]             | community identifier      |
+|     1 |    eb | unsigned char      | message identifier        |
+|     8 |       | unsigned long long | global time               |
+|     2 |       | unsigned short     | target public key length  |
+|       |       | char[]             | target public key         |
+|     1 |       | unsigned char      | target message identifier |
+|     1 |       | unsigned char      | max count                 |
+|-------+-------+--------------------+---------------------------|
+
+* <<>> (#253)
+Requests one or more parents of a message in the permission tree.
+This request should result in one or more dispersy-authorize and/or
+dispersy-revoke messages. (TODO: reference a document describing the
+permission system.)
+
+The dispersy-missing-proof message is not disseminated through bloom
+filter synchronization. Instead it is created whenever one or more
+messages are received that are invalid according to our current
+permission tree.
+
+|-------+-------+--------------------+--------------------------|
+| BYTES | VALUE | C-TYPE             | DESCRIPTION              |
+|-------+-------+--------------------+--------------------------|
+|     1 |    00 | unsigned char      | dispersy version         |
+|     1 |    01 | unsigned char      | community version        |
+|    20 |       | char[]             | community identifier     |
+|     1 |    fd | unsigned char      | message identifier       |
+|     8 |       | unsigned long long | global time              |
+|     8 |       | unsigned long long | target global time       |
+|     2 |       | unsigned short     | target public key length |
+|       |       | char[]             | target public key        |
+|-------+-------+--------------------+--------------------------|
diff -Nru tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.org tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.org
--- tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.org 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.org 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,1366 @@
+#+TITLE: Dispersy wire protocol\\version 2.0
+#+OPTIONS: toc:nil ^:{} author:nil
+#+LATEX_HEADER: \usepackage{enumitem}
+#+LATEX_HEADER: \setlist{nolistsep}
+#+LaTeX_HEADER: \usepackage{framed}
+#+LaTeX_HEADER: \usepackage{xcolor}
+#+LaTeX_HEADER: \definecolor{shadecolor}{gray}{.9}
+
+# This document uses orgmode (http://orgmode.org) formatting.
+
+#+LATEX: \begin{shaded}
+* Choices and discussions
+** Using sessions
+Dispersy 1.x did not have a session. This meant that every message
+required basic information such as version and community identifier.
+By negotiating a session identifier during the 'walking' process we no
+longer need to include these values in every message.
+
+Available options are:
+- Sessions: :: All walker messages will include version and community
+  identification and result in a session identifier (per community
+  per peer pair). All non-walker temporary messages use this session
+  identifier.
+- Sessionless: :: All temporary messages will include version and
+  community identification. Response version and community are
+  chosen independently from previous messages. Obviously no session
+  identifier is negotiated by the walker.
+- Hybrid: :: Protocol Buffers support optional fields in messages.
+  This allows us to optionally negotiate a session identifier (use
+  sessions). If no session is available all non-walker temporary
+  messages must include optional version and community identification
+  (sessionless).
+
+*** 09/01/2013 Boudewijn
+I prefer to use *sessions*. There is a lot of session specific
+information available (version, community identity, connection-type,
+tunnel, encryption, compression). All of this information can be
+negotiated once and will reduce overhead in the non-walker temporary
+messages.
+
+Sessions also make sense from a security perspective, where the
+session identifier represents a secure number that only the two
+communicating parties know. However, properly doing this requires
+some crypto at the expense of CPU cycles. While a crypto handshake
+has a very low priority, it can be included easily when sessions are
+used.
+
+I am against using *hybrid*. While this is the most flexible option
+it will also require the most code to create and maintain. I consider
+this bloatware.
+
+*** 11/01/2013 Elric
+I'm OK with *sessions*, it doesn't look as if it would be too hard to
+maintain and it will allow us to cut bandwidth usage.
+
+*** 10/01/2013 Decision
+Currently we use *sessions*. However, this is subject to change until
+more opinions are received.
+
+** Consensus on a real time clock
+We can add the local real time clock to every temporary message that
+also contains the local global time. This can allow peers to estimate
+the average real time in the overlay.
+
+Having this estimate also allows us to assign real times to received
+messages without relying on the local time that potentially malicious
+peers provide.
+
+This can be useful for the effort overlay (i.e. for consensus on the
+current cycle) and channel overlay (i.e. for consensus on the creation
+time of a post, torrent, etc.).
+
+Available options are:
+- Rely on people: :: We can assume that the local time of all
+  computers is set correctly, either by the user or by an OS provided
+  mechanism.
+- Use time server: :: Synchronizing time is a well known problem. A
+  well known solution is for each peer to contact one of many
+  available time servers periodically to obtain the current time.
+- Use Dispersy: :: Use a consensus mechanism in Dispersy by adding
+  local real time to messages containing global times.
+
+*** 11/01/2013 Boudewijn
+Relying on people to keep their local time up to date is asking for
+problems. Using a time server is the simplest solution, but we would
+need to perform this check periodically or at startup. Using Dispersy
+is distributed and hence more complicated.
+
+Using a time server feels like cheating. Let me explain by comparing
+it with the global time. Currently each peer collects the global time
+values from other peers around it. This results in every peer having
+more or less the same global time. We could just as well use the
+bootstrap servers to aggregate global times from peers. Each peer
+could then simply ask the bootstrap servers for the current global
+time periodically. Yet, we choose to let every peer find the average
+global time in a distributed manner. I would argue that we should
+also let peers compute the average real time in a distributed manner
+for the same reasons. Hence, I prefer to *use Dispersy*.
+
+I do believe that it will not be possible to prove that a message was
+created at a certain time. However, I suspect that we -will- be able
+to prove that a message was created in a certain time range. Proving
+this may, in itself, be an interesting paper topic.
+
+*** 11/01/2013 Elric
+I agree on using consensus to decide on the common real time. Of
+course, unrealistic values should be removed from the sample before
+averaging, and the result should be checked against the system's
+local time for validation.
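+
+A consensus estimate along the lines Elric describes could, for
+example, use a trimmed mean. The sketch below is purely illustrative;
+the trim fraction and the function name are assumptions, not part of
+any decision.
+
+#+BEGIN_SRC python
+def estimate_overlay_time(samples, trim_fraction=0.1):
+    # 'samples' are real time values reported by neighbors.  Dropping
+    # the extremes on both ends limits the influence of misconfigured
+    # or malicious peers on the average.
+    samples = sorted(samples)
+    drop = int(len(samples) * trim_fraction)
+    trimmed = samples[drop:len(samples) - drop] or samples
+    return float(sum(trimmed)) / len(trimmed)
+#+END_SRC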
Of +course taking off the unrealistic values from the sample before +averaging it and finding the proper way to check the result with the +system's local time to validate the result. + +*** 05/02/2013 Johan +Because of financial reasons it is not possible to spend time on this. +The least effort solution should be used. I choose *use time server*. + +*** 10/01/2013 Decision +We should use a time server. It is the responsibility of the +community to contact one. Hence, from Dispersies perspective we +will *rely on the user*, or the community programmer. + +** Announcing the local global time +Dispersy uses a [[http://dl.acm.org/citation.cfm?id=359563][lamport clock]] to ensure that we retain the partial +ordering of all persistent messages in the overlay, i.e. our global +time. + +Available options are: +- Minimal announce: :: We announce our local global time only with the + walker messages. +- Maximal announce: :: We announce our local global time in every + temporary message. +- Optional announce: :: We can add an optional global time field in + every temporary message. + +*** 10/01/2013 Boudewijn +The walker messages, most likely, trigger other temporary +missing-something messages. As such, including our local global time +in those missing-something messages would not improve the performance +of the lamport clock. Hence, I prfer to use *minimal announce*. + +*** 10/01/2013 Decision +Currently we use *minimal announce*. To be precise, only the +dispersy-introduction-request and dispersy-introduction-response +message are used to announce local global time to the neighborhood. +However, this is subject to change until more opinions are received. + +** Encoding signatures into a message +The cryptographic signatures must be transferred as part of a message +in some way. + +Available options are: +- Concat: :: We add the signature directly behind the serialized + message. This requires us to also add a message length + field because otherwise we can not unserialize it again + (protocol buffers will assume the signature is an + optional field in the message). +- Optional signature field: :: We add an optional signature field into + the Message container. We must serialize the submessage, create + the signature from that, and serialize the container message. +- SignedMessage: :: We distinct between Message and SignedMessage + containers. We would still need to serialize both + the submessage and container message. + +*** 14/01/2013 Boudewijn +Adding an *optional signature field* seems the simplest by far. It +also results in only one container message instead of two. One +disadvantage that I forsee is that we will slowly start to extend the +Message container with optional fields, and that is definately not my +intention. + +However, there is one issue that remains. The Message container (not +the submessage) contains the message type, hence the signature would +-not- include the message type. Therefore, a small change must be the +inclusion of another container message that has two fields: binary +message and binary signature. We explicity use the binary +representation of the message because another machine may serialize +the message differently (OS, protocol buffer version, etc) and we can +not afford this to invalidate the signature. + +The concat option is also easy to do, however, I dislike spending a +few bytes for the message length and concatting the length, message, +and signature together. Messing with the bytes should all be done by +protocol buffers. 
+
+*** 14/01/2013 Decision
+Currently we use the *optional signature field*, modified with the
+additional message wrapper; see dispersy-message. However, this is
+subject to change until more opinions are received.
+
+** Synchronization bloom filters
+In Dispersy 1 we create the bloomfilter by hashing {prefix,
+binary-packet}. There are two choices to make:
+
+First choice. Using either prefix or postfix:
+- Prefix: :: Allows you to cache the hashed prefix. Requires: one
+  cache and N+1 hashes to build one N sized bloom filter.
+- Postfix (partial cache): :: Allows you to cache each packet. Every
+  postfix must be hashed. Requires: M hashes to build M caches once.
+  And N hashes to build one N sized bloom filter.
+- Postfix (full cache): :: Allows you to cache each packet + postfix
+  combination. Requires: M hashes to build M caches once. Cache
+  storage is potentially cheaper than the partial cache.
+
+Second choice. How do we represent the message:
+- Binary packet: :: The simplest method is to hash the binary packet.
+  The packet is unique, even if the data encoded in the packet
+  results in duplicate data.
+- Identifying information only: :: The most minimalistic method is to
+  hash only the member identifier and global time. This, combined
+  with the current community, must uniquely identify every packet.
+
+*** 17/01/2013 Boudewijn
+After several 'timeit' runs I obtained the following statistics:
+
+#+BEGIN_EXAMPLE
+0.003818                 # hash one byte
+0.005269 +0.001451 138%  # hash 300 bytes
+0.006416 +0.002598 168%  # one byte cache and N times 300 byte update
+0.004613 +0.000795 120%  # 300 bytes cache and N times 1 byte update
+0.006080 +0.002262 159%  # 1 + 300 bytes concat hash
+#+END_EXAMPLE
+
+In these statistics the 168\% represents prefix and 120\% represents
+postfix (partial cache). Obviously the postfix is faster because
+fewer bytes need to be hashed. However, the difference is only
+0.001803 seconds for $N=2000$. Taking into account that the faster
+option will require more memory, code, and decision making
+(i.e. choosing the subset of packets that we want to cache), this does
+not justify implementing a cache for every packet.
+
+However, hashing a simple string concatenation, i.e. using no cache at
+all, is slightly faster than using a cached prefix. While the
+difference is negligible we can use this strategy with a postfix.
+This will allow us to (1) cache often used packets for maximal
+performance or (2) implement something simple (concat) but allow the
+postfix cache to be added later. Hence, I prefer *postfix without
+caching*.
+
+As for what we hash, I prefer *binary packets*. We know that it is
+the slower of the two options, yet it is the only one that guarantees
+dissemination of all data, even when mistakes are made, such as one
+member creating multiple messages with the same global time. We've
+actually seen this problem occurring (it caused high amounts of
+additional traffic) in the effort community. Granted, this was a bug,
+but it allowed us to easily observe the problem and fix it. Hence it
+saved us a lot of development time.
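+
+For illustration, a minimal sketch of the *postfix without caching*
+strategy follows. The use of SHA-1, the five bit positions per
+packet, and the integer-as-bit-array representation are assumptions
+made for the example, not part of the decision above.
+
+#+BEGIN_SRC python
+import hashlib
+import struct
+
+def bloom_add(bitmask, num_bits, packet, postfix):
+    # Hash {binary-packet, postfix} and map the 20-byte SHA-1 digest
+    # onto five bit positions.  A per-packet cache could be added
+    # later without changing the resulting filter.
+    digest = hashlib.sha1(packet + postfix).digest()
+    for value in struct.unpack(">5I", digest):
+        bitmask |= 1 << (value % num_bits)
+    return bitmask
+#+END_SRC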
+
+** Protocol buffer version control
+One option to make protocol buffers easy to upgrade to new versions
+is to make most fields optional.
+
+#+LATEX: \end{shaded}
+
+* Introduction
+This document describes the Dispersy wire protocol version 2 and its
+intended behaviors. Version 2 is *not* backwards compatible. The
+most notable changes are the use of [[https://developers.google.com/protocol-buffers][google protocol buffers]] for the
+wire format, protection against IP spoofing, and session usage. A
+complete list of changes is available in the following sections.
+
+** 01/01/2013 version 2.0
+Changes compared to version 1.3 are:
+- Dispersy version, community version, and community identifier have
+  been replaced with a session identifier for temporary messages
+- new message dispersy-collection
+- new message dispersy-session-request
+- new message dispersy-session-response
+
+* Terminology
+- Temporary message: :: A control message that is not stored on disk.
+  Messages of this type are immediately discarded after they are
+  processed.
+- Persistent message: :: A message that contains information that must
+  be retained across sessions. Effectively this includes every
+  message that must be disseminated through the network.
+
+* Mechanisms
+** Global time
+Global time is a Lamport clock used to provide message ordering
+within a community. Using global time, every message can be uniquely
+identified using community, member, and global time.
+
+Dispersy stores global time values using, at most, 64 bits. Therefore
+there is a finite number of global time values available. To prevent
+malicious peers from quickly pushing the global time value to the
+point where none are left, peers will only accept messages with a
+global time that is within a locally evaluated limit. This limit is
+set to the median of the neighbors' global time values plus a
+predefined margin.
+
+Persistent messages that are not within the acceptable global time
+range are ignored.
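+
+As an illustration, the acceptance check could be sketched as follows
+in Python. The median-plus-margin rule is as described above; the
+margin value and the function names are assumptions.
+
+#+BEGIN_SRC python
+def acceptable_global_time(neighbor_global_times, margin=100):
+    # The limit is the median of the neighbors' global time values
+    # plus a predefined margin (100 here is illustrative).
+    values = sorted(neighbor_global_times)
+    return values[len(values) // 2] + margin
+
+def accept(message_global_time, neighbor_global_times):
+    # Persistent messages outside the acceptable range are ignored.
+    return (1 <= message_global_time
+            <= acceptable_global_time(neighbor_global_times))
+#+END_SRC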
+
+* <<>>
+Protocol Buffers allow messages to be defined, encoded, and finally
+decoded again. However, the way that we intend to use protocol
+buffers caused two issues to arise:
+1. Multiple different messages over the same communication channel
+   require a method to distinguish message types. The recommended
+   method, as described by Google in [[https://developers.google.com/protocol-buffers/docs/techniques#self-description][self-describing messages]], is to
+   encapsulate the message in a message that contains all possible
+   messages as optional fields;
+2. Adding one or more signatures to a message requires the entire
+   message (including the message type) to be serialized and passed to
+   the cryptography layer; the resulting signatures can only be placed
+   in a wrapping message.
+
+   This wrapping message must store the message in binary. Otherwise
+   changes to protocol buffers' internal implementation may cause one
+   client to produce a different, yet compatible, binary
+   representation. This would make it impossible to verify the
+   signature.
+
+Therefore, the Dispersy protocol will use two wrapping messages.
+/Descriptor/ will allow message types to be assigned, while /Message/
+will contain the raw message bytes and optional signatures.
+
+#+BEGIN_SRC protocol
+message Message {
+  extensions 1024 to max;
+  required bytes descriptor = 1;
+  repeated bytes signatures = 2;
+}
+#+END_SRC
+
+Descriptor limitations:
+- Every temporary or persistent message must have an optional field in
+  the Descriptor message. Community messages must use the field
+  values assigned to extensions.
+- A dispersy-message may only contain one message, i.e. only one
+  optional field may be set.
+
+#+BEGIN_SRC protocol
+message Descriptor {
+  enum Type {
+    // frequent temporary messages (field numbers below 16)
+    INTRODUCTIONREQUEST = 1;
+    INTRODUCTIONRESPONSE = 2;
+    SESSIONREQUEST = 3;
+    SESSIONRESPONSE = 4;
+    PUNCTUREREQUEST = 5;
+    PUNCTURERESPONSE = 6;
+    COLLECTION = 7;
+    IDENTITY = 8;
+
+    // infrequent temporary messages (field numbers above 15)
+    MISSINGIDENTITY = 16;
+    MISSINGSEQUENCE = 17;
+    MISSINGMESSAGE = 18;
+    MISSINGLASTMESSAGE = 19;
+    MISSINGPROOF = 20;
+    SIGNATUREREQUEST = 21;
+    SIGNATURERESPONSE = 22;
+
+    // persistent messages (field numbers above 63)
+    AUTHORIZE = 64;
+    REVOKE = 65;
+    UNDOOWN = 66;
+    UNDOOTHER = 67;
+    DYNAMICSETTINGS = 68;
+    DESTROYCOMMUNITY = 69;
+  }
+  extensions 1024 to max;
+  optional IntroductionRequest introduction_request = 1;
+  optional IntroductionResponse introduction_response = 2;
+  optional SessionRequest session_request = 3;
+  optional SessionResponse session_response = 4;
+  optional PunctureRequest puncture_request = 5;
+  optional PunctureResponse puncture_response = 6;
+  optional Collection collection = 7;
+  optional Identity identity = 8;
+
+  optional MissingIdentity missing_identity = 16;
+  optional MissingSequence missing_sequence = 17;
+  optional MissingMessage missing_message = 18;
+  optional MissingLastMessage missing_last_message = 19;
+  optional MissingProof missing_proof = 20;
+  optional SignatureRequest signature_request = 21;
+  optional SignatureResponse signature_response = 22;
+
+  optional Authorize authorize = 64;
+  optional Revoke revoke = 65;
+  optional UndoOwn undo_own = 66;
+  optional UndoOther undo_other = 67;
+  optional DynamicSettings dynamic_settings = 68;
+  optional DestroyCommunity destroy_community = 69;
+}
+#+END_SRC
+
+Note that field numbers higher than 15 are encoded using two bytes,
+whereas lower field numbers require one byte, see [[https://developers.google.com/protocol-buffers/docs/proto#simple][defining a
+message type]]. Hence the fields that are most common should use low
+field numbers.
+
+* <<>>
+A temporary message that contains one or more persistent Dispersy
+messages. It is required because persistent Dispersy messages do not
+have a session identifier.
+
+Collection limitations:
+- Collection.session is associated with the source address.
+- Collection.messages contains one or more messages.
+
+#+BEGIN_SRC protocol
+message Collection {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  repeated Message messages = 2;
+}
+#+END_SRC
+
+* <<>>
+A temporary message that contains the public key for a single member.
+This message is the response to a dispersy-missing-identity request.
+
+Identity limitations:
+- Identity.session is associated with the source address.
+- Identity.member must be no larger than 1024 bytes.
+- Identity.member must be a valid ECC public key.
+
+#+BEGIN_SRC protocol
+message Identity {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required bytes member = 2;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that grants permissions (permit, authorize,
+revoke, or undo) for one or more messages to one or more public keys.
+This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+(TODO: reference a document describing the permission system.)
+
+Authorize limitations:
+- Authorize.version is 1.
+- Authorize.community must be 20 bytes.
+- Authorize.member must be no larger than 1024 bytes.
+- Authorize.member must be a valid ECC public key.
+- Authorize.global_time must be one or higher and up to the local
+  acceptable global time range.
+- Authorize.sequence_number must follow already processed Authorize
+  messages from Authorize.member. Sequence numbers start at one. No
+  sequence number may be skipped.
+- Authorize.targets must contain one or more entries.
+- Authorize.targets[].member must be no larger than 1024 bytes.
+- Authorize.targets[].member must be a valid ECC public key.
+- Authorize.targets[].permissions must contain one or more entries.
+- Authorize.targets[].permissions[].message must represent a known
+  message in the community.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the Authorize.member.
+
+#+BEGIN_SRC protocol
+message Authorize {
+  enum Type {
+    PERMIT = 1;
+    AUTHORIZE = 2;
+    REVOKE = 3;
+    UNDO = 4;
+  }
+  message Permission {
+    required Descriptor.Type message = 1;
+    required Type permission = 2;
+  }
+  message Target {
+    required uint64 global_time = 1;
+    required bytes member = 2;
+    repeated Permission permissions = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  repeated Target targets = 6;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that revokes permissions (permit, authorize,
+revoke, or undo) for one or more messages from one or more public
+keys. This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+(TODO: reference a document describing the permission system.)
+
+Revoke limitations:
+- Revoke.version is 1.
+- Revoke.community must be 20 bytes.
+- Revoke.member must be no larger than 1024 bytes.
+- Revoke.member must be a valid ECC public key.
+- Revoke.global_time must be one or higher and up to the local
+  acceptable global time range.
+- Revoke.sequence_number must follow already processed Revoke messages
+  from Revoke.member. Sequence numbers start at one. No sequence
+  number may be skipped.
+- Revoke.targets must contain one or more entries.
+- Revoke.targets[].member must be no larger than 1024 bytes.
+- Revoke.targets[].member must be a valid ECC public key.
+- Revoke.targets[].permissions must contain one or more entries.
+- Revoke.targets[].permissions[].message must represent a known
+  message in the community.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the Revoke.member.
+
+#+BEGIN_SRC protocol
+message Revoke {
+  enum Type {
+    PERMIT = 1;
+    AUTHORIZE = 2;
+    REVOKE = 3;
+    UNDO = 4;
+  }
+  message Permission {
+    required Descriptor.Type message = 1;
+    required Type permission = 2;
+  }
+  message Target {
+    required uint64 global_time = 1;
+    required bytes member = 2;
+    repeated Permission permissions = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  repeated Target targets = 6;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that marks an older message with an undone flag.
+This allows a member to undo her own previously created messages.
+This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+Undo messages can only be created for messages that allow being
+undone. (TODO: reference a document describing the permission
+system.)
+
+The dispersy-undo-own message contains a target global time which,
+together with the community identifier and the member identifier,
+uniquely identifies the message that is being undone. This target
+message must allow being undone.
+
+To impose a limit on the number of dispersy-undo-own messages that can
+be created, a dispersy-undo-own message may only be accepted when the
+message that it points to is available and no dispersy-undo-own has
+yet been created for it.
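+
+The acceptance rule can be illustrated with a short Python sketch;
+the message store interface used here is hypothetical:
+
+#+BEGIN_SRC python
+def allow_undo_own(store, community, member, target_global_time):
+    # Accept a dispersy-undo-own only when the target message is
+    # available and has not been undone before; this bounds the
+    # number of undo-own messages a member can create.
+    target = store.get(community, member, target_global_time)
+    return target is not None and not target.undone
+#+END_SRC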
+
+UndoOwn limitations:
+- UndoOwn.version is 1.
+- UndoOwn.community must be 20 bytes.
+- UndoOwn.member must be no larger than 1024 bytes.
+- UndoOwn.member must be a valid ECC public key.
+- UndoOwn.global_time must be one or higher and up to the local
+  acceptable global time range.
+- UndoOwn.sequence_number must follow already processed UndoOwn
+  messages from UndoOwn.member. Sequence numbers start at one. No
+  sequence number may be skipped.
+- UndoOwn.target_global_time must be one or higher and smaller than
+  UndoOwn.global_time.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the UndoOwn.member.
+
+#+BEGIN_SRC protocol
+message UndoOwn {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required uint64 target_global_time = 6;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that marks an older message with an undone flag.
+This allows a member to undo a message previously created by someone
+else. This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+Undo messages can only be created for messages that allow being
+undone. (TODO: reference a document describing the permission
+system.)
+
+The dispersy-undo-other message contains a target public key and
+target global time which, together with the community identifier,
+uniquely identifies the message that is being undone. This target
+message must allow being undone.
+
+A dispersy-undo-other message may only be accepted when the message
+that it points to is available. In contrast to a dispersy-undo-own
+message, it is allowed to have multiple dispersy-undo-other messages
+targeting the same message. To impose a limit on the number of
+dispersy-undo-other messages that can be created, a member must have
+the undo permission for the target message.
+
+UndoOther limitations:
+- UndoOther.version is 1.
+- UndoOther.community must be 20 bytes.
+- UndoOther.member must be no larger than 1024 bytes.
+- UndoOther.member must be a valid ECC public key.
+- UndoOther.global_time must be one or higher and up to the local
+  acceptable global time range.
+- UndoOther.sequence_number must follow already processed UndoOther
+  messages from UndoOther.member. Sequence numbers start at one. No
+  sequence number may be skipped.
+- UndoOther.target_global_time must be one or higher and smaller than
+  UndoOther.global_time.
+- UndoOther.target_member must be no larger than 1024 bytes.
+- UndoOther.target_member must be a valid ECC public key.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the UndoOther.member.
+
+#+BEGIN_SRC protocol
+message UndoOther {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required uint64 target_global_time = 6;
+  required bytes target_member = 7;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that changes one or more message policies. When
+a message has two or more policies of a specific type defined,
+e.g. both PublicResolution and LinearResolution, the
+dispersy-dynamic-settings message allows switching between them. This
+message must be wrapped in a dispersy-collection and is a response to
+a dispersy-introduction-request or dispersy-missing-proof.
+
+The policy change is applied from the next global time increment after
+the global time given by the dispersy-dynamic-settings message.
+
+DynamicSettings limitations:
+- DynamicSettings.version is 1.
+- DynamicSettings.community must be 20 bytes.
+- DynamicSettings.member must be no larger than 1024 bytes.
+- DynamicSettings.member must be a valid ECC public key.
+- DynamicSettings.global_time must be one or higher and up to the
+  local acceptable global time range.
+- DynamicSettings.sequence_number must follow already processed
+  DynamicSettings messages from DynamicSettings.member.
+  Sequence numbers start at one. No sequence number may be skipped.
+- DynamicSettings.target_message must represent a known message in the
+  community.
+- DynamicSettings.target_policy must be a policy that has dynamic
+  settings enabled.
+- DynamicSettings.target_index must be an existing index in the
+  available dynamic settings.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the DynamicSettings.member.
+
+#+BEGIN_SRC protocol
+message DynamicSettings {
+  enum Policy {
+    AUTHENTICATION = 1;
+    RESOLUTION = 2;
+    DISTRIBUTION = 3;
+    DESTINATION = 4;
+    PAYLOAD = 5;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required Descriptor.Type target_message = 6;
+  required Policy target_policy = 7;
+  required uint32 target_index = 8;
+}
+#+END_SRC
+
+* <<>>
+A persistent message that forces an overlay to go offline. An overlay
+can be either soft killed or hard killed. This message must be
+wrapped in a dispersy-collection and is a response to a
+dispersy-introduction-request (for soft kill) or a response to any
+temporary message (for hard kill).
+
+A soft-killed overlay is frozen. All existing persistent messages
+with a global time lower than or equal to
+DestroyCommunity.target_global_time will be retained, but all other
+persistent messages are undone (where possible) and removed. New
+persistent messages with a global time lower than or equal to
+DestroyCommunity.target_global_time are accepted and processed, but
+all other persistent messages are ignored. Temporary messages are not
+affected.
+
+A hard-killed overlay is destroyed. All persistent messages will be
+removed without undo, except the dispersy-destroy-community message
+and the authorize chain that is required to verify its validity. New
+persistent messages are ignored, and temporary messages are answered
+with the dispersy-destroy-community message and the authorize chain
+that is required to verify its validity.
+
+A dispersy-destroy-community message cannot be undone. Hence it is
+very important to ensure that only trusted peers have the permission
+to create this message.
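+
+The difference between the two degrees can be summarized in a Python
+sketch; the 'store' interface is hypothetical and only mirrors the
+semantics described above:
+
+#+BEGIN_SRC python
+def apply_destroy_community(store, degree, target_global_time):
+    if degree == "SOFT":
+        # Freeze: retain messages up to and including the target
+        # global time, undo (where possible) and remove the rest.
+        for message in store.messages_after(target_global_time):
+            store.undo_and_remove(message)
+    else:
+        # Hard kill: remove everything except the destroy message
+        # and the authorize chain proving its validity.
+        store.remove_all_except_destroy_and_proof()
+#+END_SRC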
+
+DestroyCommunity limitations:
+- DestroyCommunity.version is 1.
+- DestroyCommunity.community must be 20 bytes.
+- DestroyCommunity.member must be no larger than 1024 bytes.
+- DestroyCommunity.member must be a valid ECC public key.
+- DestroyCommunity.global_time must be one or higher and up to the
+  local acceptable global time range.
+- Cannot be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the DestroyCommunity.member.
+
+#+BEGIN_SRC protocol
+message DestroyCommunity {
+  enum Degree {
+    SOFT = 1;
+    HARD = 2;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required Degree degree = 5;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request a signature for an included message
+from another member. The included message may be modified before
+adding the signature. The receiver may respond with a
+dispersy-signature-response message.
+
+SignatureRequest limitations:
+- SignatureRequest.session is associated with the source address.
+- SignatureRequest.request is a random number.
+- SignatureRequest.message.signatures may not be set.
+
+#+BEGIN_SRC protocol
+message SignatureRequest {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 request = 2;
+  required Message message = 3;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to respond to a signature request from another
+member. The included message may be different from the message given
+in the associated request.
+
+SignatureResponse limitations:
+- SignatureResponse.session is associated with the source address.
+- SignatureResponse.request is SignatureRequest.request.
+- SignatureResponse.message.signatures must contain one signature.
+
+#+BEGIN_SRC protocol
+message SignatureResponse {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 request = 2;
+  required Message message = 3;
+}
+#+END_SRC
+
+
+
+# The dispersy-introduction-request message is not disseminated through
+# bloom filter synchronization. Instead it is periodically created to
+# maintain a semi-random overlay.
+
+# - supported versions in dispersy version, community version pairs
+# - random number
+# - possibly suggested cipher suites
+# - possibly suggested compression methods
+# - possibly session identifier
+
+# ** Dispersy 1: no:sessions, no:ip-spoofing, yes:public-key, yes:signature (steps: 5/9)
+# 1. A -> B introduction-req [Ahash, Arandom, Baddr, Alan, Awan, Atype, Abloom, Asig]
+# 2. (first-contact) B -> A missing-key [Ahash]
+# 3. (first-contact) A -> B key [Akey]
+# 4. B -> C puncture-req [Arandom, Alan, Awan]
+# 5. B -> A introduction-resp [Bhash, Arandom, Aaddr, Blan, Bwan, Btype, Clan, Cwan, Bsig]
+# 6. B -> A missing-messages
+# 7. (first-contact) A -> B missing-key [Bhash]
+# 8. (first-contact) B -> A key [Bkey]
+# 9. C -> A puncture [Chash, Arandom, Clan, Cwan, Csig]
+
+# This strategy *will not* prevent M from spoofing A's address to
+# deliver an introduction-req to B. This attack would cause B to
+# respond to A, possibly with the maximum allowed bandwidth,
+# resulting in a DOS attack.
+
+# *** IP spoofing attack
+# 1. M -> B introduction-req [Ahash, Arandom, Baddr, Alan, Awan, Atype, Abloom, Asig]
+# 2. All other steps follow the original
+
+# This can be used as a DOS attack, where M is the attacker who pretends
+# (spoofs) to be A and where A and B are the victims.
+
+# ** Dispersy 2 simple a: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [Arandom, Brandom, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [Arandom, Brandom, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [Brandom]
+# 4. B -> C puncture-req [Crandom, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [Arandom, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [Arandom, missing-messages]
+# 7. C -> A puncture [Prandom, Clan, Cwan, Ctype]
+
+# This strategy *will* prevent M from spoofing A's address to deliver an
+# introduction-req to B because A will only accept packets from
+# Blan/Bwan containing Arandom, where Arandom is a random number
+# generated by A.
+
+# This strategy *will not* prevent M, after it intercepts Brandom, from
+# spoofing A's address to deliver an introduction-req to B, resulting
+# in a DOS attack.
+
+# This strategy *will not* prevent man in the middle attacks. However,
+# there is no proof that any non-centralized system can prevent such an
+# attack.
+
+# *** Discussion
+# Steps 2 and 3 can be extended with Bkey and Akey, respectively. We
+# can also go further and add Bsig and Asig, although this cannot
+# prevent any attacks.
+
+# #+LATEX: \begin{shaded}
+# ** Dispersy 2 simple b: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [ABshared, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [Brandom, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [Arandom]
+# 4. B -> C puncture-req [BCshared, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [ABshared, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [ABshared, missing-messages]
+# 7. C -> A puncture [ACshared, Prandom, Clan, Cwan, Ctype]
+
+# Having consensus on a shared session identifier reduces the complexity
+# and memory consumption, as Arandom and Brandom are only required during
+# steps 2 and 3.
+
+# This strategy *will* prevent M from spoofing A's address to deliver an
+# introduction-req to B because A will only accept packets from
+# Blan/Bwan containing ABshared, where ABshared = (Arandom + Brandom)
+# mod 2^{32}.
+
+# This strategy *will not* prevent M, after it intercepts ABshared, from
+# spoofing A's address to deliver an introduction-req to B, resulting
+# in a DOS attack.
+
+# This strategy *will not* prevent man in the middle attacks. However,
+# there is no proof that any non-centralized system can prevent such an
+# attack.
+
+# *** Discussion
+# Steps 2 and 3 can be extended with Bkey and Akey, respectively. We
+# can also go further and add Bsig and Asig, although this cannot
+# prevent any attacks.
+# #+LATEX: \end{shaded}
+
+# ** Dispersy 2 diffie-hellman: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [ABshared, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [DH{AB}p, DH{AB}q, DH{AB}b*, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [DH{AB}a*]
+# 4. B -> C puncture-req [BCshared, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [ABshared, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [ABshared, missing-messages]
+# 7. C -> A puncture [ACshared, Prandom, Clan, Cwan, Ctype]
+
+# Discussion: steps 2 and 3 can be extended with Bkey and Akey,
+# respectively. We can also go further and add Bsig and Asig, although
+# this cannot prevent any attacks.
+
+# ** Stuffs
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | 4     |       | unsigned long      | session identifier          |
+# |   | 1     | fb    | unsigned char      | message identifier          |
+# |   | 4     |       | unsigned long      | random number A             |
+# |   | 20    |       | char[]             | community identifier        |
+# |   | 1     |       | unsigned char      | version pair count          |
+# | + |       |       | unsigned char      | supported dispersy version  |
+# | + |       |       | unsigned char      | supported community version |
+# |   | 8     |       | unsigned long long | global time                 |
+# |   | 6     |       | char[]             | destination address         |
+# |   | 6     |       | char[]             | source LAN address          |
+# |   | 6     |       | char[]             | source WAN address          |
+# |---+-------+-------+--------------------+-----------------------------|
+
+# |---+-------+-------+--------------------+----------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+# |---+-------+-------+--------------------+----------------------|
+# |   | 4     |       | unsigned long      | session identifier   |
+# |   | 1     | fb    | unsigned char      | message identifier   |
+# |   | 4     |       | unsigned long      | random number B      |
+# |   | 1     |       | unsigned char      | chosen version       |
+# |   | 20    |       | char[]             | community identifier |
+# |   | 20    |       | char[]             | member identifier    |
+# |   | 8     |       | unsigned long long | global time          |
+# |   | 6     |       | char[]             | destination address  |
+# |   | 6     |       | char[]             | source LAN address   |
+# |   | 6     |       | char[]             | source WAN address   |
+# |---+-------+-------+--------------------+----------------------|
+
+# |---+-------+-------+--------------------+-------------------------------------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION                                      |
+# |---+-------+-------+--------------------+-------------------------------------------------|
+# |   | 4     |       | unsigned long      | session identifier                               |
+# |   | 1     | fb    | unsigned char      | message identifier                               |
+# |   | 4     |       | unsigned long      | (random number A + random number B) modulo 2^32  |
+# |   | 20    |       | char[]             | member identifier                                |
+# |---+-------+-------+--------------------+-------------------------------------------------|
+
+
+# |---+-------+-------+--------------------+-----------------------------|
+# | + | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | 4     |       | unsigned long      | session identifier          |
+# |   | 1     | f6    | unsigned char      | message identifier          |
+# |   | 1     | 00    | unsigned char      | message version             |
+# |   | 20    |       | char[]             | community identifier        |
+# |   | 20    |       | char[]             | member identifier           |
+# |   | 8     |       | unsigned long long | global time                 |
+# |   | 6     |       | char[]             | destination address         |
+# |   | 6     |       | char[]             | source LAN address          |
+# |   | 6     |       | char[]             | source WAN address          |
+# |   | 4     |       | unsigned long      | option bits                 |
+# |   | 2     |       | unsigned short     | request identifier          |
+# | + | 8     |       | unsigned long long | sync global time low        |
+# | + | 8     |       | unsigned long long | sync global time high       |
+# | + | 2     |       | unsigned short     | sync modulo                 |
+# | + | 2     |       | unsigned short     | sync offset                 |
+# | + | 1     |       | unsigned char      | sync bloom filter functions |
+# | + | 2     |       | unsigned short     | sync bloom filter size      |
+# | + | 1     |       | unsigned char      | sync bloom filter prefix    |
+# | + |       |       | char[]             | sync bloom filter           |
+# |   |       |       | char[]             | signature                   |
+# |---+-------+-------+--------------------+-----------------------------|
+
+# The option bits are defined as follows:
+# - 0000.0001 request an introduction
+# - 0000.0010 request contains optional sync bloom filter
+# - 0000.0100 source is behind a tunnel
+# - 0000.1000 source connection type
+# - 1000.0000 source has a public address
+# - 1100.0000 source is behind a symmetric NAT
+
+# The dispersy-introduction-request message contains optional elements.
+# When the 'request contains optional sync bloom filter' bit is set, all
+# of the sync fields must be given. In this case the destination peer
+# should respond with messages that are within the set defined by sync
+# global time low, sync global time high, sync modulo, and sync offset
+# and which are not in the sync bloom filter. However, the destination
+# peer is allowed to limit the number of messages it responds with.
+# Sync bloom filter size is given in bits and corresponds to the length
+# of the sync bloom filter. Responses should take into account the
+# message priority. Otherwise ordering is by either ascending or
+# descending global time.
+
+# ** version 1.1
+# The tunnel bit was introduced.
+
+# ** possible future changes
+# There is no feature that requires cryptography on this message. Hence
+# it may be removed to reduce message size and processing cost.
+
+# There is not enough version information in this message. More should
+# be added to allow the source and destination peers to determine the
+# optimal wire protocol to use. Having a three-way handshake would
+# allow consensus between peers on what version to use.
+
+# Sometimes the source peer may want to receive fewer sync responses
+# (i.e. to ensure low CPU usage); adding a max bandwidth value would
+# allow the number of returned packets to be limited.
+
+# The walker should be changed into a three-way handshake to secure the
+# protocol against IP spoofing attacks.
+
+
+
+* <<>>
+A temporary message to contact a peer that we may or may not have
+visited already. This message has two tasks:
+1. To maintain a semi-random overlay by obtaining one possibly locally
+   unknown peer (TODO: reference a document describing the semi-random
+   walker);
+2. To obtain eventual consistency by obtaining zero or more unknown
+   persistent messages (TODO: reference a document describing the
+   bloom filter synchronization).
+
+#+LATEX: \begin{shaded}
+The dispersy-introduction-request, dispersy-introduction-response,
+dispersy-session-request, dispersy-session-response,
+[[dispersy-puncture-request]], and dispersy-puncture messages are used
+together. The following schema describes the interaction between
+peers A, B, and C for a typical walk, where we call A the initiator,
+B the inviter, and C the invitee.
+
+1. A -> B dispersy-introduction-request \\
+   \{shared_{AB}, identifier_{walk}, address_{B}, LAN_{A}, WAN_{A}, bloom_{A}\}
+
+2. B -> A dispersy-session-request (new session only) \\
+   \{random_{B}, identifier_{walk}, address_{A}, LAN_{B}, WAN_{B}\}
+
+3. A -> B dispersy-session-response (new session only) \\
+   \{random_{A}, identifier_{walk}\}
+
+4. B -> C [[dispersy-puncture-request]] \\
+   \{shared_{BC}, identifier_{walk}, LAN_{A}, WAN_{A}\}
+
+5. B -> A dispersy-introduction-response \\
+   \{shared_{AB}, identifier_{walk}, LAN_{C}, WAN_{C}\}
+
+6. B -> A dispersy-collection \\
+   \{shared_{AB}, missing messages\}
+
+7. C -> A dispersy-puncture \\
+   \{shared_{AC}, identifier_{walk}, LAN_{C}, WAN_{C}\}
+#+LATEX: \end{shaded}
+
+IntroductionRequest limitations:
+- IntroductionRequest.session is associated with the source address,
+  or zero to initiate a new session.
+- IntroductionRequest.community must be 20 bytes.
+- IntroductionRequest.global_time must be one or higher and up to the
+  local acceptable global time range.
+- IntroductionRequest.random must be a non-zero random value used for
+  PunctureRequest.random and Puncture.random.
+- IntroductionRequest.destination is the IPv4 address where the
+  IntroductionRequest is sent.
+- IntroductionRequest.source_lan is the sender's IPv4 LAN address.
+- IntroductionRequest.source_wan is the sender's IPv4 WAN address.
+- IntroductionRequest.connection_type is the sender's connection type.
+  The connection_type is only given when it is known.
+- IntroductionRequest.synchronization contains a bloomfilter
+  representation of a subset of the sender's known persistent
+  messages. It is only given when the sender wants to obtain new
+  persistent messages.
+
+#+BEGIN_SRC protocol
+message IntroductionRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  message Synchronization {
+    optional uint64 low = 1 [default = 1];
+    optional uint64 high = 2 [default = 1];
+    optional uint32 modulo = 3 [default = 1];
+    required uint64 offset = 4;
+    required bytes bloomfilter = 5;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 2;
+  required bytes community = 3;
+  required uint64 global_time = 4;
+  required Address destination = 5;
+  repeated Address sources = 6;
+  optional Synchronization synchronization = 9;
+}
+#+END_SRC
+
+** TODO add optional tunnel flag
+** TODO add optional bootstrap flag
+
+* <<>>
+A temporary message to negotiate a session identifier. This message
+is a response to a dispersy-introduction-request when the session is
+zero or unknown. TODO: reference a document describing the
+semi-random walker.
+
+Negotiating a session identifier will prevent a malicious peer M from
+spoofing the address of peer A to deliver a
+dispersy-introduction-request to peer B, because A will only accept
+packets from LAN_{B} or WAN_{B} containing random_{A}, where
+random_{A} is a random number generated by A. This will prevent DOS
+attacks through IP spoofing.
+
+SessionRequest limitations:
+- TODO
+
+#+BEGIN_SRC protocol
+message SessionRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  repeated uint32 version_blacklist = 3;
+  required uint32 walk = 4;
+  required uint32 random_b = 5;
+  required Address destination = 6;
+  repeated Address source = 7;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to negotiate a session identifier. This message
+is a response to a dispersy-session-request. TODO: reference a
+document describing the semi-random walker.
+
+Once this message has been received both sides can compute the session
+identifier $session = random_{A} + random_{B} ~(mod ~2^{32})$. This
+session identifier is present in all temporary messages, except for
+dispersy-session-request and dispersy-session-response.
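+
+Restated as code, both peers derive the same identifier once the
+exchange completes; a trivial sketch of the formula above:
+
+#+BEGIN_SRC python
+def session_identifier(random_a, random_b):
+    # session = random_A + random_B (mod 2^32); both peers compute
+    # this after the session-response has been received.
+    return (random_a + random_b) % 2 ** 32
+#+END_SRC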
+
+SessionResponse limitations:
+- SessionResponse.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message SessionResponse {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required uint32 walk = 4;
+  required uint32 random_a = 5;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to introduce a, possibly new, peer to the
+receiving peer. This message is a response to a
+dispersy-introduction-request (when a session exists) or a
+dispersy-session-response (when a session was negotiated). TODO:
+reference a document describing the semi-random walker.
+
+IntroductionResponse limitations:
+- IntroductionResponse.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message IntroductionResponse {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  required uint64 global_time = 5;
+  repeated Address invitee = 6;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request the destination peer to puncture a hole
+in its NAT. This message is a consequence of introducing two peers
+after receiving a dispersy-introduction-request. TODO: reference a
+document describing the semi-random walker.
+
+PunctureRequest limitations:
+- PunctureRequest.walk is IntroductionRequest.walk.
+- PunctureRequest.initiator is one or more addresses corresponding to
+  a single peer. These addresses may be modified to the best of the
+  sender's knowledge.
+- TODO
+
+#+BEGIN_SRC protocol
+message PunctureRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  required uint64 global_time = 5;
+  repeated Address initiator = 6;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to puncture a hole in the sender's NAT. This
+message is the consequence of being introduced to a peer after
+receiving a [[dispersy-puncture-request]]. TODO: reference a document
+describing the semi-random walker.
+
+Puncture limitations:
+- Puncture.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message Puncture {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  repeated Address source = 5;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request the public keys associated with a
+member identifier. Receiving this request should result in a
+dispersy-collection message containing one or more dispersy-identity
+messages.
+
+DispersyMissingIdentity limitations:
+- DispersyMissingIdentity.session must be associated with the source
+  address.
+- DispersyMissingIdentity.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingIdentity.member must be no larger than 1024 bytes.
+- DispersyMissingIdentity.member must be a valid ECC public key.
+
+TODO: dispersy-collection should be renamed into something along the
+lines of dispersy-bulk. This message will contain additional
+information to facilitate a bulk transfer; for this message it will
+likely not be used, but it will be used for the bulk bloomfilter sync.
+
+#+BEGIN_SRC protocol
+message DispersyMissingIdentity {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request messages in a sequence number range.
+Receiving this request should result in a dispersy-collection message
+containing one or more messages matching the request.
+
+DispersyMissingSequence limitations:
+- DispersyMissingSequence.session must be associated with the source
+  address.
+- DispersyMissingSequence.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingSequence.member must be no larger than 1024 bytes.
+- DispersyMissingSequence.member must be a valid ECC public key.
+- DispersyMissingSequence.descriptor must be the persistent message
+  identifier.
+- DispersyMissingSequence.sequence_low must be the first sequence
+  number that is being requested.
+- DispersyMissingSequence.sequence_high must be the last sequence
+  number that is being requested.
+
+#+BEGIN_SRC protocol
+message DispersyMissingSequence {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  required Descriptor.Type descriptor = 4;
+  required uint32 sequence_low = 5;
+  required uint32 sequence_high = 6;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request one or more messages identified by a
+community identifier, member identifier, and one or more global times.
+This request should result in a dispersy-collection message containing
+one or more messages matching the request.
+
+DispersyMissingMessage limitations:
+- DispersyMissingMessage.session must be associated with the source
+  address.
+- DispersyMissingMessage.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingMessage.member must be no larger than 1024 bytes.
+- DispersyMissingMessage.member must be a valid ECC public key.
+- DispersyMissingMessage.global_times must be one or more global_time
+  values.
+
+#+BEGIN_SRC protocol
+message DispersyMissingMessage {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  repeated uint64 global_times = 4;
+}
+#+END_SRC
+
+* <<>>
+A temporary message to request one or more of the most recent messages
+identified by a community identifier and member. This request should
+result in a dispersy-collection message containing one or more
+messages matching the request.
+
+DispersyMissingLastMessage limitations:
+- DispersyMissingLastMessage.session must be associated with the
+  source address.
+- DispersyMissingLastMessage.random must be a non-zero random value
+  used to identify the response dispersy-collection.
+- DispersyMissingLastMessage.member must be no larger than 1024 bytes.
+- DispersyMissingLastMessage.member must be a valid ECC public key.
+- DispersyMissingLastMessage.descriptor must be the persistent message
+  identifier.
+
+#+BEGIN_SRC protocol
+message DispersyMissingLastMessage {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  required Descriptor.Type descriptor = 4;
+}
+#+END_SRC
+
+* <<dispersy-missing-proof>> (#253)
+A temporary message to request one or more persistent messages from
+the permission tree that prove that a given message is allowed.  This
+request should result in a dispersy-collection message containing one
+or more dispersy-authorize and/or dispersy-revoke messages.  (TODO:
+reference a document describing the permission system.)
+
+DispersyMissingProof limitations:
+- DispersyMissingProof.session must be associated with the source
+  address.
+- DispersyMissingProof.random must be a non-zero random value used to
+  identify the response dispersy-collection.
+- DispersyMissingProof.member must be no larger than 1024 bytes.
+- DispersyMissingProof.member must be a valid ECC public key.
+- DispersyMissingProof.global_times must be one or more global_time
+  values.
+
+#+BEGIN_SRC protocol
+message DispersyMissingProof {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  repeated uint64 global_times = 4;
+}
+#+END_SRC
diff -Nru tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.x tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.x
--- tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.x	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/doc/wireprotocol_2.x	2013-08-07 13:06:57.000000000 +0000
@@ -0,0 +1,1366 @@
+#+TITLE: Dispersy wire protocol\\version 2.0
+#+OPTIONS: toc:nil ^:{} author:nil
+#+LATEX_HEADER: \usepackage{enumitem}
+#+LATEX_HEADER: \setlist{nolistsep}
+#+LaTeX_HEADER: \usepackage{framed}
+#+LaTeX_HEADER: \usepackage{xcolor}
+#+LaTeX_HEADER: \definecolor{shadecolor}{gray}{.9}
+
+# This document uses orgmode (http://orgmode.org) formatting.
+
+#+LATEX: \begin{shaded}
+* Choices and discussions
+** Using sessions
+Dispersy 1.x did not have a session.  This meant that every message
+required basic information such as version and community identifier.
+By negotiating a session identifier during the 'walking' process we no
+longer need to include these values in every message.
+
+Available options are:
+- Sessions: :: All walker messages will include version and community
+     identification and result in a session identifier (per community
+     per peer pair).  All non-walker temporary messages use this
+     session identifier.
+- Sessionless: :: All temporary messages will include version and
+     community identification.  Response version and community are
+     chosen independently from previous messages.  Obviously no
+     session identifier is negotiated by the walker.
+- Hybrid: :: Protocol Buffers support optional fields in messages.
+     This allows us to optionally negotiate a session identifier (use
+     sessions).  If no session is available all non-walker temporary
+     messages must include optional version and community
+     identification (sessionless).
+
+*** 09/01/2013 Boudewijn
+I prefer to use *sessions*.  There is a lot of session-specific
+information available (version, community identity, connection type,
+tunnel, encryption, compression).  All of this information can be
+negotiated once and will reduce overhead in the non-walker temporary
+messages.
+
+Sessions also make sense from a security perspective, where the
+session identifier represents a secure number that only the two
+communicating parties know.
+However, properly doing this requires some crypto at the expense of
+CPU cycles.  While a crypto handshake has a very low priority, it can
+be included easily when sessions are used.
+
+I am against using *hybrid*.  While this is the most flexible option
+it will also require the most code to create and maintain.  I consider
+this bloatware.
+
+*** 11/01/2013 Elric
+I'm OK with *sessions*; it does not look as if it would be too hard to
+maintain and it will allow us to cut down on bandwidth usage.
+
+*** 10/01/2013 Decision
+Currently we use *sessions*.  However, this is subject to change until
+more opinions are received.
+
+** Consensus on a real time clock
+We can add the local real time clock to every temporary message that
+also contains the local global time.  This can allow peers to estimate
+the average real time in the overlay.
+
+Having this estimate also allows us to assign real times to received
+messages without relying on the local time that potentially malicious
+peers provide.
+
+This can be useful for the effort overlay (i.e. for consensus on the
+current cycle) and the channel overlay (i.e. for consensus on the
+creation time of a post, torrent, etc.).
+
+Available options are:
+- Rely on people: :: We can assume that the local time of all
+     computers is set correctly, either by the user or by an OS
+     provided mechanism.
+- Use time server: :: Synchronizing time is a well-known problem.  A
+     well-known solution is for each peer to contact one of many
+     available time servers periodically to obtain the current time.
+- Use Dispersy: :: Use a consensus mechanism in Dispersy by adding the
+     local real time to messages containing global times.
+
+*** 11/01/2013 Boudewijn
+Relying on people to keep their local time up to date is asking for
+problems.  Using a time server is the simplest solution, but we would
+need to perform this check periodically or at startup.  Using Dispersy
+is distributed and hence more complicated.
+
+Using a time server feels like cheating.  Let me explain by comparing
+it with the global time.  Currently each peer collects the global time
+values from other peers around it.  This results in every peer having
+more or less the same global time.  We could just as well use the
+bootstrap servers to aggregate global times from peers.  Each peer
+could then simply ask the bootstrap servers for the current global
+time periodically.  Yet, we choose to let every peer find the average
+global time in a distributed manner.  I would argue that we should
+also let peers compute the average real time in a distributed manner
+for the same reasons.  Hence, I prefer to *use Dispersy*.
+
+I do believe that it will not be possible to prove that a message was
+created at a certain time.  However, I suspect that we -will- be able
+to prove that a message was created in a certain time range.  Proving
+this may, in itself, be an interesting paper topic.
+
+*** 11/01/2013 Elric
+I agree on using consensus to decide on the common real time.  Of
+course, we should remove unrealistic values from the sample before
+averaging it, and find a proper way to check the result against the
+system's local time to validate it.
+
+*** 05/02/2013 Johan
+For financial reasons it is not possible to spend time on this.  The
+least-effort solution should be used.  I choose *use time server*.
+
+*** 10/01/2013 Decision
+We should use a time server.  It is the responsibility of the
+community to contact one.  Hence, from Dispersy's perspective we
+will *rely on the user*, or the community programmer.
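+
+Since contacting a time server is left to the community code, a
+minimal sketch of such a check is given below.  This is an
+illustration only, not part of the wire protocol; it assumes the
+third-party =ntplib= package and the public =pool.ntp.org= servers.
+
+#+BEGIN_SRC python
+import ntplib
+
+def estimate_clock_offset(server="pool.ntp.org"):
+    """Return the offset, in seconds, between the local clock and an
+    NTP time server.  A community could call this periodically, or at
+    startup, and add the offset to its local timestamps."""
+    client = ntplib.NTPClient()
+    response = client.request(server, version=3)
+    return response.offset
+#+END_SRC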
+
+** Announcing the local global time
+Dispersy uses a [[http://dl.acm.org/citation.cfm?id=359563][Lamport clock]] to ensure that we retain the partial
+ordering of all persistent messages in the overlay, i.e. our global
+time.
+
+Available options are:
+- Minimal announce: :: We announce our local global time only with the
+     walker messages.
+- Maximal announce: :: We announce our local global time in every
+     temporary message.
+- Optional announce: :: We can add an optional global time field in
+     every temporary message.
+
+*** 10/01/2013 Boudewijn
+The walker messages, most likely, trigger other temporary
+missing-something messages.  As such, including our local global time
+in those missing-something messages would not improve the performance
+of the Lamport clock.  Hence, I prefer to use *minimal announce*.
+
+*** 10/01/2013 Decision
+Currently we use *minimal announce*.  To be precise, only the
+dispersy-introduction-request and dispersy-introduction-response
+messages are used to announce the local global time to the
+neighborhood.  However, this is subject to change until more opinions
+are received.
+
+** Encoding signatures into a message
+The cryptographic signatures must be transferred as part of a message
+in some way.
+
+Available options are:
+- Concat: :: We add the signature directly behind the serialized
+     message.  This requires us to also add a message length field
+     because otherwise we cannot deserialize it again (protocol
+     buffers will assume the signature is an optional field in the
+     message).
+- Optional signature field: :: We add an optional signature field into
+     the Message container.  We must serialize the submessage, create
+     the signature from that, and serialize the container message.
+- SignedMessage: :: We distinguish between Message and SignedMessage
+     containers.  We would still need to serialize both the submessage
+     and the container message.
+
+*** 14/01/2013 Boudewijn
+Adding an *optional signature field* seems the simplest by far.  It
+also results in only one container message instead of two.  One
+disadvantage that I foresee is that we will slowly start to extend the
+Message container with optional fields, and that is definitely not my
+intention.
+
+However, there is one issue that remains.  The Message container (not
+the submessage) contains the message type, hence the signature would
+-not- include the message type.  Therefore, a small change must be the
+inclusion of another container message that has two fields: binary
+message and binary signature.  We explicitly use the binary
+representation of the message because another machine may serialize
+the message differently (OS, protocol buffer version, etc.) and we
+cannot afford this to invalidate the signature.
+
+The concat option is also easy to do; however, I dislike spending a
+few bytes on the message length and concatenating the length, message,
+and signature together.  Messing with the bytes should all be done by
+protocol buffers.
+
+*** 14/01/2013 Decision
+Currently we use the *optional signature field* that is modified with
+the additional message wrapper, see dispersy-message.  However, this
+is subject to change until more opinions are received.
+
+** Synchronization bloom filters
+In Dispersy 1 we create the bloom filter by hashing {prefix,
+binary-packet}.  There are two choices to make:
+
+First choice.  Using either prefix or postfix:
+- Prefix: :: Allows you to cache the hashed prefix.  Requires: one
+     cache and N+1 hashes to build one N sized bloom filter.
+- Postfix (partial cache): :: Allows you to cache each packet.  Every
+     postfix must be hashed.  Requires: M hashes to build M caches
+     once.  And N hashes to build one N sized bloom filter.
+- Postfix (full cache): :: Allows you to cache each packet + postfix
+     combination.  Requires: M hashes to build M caches once.  Cache
+     storage is potentially cheaper than the partial cache.
+
+Second choice.  How do we represent the message:
+- Binary packet: :: The simplest method is to hash the binary packet.
+     The packet is unique, even if the data encoded in the packet
+     results in duplicate data.
+- Identifying information only: :: The most minimalistic method is to
+     hash only the member identifier and global time.  This, combined
+     with the current community, must uniquely identify every packet.
+
+*** 17/01/2013 Boudewijn
+After several 'timeit' runs I obtained the following statistics:
+
+#+BEGIN_EXAMPLE
+0.003818                # hash one byte
+0.005269 +0.001451 138% # hash 300 bytes
+0.006416 +0.002598 168% # one byte cache and N times 300 byte update
+0.004613 +0.000795 120% # 300 bytes cache and N times 1 byte update
+0.006080 +0.002262 159% # 1 + 300 bytes concat hash
+#+END_EXAMPLE
+
+In these statistics the 168\% represents prefix and 120\% represents
+postfix (partial cache).  Obviously the postfix is faster because
+fewer bytes need to be hashed.  However, the difference is only
+0.001803 seconds for $N=2000$.  Taking into account that the faster
+option requires more memory, code, and decision making (i.e. choosing
+the subset of packets that we want to cache), the gain does not
+justify implementing a cache for every packet.
+
+However, hashing a simple string concatenation, i.e. using no cache at
+all, is slightly faster than using a cached prefix.  While the
+difference is negligible we can use this strategy with a postfix.
+This will allow us to (1) cache often-used packets for maximal
+performance or (2) implement something simple (concat) while allowing
+the postfix cache to be added later.  Hence, I prefer *postfix without
+caching*.
+
+As for what we hash, I prefer *binary packets*.  We know that it is
+the slower of the two options, yet it is the only one that guarantees
+dissemination of all data, even when mistakes are made such as one
+member creating multiple messages with the same global time.  We have
+actually seen this problem occur (it caused high amounts of additional
+traffic) in the effort community.  Granted, this was a bug, but it
+allowed us to easily observe the problem and fix it.  Hence it saved
+us a lot of development time.
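+
+A minimal sketch of the chosen strategy, hashing {binary-packet,
+postfix} without caching, is given below.  It only illustrates the
+discussion above (one hash function instead of several, and
+placeholder parameters), not the actual Dispersy implementation.
+
+#+BEGIN_SRC python
+from hashlib import sha1
+from struct import unpack_from
+
+def bloomfilter_bits(packets, postfix, num_bits):
+    """Map every binary packet to a bit position by hashing the packet
+    followed by the postfix.  A per-packet hash-state cache can be
+    added later without changing the resulting bloom filter."""
+    bits = set()
+    for packet in packets:
+        digest = sha1(packet + postfix).digest()
+        bits.add(unpack_from(">L", digest)[0] % num_bits)
+    return bits
+#+END_SRC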
+
+** Protocol buffer version control
+One option to make protocol buffers easy to upgrade to new versions
+is to make most fields optional.
+
+#+LATEX: \end{shaded}
+
+* Introduction
+This document describes the Dispersy wire protocol version 2 and its
+intended behaviors.  Version 2 is *not* backwards compatible.  The
+most notable changes are the use of [[https://developers.google.com/protocol-buffers][google protocol buffers]] for the
+wire format, protection against IP spoofing, and session usage.  A
+complete list of changes is available in the following sections.
+
+** 01/01/2013 version 2.0
+Changes compared to version 1.3 are:
+- Dispersy version, community version, and community identifier have
+  been replaced with a session identifier for temporary messages
+- new message dispersy-collection
+- new message dispersy-session-request
+- new message dispersy-session-response
+
+* Terminology
+- Temporary message: :: A control message that is not stored on disk.
+     Messages of this type are immediately discarded after they are
+     processed.
+- Persistent message: :: A message that contains information that must
+     be retained across sessions.  Effectively this includes every
+     message that must be disseminated through the network.
+
+* Mechanisms
+** Global time
+Global time is a Lamport clock used to provide message ordering within
+a community.  Using global time, every message can be uniquely
+identified using community, member, and global time.
+
+Dispersy stores global time values using, at most, 64 bits.  Therefore
+there is a finite number of global time values available.  To prevent
+malicious peers from quickly pushing the global time value to the
+point where none are left, peers will only accept messages with a
+global time that is within a locally evaluated limit.  This limit is
+set to the median of the neighbors' global time values plus a
+predefined margin.
+
+Persistent messages that are not within the acceptable global time
+range are ignored.
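+
+A minimal sketch of this acceptance check is given below.  It only
+illustrates the rule described above; the margin value is a
+placeholder, not a value prescribed by the protocol.
+
+#+BEGIN_SRC python
+def acceptable_global_time(global_time, neighbor_global_times, margin=100):
+    """Accept a global time only when it does not exceed the median of
+    the neighbors' global time values plus a predefined margin."""
+    times = sorted(neighbor_global_times)
+    count = len(times)
+    if count % 2:
+        median = times[count // 2]
+    else:
+        median = (times[count // 2 - 1] + times[count // 2]) // 2
+    return 1 <= global_time <= median + margin
+#+END_SRC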
+
+* <<dispersy-message>>
+Protocol Buffers allows messages to be defined, encoded, and finally
+decoded again.  However, the way that we intend to use protocol
+buffers caused two issues to arise:
+1. Carrying multiple different messages over the same communication
+   channel requires a method to distinguish the message type.  The
+   recommended method, as described by Google in [[https://developers.google.com/protocol-buffers/docs/techniques#self-description][self-describing messages]],
+   is to encapsulate the message in a message that contains all
+   possible messages as optional fields;
+2. Adding one or more signatures to a message requires the entire
+   message (including the message type) to be serialized and passed to
+   the cryptography layer; the resulting signatures can only be placed
+   in a wrapping message.
+
+   This wrapping message must store the message in binary.  Otherwise
+   changes to protocol buffers' internal implementation may cause one
+   client to produce a different, yet compatible, binary
+   representation.  This would make it impossible to verify the
+   signature.
+
+Therefore, the Dispersy protocol will use two wrapping messages.
+/Descriptor/ will allow message types to be assigned, while /Message/
+will contain the raw message bytes and optional signatures.
+
+#+BEGIN_SRC protocol
+message Message {
+  extensions 1024 to max;
+  required bytes descriptor = 1;
+  repeated bytes signatures = 2;
+}
+#+END_SRC
+
+Descriptor limitations:
+- Every temporary or persistent message must have an optional field in
+  the Descriptor message.  Community messages must use the field
+  values assigned to extensions.
+- A dispersy-message may only contain one message, i.e. only one
+  optional field may be set.
+
+#+BEGIN_SRC protocol
+message Descriptor {
+  enum Type {
+    // frequent temporary messages (uses <15 values)
+    INTRODUCTIONREQUEST = 1;
+    INTRODUCTIONRESPONSE = 2;
+    SESSIONREQUEST = 3;
+    SESSIONRESPONSE = 4;
+    PUNCTUREREQUEST = 5;
+    PUNCTURERESPONSE = 6;
+    COLLECTION = 7;
+    IDENTITY = 8;
+
+    // infrequent temporary messages (uses >15 values)
+    MISSINGIDENTITY = 16;
+    MISSINGSEQUENCE = 17;
+    MISSINGMESSAGE = 18;
+    MISSINGLASTMESSAGE = 19;
+    MISSINGPROOF = 20;
+    SIGNATUREREQUEST = 21;
+    SIGNATURERESPONSE = 22;
+
+    // persistent messages (uses >63 values)
+    AUTHORIZE = 64;
+    REVOKE = 65;
+    UNDOOWN = 66;
+    UNDOOTHER = 67;
+    DYNAMICSETTINGS = 68;
+    DESTROYCOMMUNITY = 69;
+  }
+  extensions 1024 to max;
+  optional IntroductionRequest introduction_request = 1;
+  optional IntroductionResponse introduction_response = 2;
+  optional SessionRequest session_request = 3;
+  optional SessionResponse session_response = 4;
+  optional PunctureRequest puncture_request = 5;
+  optional PunctureResponse puncture_response = 6;
+  optional Collection collection = 7;
+  optional Identity identity = 8;
+
+  optional MissingIdentity missing_identity = 16;
+  optional MissingSequence missing_sequence = 17;
+  optional MissingMessage missing_message = 18;
+  optional MissingLastMessage missing_last_message = 19;
+  optional MissingProof missing_proof = 20;
+  optional SignatureRequest signature_request = 21;
+  optional SignatureResponse signature_response = 22;
+
+  optional Authorize authorize = 64;
+  optional Revoke revoke = 65;
+  optional UndoOwn undo_own = 66;
+  optional UndoOther undo_other = 67;
+  optional DynamicSettings dynamic_settings = 68;
+  optional DestroyCommunity destroy_community = 69;
+}
+#+END_SRC
+
+Note that field numbers that are higher than 15 are encoded using two
+bytes, whereas lower field numbers require one byte, see [[https://developers.google.com/protocol-buffers/docs/proto#simple][defining a
+message type]].  Hence the fields that are most common should use low
+field numbers.
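+
+A minimal sketch of the resulting sign and verify flow is given below.
+It only illustrates the byte-stability argument made above: the =sign=
+and =verify= callables stand in for the actual ECC routines, and a
+plain dict stands in for the Message container.
+
+#+BEGIN_SRC python
+def wrap_and_sign(descriptor_bytes, sign):
+    # The serialized Descriptor is kept as raw bytes; re-serializing
+    # it on another machine might produce different, yet compatible,
+    # bytes, which would invalidate the signature.
+    return {"descriptor": descriptor_bytes,
+            "signatures": [sign(descriptor_bytes)]}
+
+def check(message, verify):
+    # Verify every signature against the exact bytes that were signed.
+    return all(verify(message["descriptor"], signature)
+               for signature in message["signatures"])
+#+END_SRC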
+
+* <<dispersy-collection>>
+A temporary message that contains one or more persistent Dispersy
+messages.  It is required because persistent Dispersy messages do not
+have a session identifier.
+
+Collection limitations:
+- Collection.session is associated with the source address.
+- Collection.messages contains one or more messages.
+
+#+BEGIN_SRC protocol
+message Collection {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  repeated Message messages = 2;
+}
+#+END_SRC
+
+* <<dispersy-identity>>
+A temporary message that contains the public key for a single member.
+This message is the response to a dispersy-missing-identity request.
+
+Identity limitations:
+- Identity.session is associated with the source address.
+- Identity.member must be no larger than 1024 bytes.
+- Identity.member must be a valid ECC public key.
+
+#+BEGIN_SRC protocol
+message Identity {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required bytes member = 2;
+}
+#+END_SRC
+
+* <<dispersy-authorize>>
+A persistent message that grants permissions (permit, authorize,
+revoke, or undo) for one or more messages to one or more public keys.
+This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+(TODO: reference a document describing the permission system.)
+
+Authorize limitations:
+- Authorize.version is 1.
+- Authorize.community must be 20 bytes.
+- Authorize.member must be no larger than 1024 bytes.
+- Authorize.member must be a valid ECC public key.
+- Authorize.global_time must be one or higher and up to the local
+  acceptable global time range.
+- Authorize.sequence_number must follow already processed Authorize
+  messages from Authorize.member.  Sequence numbers start at one.  No
+  sequence number may be skipped.
+- Authorize.targets must contain one or more entries.
+- Authorize.targets[].member must be no larger than 1024 bytes.
+- Authorize.targets[].member must be a valid ECC public key.
+- Authorize.targets[].permissions must contain one or more entries.
+- Authorize.targets[].permissions[].message must represent a known
+  message in the community.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the Authorize.member.
+
+#+BEGIN_SRC protocol
+message Authorize {
+  enum Type {
+    PERMIT = 1;
+    AUTHORIZE = 2;
+    REVOKE = 3;
+    UNDO = 4;
+  }
+  message Permission {
+    required Descriptor.Type message = 1;
+    required Type permission = 2;
+  }
+  message Target {
+    required uint64 global_time = 1;
+    required bytes member = 2;
+    repeated Permission permissions = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  repeated Target targets = 6;
+}
+#+END_SRC
+
+* <<dispersy-revoke>>
+A persistent message that revokes permissions (permit, authorize,
+revoke, or undo) for one or more messages from one or more public
+keys.  This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+(TODO: reference a document describing the permission system.)
+
+Revoke limitations:
+- Revoke.version is 1.
+- Revoke.community must be 20 bytes.
+- Revoke.member must be no larger than 1024 bytes.
+- Revoke.member must be a valid ECC public key.
+- Revoke.global_time must be one or higher and up to the local
+  acceptable global time range.
+- Revoke.sequence_number must follow already processed Revoke messages
+  from Revoke.member.  Sequence numbers start at one.  No sequence
+  number may be skipped.
+- Revoke.targets must contain one or more entries.
+- Revoke.targets[].member must be no larger than 1024 bytes.
+- Revoke.targets[].member must be a valid ECC public key.
+- Revoke.targets[].permissions must contain one or more entries.
+- Revoke.targets[].permissions[].message must represent a known
+  message in the community.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the Revoke.member.
+
+#+BEGIN_SRC protocol
+message Revoke {
+  enum Type {
+    PERMIT = 1;
+    AUTHORIZE = 2;
+    REVOKE = 3;
+    UNDO = 4;
+  }
+  message Permission {
+    required Descriptor.Type message = 1;
+    required Type permission = 2;
+  }
+  message Target {
+    required uint64 global_time = 1;
+    required bytes member = 2;
+    repeated Permission permissions = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  repeated Target targets = 6;
+}
+#+END_SRC
+
+* <<dispersy-undo-own>>
+A persistent message that marks an older message with an undone flag.
+This allows a member to undo her own previously created messages.
+This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+Undo messages can only be created for messages that allow being
+undone.
+(TODO: reference a document describing the permission system.)
+
+The dispersy-undo-own message contains a target global time which,
+together with the community identifier and the member identifier,
+uniquely identifies the message that is being undone.  This target
+message must allow being undone.
+
+To impose a limit on the number of dispersy-undo-own messages that can
+be created, a dispersy-undo-own message may only be accepted when the
+message that it points to is available and no dispersy-undo-own has
+yet been created for it.
+
+UndoOwn limitations:
+- UndoOwn.version is 1.
+- UndoOwn.community must be 20 bytes.
+- UndoOwn.member must be no larger than 1024 bytes.
+- UndoOwn.member must be a valid ECC public key.
+- UndoOwn.global_time must be one or higher and up to the local
+  acceptable global time range.
+- UndoOwn.sequence_number must follow already processed UndoOwn
+  messages from UndoOwn.member.  Sequence numbers start at one.  No
+  sequence number may be skipped.
+- UndoOwn.target_global_time must be one or higher and smaller than
+  UndoOwn.global_time.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the UndoOwn.member.
+
+#+BEGIN_SRC protocol
+message UndoOwn {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required uint64 target_global_time = 6;
+}
+#+END_SRC
+
+* <<dispersy-undo-other>>
+A persistent message that marks an older message with an undone flag.
+This allows a member to undo a message previously created by someone
+else.  This message must be wrapped in a dispersy-collection and is a
+response to a dispersy-introduction-request or dispersy-missing-proof.
+Undo messages can only be created for messages that allow being
+undone.  (TODO: reference a document describing the permission
+system.)
+
+The dispersy-undo-other message contains a target public key and
+target global time which, together with the community identifier,
+uniquely identifies the message that is being undone.  This target
+message must allow being undone.
+
+A dispersy-undo-other message may only be accepted when the message
+that it points to is available.  In contrast to a dispersy-undo-own
+message, it is allowed to have multiple dispersy-undo-other messages
+targeting the same message.  To impose a limit on the number of
+dispersy-undo-other messages that can be created, a member must have
+the undo permission for the target message.
+
+UndoOther limitations:
+- UndoOther.version is 1.
+- UndoOther.community must be 20 bytes.
+- UndoOther.member must be no larger than 1024 bytes.
+- UndoOther.member must be a valid ECC public key.
+- UndoOther.global_time must be one or higher and up to the local
+  acceptable global time range.
+- UndoOther.sequence_number must follow already processed UndoOther
+  messages from UndoOther.member.  Sequence numbers start at one.  No
+  sequence number may be skipped.
+- UndoOther.target_global_time must be one or higher and smaller than
+  UndoOther.global_time.
+- UndoOther.target_member must be no larger than 1024 bytes.
+- UndoOther.target_member must be a valid ECC public key.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the UndoOther.member.
+
+#+BEGIN_SRC protocol
+message UndoOther {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required uint64 target_global_time = 6;
+  required bytes target_member = 7;
+}
+#+END_SRC
+
+* <<dispersy-dynamic-settings>>
+A persistent message that changes one or more message policies.  When
+a message has two or more policies of a specific type defined,
+i.e. both PublicResolution and LinearResolution, the
+dispersy-dynamic-settings message allows switching between them.  This
+message must be wrapped in a dispersy-collection and is a response to
+a dispersy-introduction-request or dispersy-missing-proof.
+
+The policy change is applied from the next global time increment after
+the global time given by the dispersy-dynamic-settings message.
+
+DynamicSettings limitations:
+- DynamicSettings.version is 1.
+- DynamicSettings.community must be 20 bytes.
+- DynamicSettings.member must be no larger than 1024 bytes.
+- DynamicSettings.member must be a valid ECC public key.
+- DynamicSettings.global_time must be one or higher and up to the
+  local acceptable global time range.
+- DynamicSettings.sequence_number must follow already processed
+  DynamicSettings messages from DynamicSettings.member.
+  Sequence numbers start at one.  No sequence number may be skipped.
+- DynamicSettings.target_message must represent a known message in the
+  community.
+- DynamicSettings.target_policy must be a policy that has dynamic
+  settings enabled.
+- DynamicSettings.target_index must be an existing index in the
+  available dynamic settings.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the DynamicSettings.member.
+
+#+BEGIN_SRC protocol
+message DynamicSettings {
+  enum Policy {
+    AUTHENTICATION = 1;
+    RESOLUTION = 2;
+    DISTRIBUTION = 3;
+    DESTINATION = 4;
+    PAYLOAD = 5;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required uint32 sequence_number = 5;
+  required Descriptor.Type target_message = 6;
+  required Policy target_policy = 7;
+  required uint32 target_index = 8;
+}
+#+END_SRC
+
+* <<dispersy-destroy-community>>
+A persistent message that forces an overlay to go offline.  An overlay
+can be either soft killed or hard killed.  This message must be
+wrapped in a dispersy-collection and is a response to a
+dispersy-introduction-request (for soft kill) or a response to any
+temporary message (for hard kill).
+
+A soft killed overlay is frozen.  All existing persistent messages
+with global time lower or equal to DestroyCommunity.target_global_time
+will be retained but all other persistent messages are undone (where
+possible) and removed.  New persistent messages with global time lower
+or equal to DestroyCommunity.target_global_time are accepted and
+processed but all other persistent messages are ignored.  Temporary
+messages are not affected.
+
+A hard killed overlay is destroyed.  All persistent messages will be
+removed without undo, except the dispersy-destroy-community message
+and the authorize chain that is required to verify its validity.  New
+persistent messages are ignored, and temporary messages are answered
+with the dispersy-destroy-community message and the authorize chain
+that is required to verify its validity.
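+
+A minimal sketch of how a receiving peer could apply these rules to
+incoming persistent messages is given below.  It only illustrates the
+two paragraphs above; the overlay object and its attributes are
+placeholders.
+
+#+BEGIN_SRC python
+def accept_persistent_message(overlay, global_time):
+    # Hard killed: every persistent message is ignored; the peer only
+    # responds with dispersy-destroy-community plus its authorize chain.
+    if overlay.hard_killed:
+        return False
+    # Soft killed: the overlay is frozen at the target global time.
+    if overlay.soft_killed:
+        return global_time <= overlay.target_global_time
+    return True
+#+END_SRC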
+
+A dispersy-destroy-community message can not be undone.  Hence it is
+very important to ensure that only trusted peers have the permission
+to create this message.
+
+DestroyCommunity limitations:
+- DestroyCommunity.version is 1.
+- DestroyCommunity.community must be 20 bytes.
+- DestroyCommunity.member must be no larger than 1024 bytes.
+- DestroyCommunity.member must be a valid ECC public key.
+- DestroyCommunity.global_time must be one or higher and up to the
+  local acceptable global time range.
+- Can not be undone using dispersy-undo-own or dispersy-undo-other.
+- Requires a signature matching the DestroyCommunity.member.
+
+#+BEGIN_SRC protocol
+message DestroyCommunity {
+  enum Degree {
+    SOFT = 1;
+    HARD = 2;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required bytes community = 2;
+  required bytes member = 3;
+  required uint64 global_time = 4;
+  required Degree degree = 5;
+}
+#+END_SRC
+
+* <<dispersy-signature-request>>
+A temporary message to request a signature for an included message
+from another member.  The included message may be modified before
+adding the signature.  The receiver may respond with a
+dispersy-signature-response message.
+
+SignatureRequest limitations:
+- SignatureRequest.session is associated with the source address.
+- SignatureRequest.request is a random number.
+- SignatureRequest.message.signatures may not be set.
+
+#+BEGIN_SRC protocol
+message SignatureRequest {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 request = 2;
+  required Message message = 3;
+}
+#+END_SRC
+
+* <<dispersy-signature-response>>
+A temporary message to respond to a signature request from another
+member.  The included message may be different from the message given
+in the associated request.
+
+SignatureResponse limitations:
+- SignatureResponse.session is associated with the source address.
+- SignatureResponse.request is SignatureRequest.request.
+- SignatureResponse.message.signatures must contain one signature.
+
+#+BEGIN_SRC protocol
+message SignatureResponse {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 request = 2;
+  required Message message = 3;
+}
+#+END_SRC
+
+
+
+# The dispersy-introduction-request message is not disseminated through
+# bloom filter synchronization.  Instead it is periodically created to
+# maintain a semi-random overlay.
+
+# - supported versions in dispersy version, community version pairs
+# - random number
+# - possibly suggested cipher suites
+# - possibly suggested compression methods
+# - possibly session identifier
+
+# ** Dispersy 1: no:sessions, no:ip-spoofing, yes:public-key, yes:signature (steps: 5/9)
+# 1. A -> B introduction-req [Ahash, Arandom, Baddr, Alan, Awan, Atype, Abloom, Asig]
+# 2. (first-contact) B -> A missing-key [Ahash]
+# 3. (first-contact) A -> B key [Akey]
+# 4. B -> C puncture-req [Arandom, Alan, Awan]
+# 5. B -> A introduction-resp [Bhash, Arandom, Aaddr, Blan, Bwan, Btype, Clan, Cwan, Bsig]
+# 6. B -> A missing-messages
+# 7. (first-contact) A -> B missing-key [Bhash]
+# 8. (first-contact) B -> A key [Akey]
+# 9. C -> A puncture [Chash, Arandom, Clan, Cwan, Csig]
+
+# This strategy *will not* prevent M from spoofing A's address to
+# deliver an introduction-req to B.  This attack would cause B to
+# respond to A with, possibly, the maximum of allowed bandwidth,
+# resulting in a DOS attack.
+
+# *** IP spoofing attack
+# 1. M -> B introduction-req [Ahash, Arandom, Baddr, Alan, Awan, Atype, Abloom, Asig]
+# 2. All other steps follow the original
+
+# This can be used as a DOS attack, where M is the attacker who pretends
+# (spoofs) to be A and where A and B are the victims.
+
+# ** Dispersy 2 simple a: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [Arandom, Brandom, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [Arandom, Brandom, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [Brandom]
+# 4. B -> C puncture-req [Crandom, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [Arandom, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [Arandom, missing-messages]
+# 7. C -> A puncture [Prandom, Clan, Cwan, Ctype]
+
+# This strategy *will* prevent M from spoofing A's address to deliver an
+# introduction-req to B because A will only accept packets from
+# Blan/Bwan containing Arandom.  Where Arandom is a random number
+# generated by A.
+
+# This strategy *will not* prevent M, after it intercepts Brandom, from
+# spoofing A's address to deliver an introduction-req to B.  Resulting
+# in a DOS attack.
+
+# This strategy *will not* prevent man in the middle attacks.  However,
+# there is no proof that any non-centralized system can prevent such an
+# attack.
+
+# *** Discussion
+# Steps 2 and 3 can be extended with Bkey and Akey, respectively.  We
+# can also go further and add Bsig and Asig, although this can not
+# prevent any attacks.
+
+# #+LATEX: \begin{shaded}
+# ** Dispersy 2 simple b: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [ABshared, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [Brandom, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [Arandom]
+# 4. B -> C puncture-req [BCshared, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [ABshared, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [ABshared, missing-messages]
+# 7. C -> A puncture [ACshared, Prandom, Clan, Cwan, Ctype]
+
+# Having consensus on a shared session identifier reduces the complexity
+# and memory consumption as Arandom and Brandom are only required during
+# steps 2 and 3.
+
+# This strategy *will* prevent M from spoofing A's address to deliver an
+# introduction-req to B because A will only accept packets from
+# Blan/Bwan containing ABshared.  Where ABshared = (Arandom + Brandom)
+# mod 2^{32}.
+
+# This strategy *will not* prevent M, after it intercepts ABshared, from
+# spoofing A's address to deliver an introduction-req to B.  Resulting
+# in a DOS attack.
+
+# This strategy *will not* prevent man in the middle attacks.  However,
+# there is no proof that any non-centralized system can prevent such an
+# attack.
+
+# *** Discussion
+# Steps 2 and 3 can be extended with Bkey and Akey, respectively.  We
+# can also go further and add Bsig and Asig, although this can not
+# prevent any attacks.
+# #+LATEX: \end{shaded}
+
+# ** Dispersy 2 diffie-hellman: yes:sessions, yes:ip-spoofing (steps: 5/7)
+# 1. A -> B introduction-req [ABshared, Prandom, Baddr, Alan, Awan, Atype, Abloom]
+# 2. (new-session) B -> A session-req [DH{AB}p, DH{AB}q, DH{AB}b*, Aaddr, Blan, Bwan, Btype]
+# 3. (new-session) A -> B session-res [DH{AB}a*]
+# 4. B -> C puncture-req [BCshared, Prandom, Alan, Awan, Atype]
+# 5. B -> A introduction-resp [ABshared, Prandom, Clan, Cwan, Ctype]
+# 6. B -> A synchronize-res [ABshared, missing-messages]
+# 7. C -> A puncture [ACshared, Prandom, Clan, Cwan, Ctype]
+
+# Discussion: steps 2 and 3 can be extended with Bkey and Akey,
+# respectively.  We can also go further and add Bsig and Asig, although
+# this can not prevent any attacks.
+
+# ** Stuffs
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | 4     |       | unsigned long      | session identifier          |
+# |   | 1     | fb    | unsigned char      | message identifier          |
+# |   | 4     |       | unsigned long      | random number A             |
+# |   | 20    |       | char[]             | community identifier        |
+# |   | 1     |       | unsigned char      | version pair count          |
+# | + |       |       | unsigned char      | supported dispersy version  |
+# | + |       |       | unsigned char      | supported community version |
+# |   | 8     |       | unsigned long long | global time                 |
+# |   | 6     |       | char[]             | destination address         |
+# |   | 6     |       | char[]             | source LAN address          |
+# |   | 6     |       | char[]             | source WAN address          |
+# |---+-------+-------+--------------------+-----------------------------|
+
+# |---+-------+-------+--------------------+----------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION          |
+# |---+-------+-------+--------------------+----------------------|
+# |   | 4     |       | unsigned long      | session identifier   |
+# |   | 1     | fb    | unsigned char      | message identifier   |
+# |   | 4     |       | unsigned long      | random number B      |
+# |   | 1     |       | unsigned char      | chosen version       |
+# |   | 20    |       | char[]             | community identifier |
+# |   | 20    |       | char[]             | member identifier    |
+# |   | 8     |       | unsigned long long | global time          |
+# |   | 6     |       | char[]             | destination address  |
+# |   | 6     |       | char[]             | source LAN address   |
+# |   | 6     |       | char[]             | source WAN address   |
+# |---+-------+-------+--------------------+----------------------|
+
+# |---+-------+-------+--------------------+-------------------------------------------------|
+# |   | BYTES | VALUE | C-TYPE             | DESCRIPTION                                     |
+# |---+-------+-------+--------------------+-------------------------------------------------|
+# |   | 4     |       | unsigned long      | session identifier                              |
+# |   | 1     | fb    | unsigned char      | message identifier                              |
+# |   | 4     |       | unsigned long      | (random number A + random number B) modulo 2^32 |
+# |   | 20    |       | char[]             | member identifier                               |
+# |---+-------+-------+--------------------+-------------------------------------------------|
+
+
+# |---+-------+-------+--------------------+-----------------------------|
+# | + | BYTES | VALUE | C-TYPE             | DESCRIPTION                 |
+# |---+-------+-------+--------------------+-----------------------------|
+# |   | 4     |       | unsigned long      | session identifier          |
+# |   | 1     | f6    | unsigned char      | message identifier          |
+# |   | 1     | 00    | unsigned char      | message version             |
+# |   | 20    |       | char[]             | community identifier        |
+# |   | 20    |       | char[]             | member identifier           |
+# |   | 8     |       | unsigned long long | global time                 |
+# |   | 6     |       | char[]             | destination address         |
+# |   | 6     |       | char[]             | source LAN address          |
+# |   | 6     |       | char[]             | source WAN address          |
+# |   | 4     |       | unsigned long      | option bits                 |
+# |   | 2     |       | unsigned short     | request identifier          |
+# | + | 8     |       | unsigned long long | sync global time low        |
+# | + | 8     |       | unsigned long long | sync global time high       |
+# | + | 2     |       | unsigned short     | sync modulo                 |
+# | + | 2     |       | unsigned short     | sync offset                 |
+# | + | 1     |       | unsigned char      | sync bloom filter functions |
+# | + | 2     |       | unsigned short     | sync bloom filter size      |
+# | + | 1     |       | unsigned char      | sync bloom filter prefix    |
+# | + |       |       | char[]             | sync bloom filter           |
+# |   |       |       | char[]             | signature                   |
+# |---+-------+-------+--------------------+-----------------------------|
+
+# The option bits are defined as follows:
+# - 0000.0001 request an introduction
+# - 0000.0010 request contains optional sync bloom filter
+# - 0000.0100 source is behind a tunnel
+# - 0000.1000 source connection type
+# - 1000.0000 source has a public address
+# - 1100.0000 source is behind a symmetric NAT
+
+# The dispersy-introduction-request message contains optional elements.
+# When the 'request contains optional sync bloom filter' bit is set, all
+# of the sync fields must be given.  In this case the destination peer
+# should respond with messages that are within the set defined by sync
+# global time low, sync global time high, sync modulo, and sync offset
+# and which are not in the sync bloom filter.  However, the destination
+# peer is allowed to limit the number of messages it responds with.
+# Sync bloom filter size is given in bits and corresponds to the length
+# of the sync bloom filter.  Responses should take into account the
+# message priority.  Otherwise ordering is by either ascending or
+# descending global time.
+
+# ** version 1.1
+# The tunnel bit was introduced.
+
+# ** possible future changes
+# There is no feature that requires cryptography on this message.  Hence
+# it may be removed to reduce message size and processing cost.
+
+# There is not enough version information in this message.  More should
+# be added to allow the source and destination peers to determine the
+# optimal wire protocol to use.  Having a three-way handshake would
+# allow consensus between peers on what version to use.
+
+# Sometimes the source peer may want to receive fewer sync responses
+# (i.e. to ensure low CPU usage); adding a max bandwidth value would
+# allow it to limit the returned packets.
+
+# The walker should be changed into a three-way handshake to secure the
+# protocol against IP spoofing attacks.
+
+
+
+* <<dispersy-introduction-request>>
+A temporary message to contact a peer that we may or may not have
+visited already.  This message has two tasks:
+1. To maintain a semi-random overlay by obtaining one possibly locally
+   unknown peer (TODO: reference a document describing the semi-random
+   walker);
+2. To obtain eventual consistency by obtaining zero or more unknown
+   persistent messages (TODO: reference a document describing the
+   bloom filter synchronization).
+
+#+LATEX: \begin{shaded}
+The dispersy-introduction-request, dispersy-introduction-response,
+dispersy-session-request, dispersy-session-response,
+[[dispersy-puncture-request]], and dispersy-puncture messages are used
+together.  The following schema describes the interaction between
+peers A, B, and C for a typical walk, where we call A the initiator,
+B the invitor, and C the invitee.
+
+1. A -> B dispersy-introduction-request \\
+   \{shared_{AB}, identifier_{walk}, address_{B}, LAN_{A}, WAN_{A}, bloom_{A}\}
+
+2. B -> A dispersy-session-request (new session only) \\
+   \{random_{B}, identifier_{walk}, address_{A}, LAN_{B}, WAN_{B}\}
+
+3. A -> B dispersy-session-response (new session only) \\
+   \{random_{A}, identifier_{walk}\}
+
+4. B -> C [[dispersy-puncture-request]] \\
+   \{shared_{BC}, identifier_{walk}, LAN_{A}, WAN_{A}\}
+
+5. B -> A dispersy-introduction-response \\
+   \{shared_{AB}, identifier_{walk}, LAN_{C}, WAN_{C}\}
+
+6. B -> A dispersy-collection \\
+   \{shared_{AB}, missing messages\}
+
+7. C -> A dispersy-puncture \\
+   \{shared_{AC}, identifier_{walk}, LAN_{C}, WAN_{C}\}
+#+LATEX: \end{shaded}
+
+IntroductionRequest limitations:
+- IntroductionRequest.session is associated with the source address or
+  zero to initiate a new session.
+- IntroductionRequest.community must be 20 bytes.
+- IntroductionRequest.global_time must be one or higher and up to the
+  local acceptable global time range.
+- IntroductionRequest.random must be a non-zero random value used for
+  PunctureRequest.random and Puncture.random.
+- IntroductionRequest.destination is the IPv4 address to which the
+  IntroductionRequest is sent.
+- IntroductionRequest.source_lan is the sender's IPv4 LAN address.
+- IntroductionRequest.source_wan is the sender's IPv4 WAN address.
+- IntroductionRequest.connection_type is the sender's connection type.
+  The connection_type is only given when it is known.
+- IntroductionRequest.synchronization contains a bloom filter
+  representation of a subset of the sender's known persistent
+  messages.  It is only given when the sender wants to obtain new
+  persistent messages.
+
+#+BEGIN_SRC protocol
+message IntroductionRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  message Synchronization {
+    required uint64 low = 1 [default = 1];
+    required uint64 high = 2 [default = 1];
+    required uint32 modulo = 3 [default = 1];
+    required uint64 offset = 4;
+    required bytes bloomfilter = 5;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 2;
+  required bytes community = 3;
+  required uint64 global_time = 4;
+  required Address destination = 5;
+  repeated Address sources = 6;
+  optional Synchronization synchronization = 9;
+}
+#+END_SRC
+
+** TODO add optional tunnel flag
+** TODO add optional bootstrap flag
+
+* <<dispersy-session-request>>
+A temporary message to negotiate a session identifier.  This message
+is a response to a dispersy-introduction-request when the session is
+zero or unknown.  TODO: reference a document describing the
+semi-random walker.
+
+Negotiating a session identifier will prevent a malicious peer M from
+spoofing the address of peer A to deliver a
+dispersy-introduction-request to peer B because A will only accept
+packets from LAN_{B} or WAN_{B} containing random_{A}.  Where
+random_{A} is a random number generated by A.  This will prevent DOS
+attacks through IP spoofing.
+
+SessionRequest limitations:
+- TODO
+
+#+BEGIN_SRC protocol
+message SessionRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 version = 1;
+  repeated uint32 version_blacklist = 3;
+  required uint32 walk = 4;
+  required uint32 random_b = 5;
+  required Address destination = 6;
+  repeated Address source = 7;
+}
+#+END_SRC
+
+* <<dispersy-session-response>>
+A temporary message to negotiate a session identifier.  This message
+is a response to a dispersy-session-request.  TODO: reference a
+document describing the semi-random walker.
+
+Once this message has been received both sides can compute the session
+identifier $session = random_{A} + random_{B} ~(mod ~2^{32})$.  This
+session identifier is present in all temporary messages, except for
+dispersy-session-request and dispersy-session-response.
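+
+A minimal sketch of this computation (illustration only; plain Python
+integers stand in for the 32-bit wire values):
+
+#+BEGIN_SRC python
+def session_identifier(random_a, random_b):
+    """Both peers compute the same identifier from the random values
+    exchanged in dispersy-session-request and -response."""
+    return (random_a + random_b) % 2 ** 32
+#+END_SRC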
+
+SessionResponse limitations:
+- SessionResponse.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message SessionResponse {
+  extensions 1024 to max;
+  required uint32 version = 1;
+  required uint32 walk = 4;
+  required uint32 random_a = 5;
+}
+#+END_SRC
+
+* <<dispersy-introduction-response>>
+A temporary message to introduce a, possibly new, peer to the
+receiving peer.  This message is a response to a
+dispersy-introduction-request (when a session exists) or a
+dispersy-session-response (when a session was negotiated).  TODO:
+reference a document describing the semi-random walker.
+
+IntroductionResponse limitations:
+- IntroductionResponse.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message IntroductionResponse {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  required uint64 global_time = 3;
+  repeated Address invitee = 5;
+}
+#+END_SRC
+
+* <<dispersy-puncture-request>>
+A temporary message to request that the destination peer punctures a
+hole in its NAT.  This message is a consequence of introducing two
+peers to each other after receiving a dispersy-introduction-request.
+TODO: reference a document describing the semi-random walker.
+
+PunctureRequest limitations:
+- PunctureRequest.walk is IntroductionRequest.walk.
+- PunctureRequest.initiator is one or more addresses corresponding to
+  a single peer.  These addresses may be modified to the best of the
+  sender's knowledge.
+- TODO
+
+#+BEGIN_SRC protocol
+message PunctureRequest {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  required uint64 global_time = 3;
+  repeated Address initiator = 5;
+}
+#+END_SRC
+
+* <<dispersy-puncture>>
+A temporary message to puncture a hole in the sender's NAT.  This
+message is the consequence of being introduced to a peer after
+receiving a [[dispersy-puncture-request]].  TODO: reference a document
+describing the semi-random walker.
+
+Puncture limitations:
+- Puncture.walk is IntroductionRequest.walk.
+- TODO
+
+#+BEGIN_SRC protocol
+message Puncture {
+  enum ConnectionType {
+    public = 1;
+    unknown_NAT = 2;
+  }
+  message Address {
+    optional fixed32 ipv4_host = 1;
+    optional uint32 ipv4_port = 2;
+    optional ConnectionType type = 3;
+  }
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 walk = 4;
+  repeated Address source = 5;
+}
+#+END_SRC
+
+* <<dispersy-missing-identity>>
+A temporary message to request the public keys associated with a
+member identifier.  Receiving this request should result in a
+dispersy-collection message containing one or more dispersy-identity
+messages.
+
+DispersyMissingIdentity limitations:
+- DispersyMissingIdentity.session must be associated with the source
+  address.
+- DispersyMissingIdentity.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingIdentity.member must be no larger than 1024 bytes.
+- DispersyMissingIdentity.member must be a valid ECC public key.
+
+TODO: dispersy-collection should be renamed into something along the
+lines of dispersy-bulk.
+This message will contain additional information to facilitate a bulk
+transfer; for this message it will likely not be used, but it will be
+used for the bulk bloom filter sync.
+
+#+BEGIN_SRC protocol
+message DispersyMissingIdentity {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+}
+#+END_SRC
+
+* <<dispersy-missing-sequence>>
+A temporary message to request messages in a sequence number range.
+Receiving this request should result in a dispersy-collection message
+containing one or more messages matching the request.
+
+DispersyMissingSequence limitations:
+- DispersyMissingSequence.session must be associated with the source
+  address.
+- DispersyMissingSequence.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingSequence.member must be no larger than 1024 bytes.
+- DispersyMissingSequence.member must be a valid ECC public key.
+- DispersyMissingSequence.descriptor must be the persistent message
+  identifier.
+- DispersyMissingSequence.sequence_low must be the first sequence
+  number that is being requested.
+- DispersyMissingSequence.sequence_high must be the last sequence
+  number that is being requested.
+
+#+BEGIN_SRC protocol
+message DispersyMissingSequence {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  required Descriptor.Type descriptor = 4;
+  required uint32 sequence_low = 5;
+  required uint32 sequence_high = 6;
+}
+#+END_SRC
+
+* <<dispersy-missing-message>>
+A temporary message to request one or more messages identified by a
+community identifier, member identifier, and one or more global times.
+This request should result in a dispersy-collection message containing
+one or more messages matching the request.
+
+DispersyMissingMessage limitations:
+- DispersyMissingMessage.session must be associated with the source
+  address.
+- DispersyMissingMessage.random must be a non-zero random value used
+  to identify the response dispersy-collection.
+- DispersyMissingMessage.member must be no larger than 1024 bytes.
+- DispersyMissingMessage.member must be a valid ECC public key.
+- DispersyMissingMessage.global_times must be one or more global_time
+  values.
+
+#+BEGIN_SRC protocol
+message DispersyMissingMessage {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  repeated uint64 global_times = 4;
+}
+#+END_SRC
+
+* <<dispersy-missing-last-message>>
+A temporary message to request one or more of the most recent messages
+identified by a community identifier and member.  This request should
+result in a dispersy-collection message containing one or more
+messages matching the request.
+
+DispersyMissingLastMessage limitations:
+- DispersyMissingLastMessage.session must be associated with the
+  source address.
+- DispersyMissingLastMessage.random must be a non-zero random value
+  used to identify the response dispersy-collection.
+- DispersyMissingLastMessage.member must be no larger than 1024 bytes.
+- DispersyMissingLastMessage.member must be a valid ECC public key.
+- DispersyMissingLastMessage.descriptor must be the persistent message
+  identifier.
+
+#+BEGIN_SRC protocol
+message DispersyMissingLastMessage {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  required Descriptor.Type descriptor = 4;
+}
+#+END_SRC
+
+* <<dispersy-missing-proof>> (#253)
+A temporary message to request one or more persistent messages from
+the permission tree that prove that a given message is allowed.  This
+request should result in a dispersy-collection message containing one
+or more dispersy-authorize and/or dispersy-revoke messages.  (TODO:
+reference a document describing the permission system.)
+
+DispersyMissingProof limitations:
+- DispersyMissingProof.session must be associated with the source
+  address.
+- DispersyMissingProof.random must be a non-zero random value used to
+  identify the response dispersy-collection.
+- DispersyMissingProof.member must be no larger than 1024 bytes.
+- DispersyMissingProof.member must be a valid ECC public key.
+- DispersyMissingProof.global_times must be one or more global_time
+  values.
+
+#+BEGIN_SRC protocol
+message DispersyMissingProof {
+  extensions 1024 to max;
+  required uint32 session = 1;
+  required uint32 random = 2;
+  required bytes member = 3;
+  repeated uint64 global_times = 4;
+}
+#+END_SRC
diff -Nru tribler-6.2.0/Tribler/dispersy/dprint.py tribler-6.2.0/Tribler/dispersy/dprint.py
--- tribler-6.2.0/Tribler/dispersy/dprint.py	1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/dprint.py	2013-08-07 13:06:57.000000000 +0000
@@ -0,0 +1,925 @@
+"""
+Easily print a message to the console or a remote application.
+
+It is important to note that this module must be as independent from
+other user-made modules as possible, since dprint is often used to
+report bugs in other modules.
+"""
+
+# todo: maybe add a feature that does not redisplay a msg that repeats itself again and again
+
+# from pickle import dumps
+from os.path import dirname, basename, expanduser, isfile, join
+from sys import stdout, stderr, exc_info
+# from threading import current_thread, Thread, Lock
+from time import time, strftime
+from traceback import extract_stack, print_exception, print_stack, format_list
+import inspect
+import re
+# import socket
+# from os import getcwd
+from pprint import pformat
+
+# maxsize was introduced in Python 2.6
+try:
+    from sys import maxsize
+except ImportError:
+    from sys import maxint as maxsize
+
+LEVEL_DEBUG = 0
+LEVEL_NORMAL = 128
+LEVEL_LOG = 142
+LEVEL_NOTICE = 167
+LEVEL_WARNING = 192
+LEVEL_ERROR = 224
+LEVEL_FORCE = 1024
+level_map = {"debug":LEVEL_DEBUG,     # informative only to a developer
+             "normal":LEVEL_NORMAL,   # informative to a user running from console
+             "log":LEVEL_LOG,         # a message that is logged
+             "notice":LEVEL_NOTICE,   # something is wrong but we can recover (external failure, we are not the cause nor can we fix this)
+             "warning":LEVEL_WARNING, # something is wrong but we can recover (internal failure, we are the cause and should fix this)
+             "error":LEVEL_ERROR,     # something is wrong and recovering is impossible
+             "force":LEVEL_FORCE}     # explicitly force this print to pass through the filter
+level_tag_map = {LEVEL_DEBUG:"D",
+                 LEVEL_NORMAL:" ",
+                 LEVEL_LOG:"L",
+                 LEVEL_NOTICE:"N",
+                 LEVEL_WARNING:"W",
+                 LEVEL_ERROR:"E",
+                 LEVEL_FORCE:"F"}
+
+# allows us to determine that the last dprint call was an error, and hence, that the message 'see
+_last_dprint_see_stderr = False
+
+_dprint_settings = {
+    "binary":False,             # print a binary representation of the arguments
+    "box":False,                # add a single line above and below the message
+    "box_char":"-",             # when a box is added to the message use this character to generate the line
+    "callback":None,            # optional callback. the callback is only performed if the filters accept the message. the callback result is added to the displayed message
+    "exception":False,          # add the last occurred exception, including its stacktrace, to the message
+    "force":False,              # ignore all filters, equivalent to level="force"
+    "glue":"",                  # use this string to join() *args together
+    "level":LEVEL_NORMAL,       # either "debug", "normal", "warning", "error", or a number in the range [0, 255]
+    "line":False,               # add a single line above the message
+    "line_char":"-",            # when a line is added to the message use this character to generate the line
+    "lines":False,              # write each value on a separate line
+    "meta":False,               # write each value on a separate line including metadata
+    "pprint":False,             # pretty print arg[0] if there is only one argument, otherwise pretty print arg
+    "remote":False,             # write message to remote logging application
+    "remote_host":"localhost",  # when remote logging is enabled this hostname is used
+    "remote_port":12345,        # when remote logging is enabled this port is used
+    "source_file":None,         # force a source filename. otherwise the filename is retrieved from the callstack
+    "source_function":None,     # force a source function. otherwise the function is retrieved from the callstack
+    "source_line":None,         # force a source line. otherwise the line number is retrieved from the callstack
+    "stack":False,              # add a stacktrace to the message. optionally this can be a list obtained through extract_stack()
+    "stack_ident":None,         # when the stack is printed use this ident to determine the thread name
+    "stack_origin_modifier":-1, # modify the length of the callstack that is displayed and used to retrieve the source-filename, -function, and -line
+    "stderr":False,             # write message to sys.stderr
+    "stdout":True,              # write message to sys.stdout
+    "style":"column",           # output style. either "short" or "column"
+    "time":False,               # include a timestamp at the start of each line
+    "time_format":"%H:%M:%S",   # the timestamp format (see strftime)
+    "width":80}
+
+# We allow message filtering in an 'iptables like' fashion.  Each
+# message is passed to the ENTRY chain in _filters; when a filter in
+# the chain matches, its target is used (accept, drop, continue, or
+# jump).  If no filters in a chain match, the default chain policy
+# (accept/True, drop/False, or return/None) is used.  An exception to
+# this is the ENTRY chain, which may only use accept or drop as its
+# default.
+#
+# _filters contains chain-name:[policy, chain-list] pairs.  Where
+# policy can be accept/True, drop/False, or return/None.
+#
+# chain-list contains lists in the form: [function, target].  Where
+# target can be accept/True, drop/False, continue/None, or
+# jump/callable.
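+#
+# A minimal, hypothetical configuration using the helper functions defined
+# below: drop anything below notice level as well as anything originating
+# from encoding.py, and let the ENTRY policy accept the rest:
+#
+#     filter_chain_policy("ENTRY", "accept")
+#     filter_add_by_level("ENTRY", "drop", min="debug", max="log")
+#     filter_add_by_source("ENTRY", "drop", file="encoding.py")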
+_filters = {"ENTRY":[False, []]}
+_filter_entry = _filters["ENTRY"]
+_filter_policy_map = {"accept":True, "drop":False, "return":None}
+_filter_reverse_policy_map = {True:"accept", False:"drop", None:"return"}
+_filter_reverse_target_map = {True:"accept", False:"drop", None:"continue"}
+_filter_target_map = {"accept":True, "drop":False, "continue":None}
+
+def _filter_reverse_dictionary_lookup(dic, value):
+    """Returns the key associated with the first value that matches VALUE"""
+    for key, value_ in dic.items():
+        if value is value_:
+            return key
+    return None
+
+def filter_chains_get():
+    """
+    Return a list of (chain-name, default-policy) tuples.
+
+    Where chain-name is the name of the chain.  And where
+    default-policy is either accept, drop, or return.
+    """
+    return [(chain, _filter_reverse_policy_map[policy]) for chain, (policy, rules) in _filters.items()]
+
+def filter_get(chain):
+    """
+    Return a list of (check, target, jump) tuples.
+
+    Where check is the name of the function used to check the rule.
+    Where target is either accept, drop, continue, or jump.  And
+    where jump is either None or the name of the target chain.
+    """
+    assert chain in _filters, chain
+    return [(function.__name__,
+             target in (True, False, None) and _filter_reverse_target_map[target] or "jump",
+             not target in (True, False, None) and _filter_reverse_dictionary_lookup(_filters, target) or None)
+            for function, target
+            in _filters[chain][1]]
+
+def filter_chain_create(chain, policy):
+    """
+    Create a new chain.
+
+    CHAIN must indicate a non-existing chain ("ENTRY" always exists)
+    POLICY must be either accept, drop, or return
+    """
+    assert not chain in _filters, "Chain \"%s\" already exists" % chain
+    assert policy in _filter_policy_map, "Invalid policy \"%s\"" % policy
+    _filters[chain] = [_filter_policy_map[policy], []]
+
+def filter_chain_policy(chain, policy):
+    """
+    Set the policy of an existing chain.
+
+    CHAIN must indicate an existing chain ("ENTRY" always exists)
+    POLICY must be either accept, drop, or return
+    """
+    assert chain in _filters, "Unknown chain \"%s\"" % chain
+    assert policy in _filter_policy_map, "Invalid policy \"%s\"" % policy
+    _filters[chain][0] = _filter_policy_map[policy]
+
+def filter_chain_remove(chain):
+    """
+    Remove an existing chain.
+    """
+    assert chain in _filters, chain
+    # todo: also remove jumps to this chain
+    del _filters[chain]
+
+def filter_add(chain, function, target, jump=None, position=maxsize):
+    """
+    Add a filter entry to an existing chain.
+
+    CHAIN must indicate an existing chain ("ENTRY" always exists)
+    FUNCTION must be a callable function that returns True or False
+    TARGET must be either accept, drop, continue, or jump
+    JUMP must be an existing chain name when TARGET is jump
+    POSITION indicates the position in the chain to insert the rule.  The default is the end of the chain
+    """
+    assert chain in _filters, chain
+    assert hasattr(function, "__call__"), function
+    assert target == "jump" or target in _filter_target_map, "Invalid target [%s]" % target
+    assert target != "jump" or jump in _filters, jump
+    assert type(position) is int, position
+    if target in _filter_target_map:
+        target = _filter_target_map[target]
+    else:
+        target = _filters[jump]
+    _filters[chain][1].insert(position, [function, target])
+
+def filter_remove(chain, position):
+    """
+    Remove the n'th rule from an existing chain.
+
+    CHAIN must indicate an existing chain ("ENTRY" always exists)
+    POSITION indicates the n'th rule in the chain.
+    The first rule has number 0
+    """
+    assert chain in _filters, chain
+    assert -len(_filters[chain][1]) < position < len(_filters[chain][1]), position
+    del _filters[chain][1][position]
+
+def filter_add_by_source(chain, target, file=None, function=None, path=None, jump=None, position=maxsize):
+    """
+    Helper function for filter_add to add a filter on the message source.
+
+    CHAIN must indicate an existing chain ("ENTRY" always exists)
+    TARGET must be either accept, drop, continue, or jump
+    FILE indicates an optional file path.  matches if: source_file.endswith(FILE)
+    FUNCTION indicates an optional function.  matches if: source_function == FUNCTION
+    PATH indicates an optional path.  directory separators should be '.' and not the OS-dependent '/', '\\', etc.  matches if: PATH in source_file
+    JUMP must be an existing chain name when TARGET is jump
+    POSITION indicates the position in the chain to insert the rule.  The default is the end of the chain
+
+    At least one of FILE, FUNCTION, or PATH must be given.  When more
+    than one is given the source message will match only if all given
+    filters match.
+    """
+    # assert for CHAIN is done in filter_add
+    # assert for TARGET is done in filter_add
+    # assert for POSITION is done in filter_add
+    # assert for JUMP is done in filter_add
+    assert file or function or path, "At least one of FILE, FUNCTION, or PATH must be given"
+    assert file is None or type(file) is str, file
+    assert function is None or type(function) is str, function
+    assert path is None or type(path) is str, path
+    def match(args, settings):
+        result = True
+        if file: result = result and settings["source_file"].endswith(file)
+        if path: result = result and path in settings["source_file"]
+        if function: result = result and function == settings["source_function"]
+        return result
+    if not path is None:
+        path = join(*path.split("."))
+    match.__name__ = "by_source(%s, %s, %s)" % (file, function, path)
+    filter_add(chain, match, target, jump=jump, position=position)
+
+def filter_add_by_level(chain, target, exact=None, min=None, max=None, jump=None, position=maxsize):
+    """
+    Helper function for filter_add to add a filter on the message level.
+
+    CHAIN must indicate an existing chain ("ENTRY" always exists)
+    TARGET must be either accept, drop, continue, or jump
+    EXACT indicates an exact message level.  matches if: level == EXACT
+    MIN indicates a minimal message level.  matches if: MIN <= level <= MAX
+    MAX indicates a maximum message level.  matches if: MIN <= level <= MAX
+    JUMP must be an existing chain name when TARGET is jump
+    POSITION indicates the position in the chain to insert the rule.  The default is the end of the chain
+
+    It is not allowed to give MIN without MAX or vice versa.
+
+    Either EXACT or (MIN and MAX) must be given.  When both EXACT and
+    (MIN and MAX) are given only EXACT is used.
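+
+    Example (hypothetical): with the ENTRY policy set to drop, pass
+    only warnings and everything above them:
+
+      filter_chain_policy("ENTRY", "drop")
+      filter_add_by_level("ENTRY", "accept", min="warning", max="force")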
+ """ + # assert for CHAIN is done in filter_add + # assert for TARGET is done in filter_add + # assert for POSITION is done in filter_add + # assert for JUMP is done in filter_add + assert exact is None or exact in level_map or type(exact) is int, exact + assert min is None or min in level_map or type(min) is int, min + assert max is None or max in level_map or type(max) is int, max + assert (min is None and max is None) or (not min is None and not max is None), (min, max) + if exact in level_map: exact = level_map[exact] + if min in level_map: min = level_map[min] + if max in level_map: max = level_map[max] + if exact is None: + def match(args, settings): + return min <= settings["level"] <= max + else: + def match(args, settings): + return exact == settings["level"] + match.__name__ = "by_level(%s, %s, %s)" % (exact, min, max) + filter_add(chain, match, target, jump=jump, position=position) + +def filter_add_by_pattern(chain, target, pattern, jump=None, position=maxsize): + """ + Helper function for filter_add to add a regular expression filter on the message + + CHAIN must indicate an existing chain ("ENTRY" always exists) + TARGET must be either accept, drop, continue, or jump + PATTERN is a regular expression. matches if any: re.match(PATTERN, str(arg)) where arg is any argument to dprint + JUMP must be an existing chain name when TARGET is jump + POSITION indicates the position in the chain to insert the rule. The default is the end of the chain + """ + # assert for CHAIN is done in filter_add + # assert for TARGET is done in filter_add + # assert for POSITION is done in filter_add + # assert for JUMP is done in filter_add + assert type(pattern) is str, "Pattern must be a string [%s]" % pattern + pattern = re.compile(pattern) + def match(args, settings): + for arg in args: + if pattern.match(str(arg)): + return True + return False + match.__name__ = "by_pattern(%s)" % pattern.pattern + filter_add(chain, match, target, jump=jump, position=position) + +def filter_print(): + """ + Print the filter-chains and filter-rules to the stdout. + """ + for chain, policy in filter_chains_get(): + print("Chain %s (policy %s)" % (chain, policy)) + + for check, target, jump in filter_get(chain): + if not jump: jump = "" + print("%-6s %-15s %s" % (target, jump, check)) + + print() + +def filter_check(args, settings): + """ + Check if a message passes a specific chain + + ARGS is a tuple containing the message + SETTINGS is a dictionaty in in the format of _dprint_settings + + returns True when all filters pass. Otherwise returns False + """ + return _filter_check(args, settings, _filter_entry) + +def _filter_check(args, settings, chain_info): + """ + Check if a message passes a specific chain + + ARGS is a tuple containing the message + SETTINGS is a dictionaty in in the format of _dprint_settings + CHAIN_INFO is a list at _filters[chain-name] + + returns True when all filters pass. Otherwise returns False + """ + for filter_info in chain_info[1]: + if filter_info[0](args, settings): + if filter_info[1] is True: + return True + elif filter_info[1] is False: + return False + elif filter_info[1] is None: + continue + else: # should be callable jump + result = _filter_check(args, settings, filter_info[1]) + if result is None: + continue + else: + return result + return chain_info[0] + +def _config_read(): + """ + Read dprint.conf configuration files + + Note: while we use 'normal' ini file structure we do not use the + ConfigParser that python supplies. 
Unfortunately ConfigParser uses + dictionaries to store the options making it unusable to us (the + filter rules are order dependend.) + """ + def get_arguments(string, conversions, glue): + """ + get_arguments("filename, function, 42", (strip, strip, int), ",") + --> ["filename", "function", 42] + + get_arguments("filename", (strip, strip, int), ",") + --> ["filename", None, None] + """ + def helper(index, func): + if len(args) > index: + return func(args[index]) + return None + args = string.split(glue) + return [helper(index, func) for index, func in zip(xrange(len(conversions)), conversions)] + + def strip(string): + return string.strip() + + re_section = re.compile("^\s*\[\s*(.+?)\s*\]\s*$") + re_option = re.compile("^\s*([^#].+?)\s*=\s*(.+?)\s*$") + re_true = re.compile("^true|t|1$") + + options = [] + sections = {"default":options} + for file_ in ['dprint.conf', expanduser('~/dprint.conf')]: + if isfile(file_): + line_number = 0 + for line in open(file_, "r"): + line_number += 1 + match = re_option.match(line) + if match: + options.append((line_number, line[:-1], match.group(1), match.group(2))) + continue + + match = re_section.match(line) + if match: + section = match.group(1) + if section in sections: + options = sections[section] + else: + options = [] + sections[section] = options + continue + + string = ["box_char", "glue", "line_char", "remote_host", "source_file", "source_function", "style", "time_format"] + int_ = ["width", "remote_port", "source_line", "stack_origin_modifier"] + boolean = ["box", "binary", "exception", "force", "line", "lines", "meta", "pprint", "remote", "stack", "stderr", "stdout", "time"] + for line_number, line, before, after in sections["default"]: + try: + if before in string: + _dprint_settings[before] = after + elif before in int_: + if after.isdigit(): + _dprint_settings[before] = int(after) + else: + raise ValueError("Not a number") + elif before in boolean: + _dprint_settings[before] = bool(re_true.match(after)) + elif before == "level": + _dprint_settings["level"] = int(level_map.get(after, after)) + except Exception, e: + raise Exception("Error parsing line %s \"%s\"\n%s %s" % (line_number, line, type(e), str(e))) + + chains = [] + for section in sections: + if section.startswith("filter "): + chain = section.split(" ", 1)[1] + filter_chain_create(chain, "return") + chains.append((section, chain)) + if "filter" in sections: + chains.append(("filter", "ENTRY")) + + for section, chain in chains: + for line_number, line, before, after in sections[section]: + try: + if before == "policy": + filter_chain_policy(chain, after) + else: + type_, before = before.split(" ", 1) + after, jump = get_arguments(after, (strip, strip), " ") + if type_ == "source": + file_, function, path = get_arguments(before, (strip, strip, strip), ",") + filter_add_by_source(chain, after, file=file_, function=function, path=path, jump=jump) + elif type_ == "level": + def conv(x): + if x.isdigit(): return int(x) + x = x.strip() + if x: return x + return None + exact, min_, max_ = get_arguments(before, (conv, conv, conv), ",") + filter_add_by_level(chain, after, exact=exact, min=min_, max=max_, jump=jump) + elif type_ == "pattern": + filter_add_by_pattern(chain, after, before, jump=jump) + except Exception, e: + raise Exception("Error parsing line %s \"%s\"\n%s %s" % (line_number, line, type(e), str(e))) + +_config_read() + +# class RemoteProtocol: +# @staticmethod +# def encode(key, value): +# """ +# 1 byte (reserved) +# 1 byte with len(key) +# 2 bytes with len(value) +# n 
bytes with the key where n=len(key) +# m bytes with the value where m=len(value) +# """ +# assert type(key) is str +# assert len(key) < 2**8 +# assert type(value) is str +# assert len(value) < 2**16 +# m = len(value) +# return "".join((chr(0), +# chr(len(key)), +# chr((m >> 8) & 0xFF), chr(m & 0xFF), +# key, +# value)) + +# @staticmethod +# def decode(data): +# """ +# decode raw data. + +# returns (data, messages) where data contains the remaining raw +# data and messages is a list containing (key, message) tuples. +# """ +# assert type(data) is str +# size = len(data) +# messages = [] +# while size >= 4: +# n = ord(data[1]) +# m = ord(data[2]) << 8 | ord(data[3]) + +# # check if the entire message is available +# if size - 4 >= n + m: +# messages.append((data[4:4+n], (data[4+n:4+n+m], ))) +# data = data[4+n+m:] +# size -= (4+n+m) +# else: +# break + +# return data, messages + +# class RemoteConnection(RemoteProtocol): +# __singleton = None +# __lock = Lock() + +# @classmethod +# def get_instance(cls, *args, **kargs): +# if not cls.__singleton: +# cls.__lock.acquire() +# try: +# if not cls.__singleton: +# cls.__singleton = cls(*args, **kargs) +# finally: +# cls.__lock.release() +# return cls.__singleton + +# @classmethod +# def send(cls, args, settings): +# remote = cls.get_instance(settings["remote_host"], settings["remote_port"]) +# remote._queue.put(("dprint", (args, settings))) + +# def __init__(self, host, port): +# # thread protected write buffer +# self._queue = Queue(0) +# self._address = (host, port) + +# # start a thread to handle async socket communication +# thread = Thread(target=self._loop) +# thread.start() + +# def _loop(self): +# # connect +# connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM) +# connection.setblocking(1) +# try: +# connection.connect(self._address) +# except: +# print >>stderr, "Could not connect to Dremote at", self._address +# raise + +# # handshake +# connection.send(self.encode("protocol", "remote-dprint-1.0")) + +# # send data from the queue +# while True: +# key, value = self._queue.get() +# connection.send(self.encode(key, dumps(value))) + +def dprint_wrap(func): + source_file = inspect.getsourcefile(func) + source_line = inspect.getsourcelines(func)[1] + source_function = func.__name__ + def wrapper(*args, **kargs): + dprint("PRE ", args, kargs, source_file=source_file, source_line=source_line, source_function=source_function) + try: + result = func(*args, **kargs) + except Exception, e: + dprint("POST", e, source_file=source_file, source_line=source_line, source_function=source_function) + raise + else: + dprint("POST", result, source_file=source_file, source_line=source_line, source_function=source_function) + return result + return wrapper + +def dprint_pre(func): + source_file = inspect.getsourcefile(func) + source_line = inspect.getsourcelines(func)[1] + source_function = func.__name__ + def wrapper(*args, **kargs): + dprint("PRE ", args, kargs, source_file=source_file, source_line=source_line, source_function=source_function) + return func(*args, **kargs) + return wrapper + +def dprint_post(func): + source_file = inspect.getsourcefile(func) + source_line = inspect.getsourcelines(func)[1] + source_function = func.__name__ + def wrapper(*args, **kargs): + try: + result = func(*args, **kargs) + return result + finally: + dprint("POST", result, source_file=source_file, source_line=source_line, source_function=source_function) + return wrapper + +def dprint_wrap_object(object_, pattern="^(?!__)"): + """ + Experimental feature: add a 
dprint before and after each method + call to object. + """ + re_pattern = re.compile(pattern) + for name, member in inspect.getmembers(object_): + if hasattr(member, "__call__") and re_pattern.match(name): + try: + setattr(object_, member.__name__, dprint_wrap(member)) + except: + dprint("Failed wrapping", member, "in object", object_) + +def dprint(*args, **kargs): + """ + Create a message from ARGS and output it somewhere. + + The message can be send to: + - stdout + - stderr (default) + - remote (send message and context to external program) + + The message can contain a stacktrace and thread-id when kargs + contains "stack" which evaluates to True. The callstack can + be shortened (leaving of the call to extract_stack() or event + dprint()) my supplying "stack_origin_modifier" to kargs. The + default is -1 which removes the last call to extract_stack(). + + Each ARGS will be presented on a seperate line with meta data when + kargs contains "meta" which evaluates to True. + + Each ARGS will be presented in binary format when kargs contains + "binary" which evaluates to True. + + A string representation is derived from anything in ARGS using + str(). Therefore, no objects should be supplied that define a + __str__ method which causes secondary effects. Furthermore, to + reduce screen clutter, only the first 1000 characters returned by + str() are used. (an exception to this is the remote output where + everything is transfered). + + Usage: + from dispersy.dprint import dprint + + (Display a message "foo bar") + | if __debug__: dprint("foo", "bar") + filename:1 foo bar + --- + + (Display a message in a function) + | def my_function(): + | if __debug__: dprint("foo") + | pass + | my_function() + filename:2 my_function foo + --- + + (Display a value types) + | if __debug__: dprint("foo", 123, 1.5, meta=1) + filename:1 (StringType, len 3) foo + filename:1 (IntType) 123 + filename:1 (FloatType) 1.5 + --- + + (Display a message with a callstack) + | def my_function(): + | if __debug__: dprint("foo", stack=1) + | pass + | my_function() + filename:2 my_function foo + filename:2 my_function --- + filename:2 my_function Stacktrace on thread: "MainThread" + filename:2 my_function Dprint.py:470 + filename:2 my_function filename.py:4 + filename:2 my_function filename.py:2 my_function + --- + + (Display an exception) + | try: + | raise RuntimeError("Wrong") + | except: + | if __debug__: dprint("An exception occured", exception=1) + | pass + filename:4 An exception occured + filename:4 --- + filename:4 --- Exception: --- + filename:4 Wrong + filename:4 Traceback where the exception originated: + filename:4 filename.py:2 + --- + + (Display a cpu-intensive message) + | if __debug__: + | def expensive_calculation(): + | import time + | time.sleep(1) + | return "moo-milk", + | dprint("foo-bar", callback=expensive_calculation) + filename:6 | foo-bar moo-milk + --- + """ + # ensure that kargs contains only known options + for key in kargs: + if not key in _dprint_settings: + raise ValueError("Unknown option: %s" % key) + + # merge default dprint settings with kargs + # todo: it might be faster to clone _dprint_settings and call update(kargs) on it + for key, value in _dprint_settings.items(): + if not key in kargs: + kargs[key] = value + + # type check all kargs + # TODO + + # fetch the callstack + callstack = extract_stack()[:kargs["stack_origin_modifier"]] + + if callstack: + # use the callstack to determine where the call came from + if kargs["source_file"] is None: kargs["source_file"] = 
callstack[-1][0] + if kargs["source_line"] is None: kargs["source_line"] = callstack[-1][1] + if kargs["source_function"] is None: kargs["source_function"] = callstack[-1][2] + else: + if kargs["source_file"] is None: kargs["source_file"] = "unknown" + if kargs["source_line"] is None: kargs["source_line"] = 0 + if kargs["source_function"] is None: kargs["source_function"] = "unknown" + + # exlicitly force the message + if kargs["force"]: + kargs["level"] = "force" + + # when level is below ERROR, apply filters on the message + if kargs["level"] in level_map: kargs["level"] = level_map[kargs["level"]] + if kargs["level"] < LEVEL_ERROR and not _filter_check(args, kargs, _filter_entry): + return + + if kargs["source_file"].endswith(".py"): + short_source_file = join(basename(dirname(kargs["source_file"])), basename(kargs["source_file"][:-3])) + else: + short_source_file = join(basename(dirname(kargs["source_file"])), basename(kargs["source_file"])) + prefix = [level_tag_map.get(kargs["level"], "U")] + if kargs["time"]: + prefix.append(strftime(kargs["time_format"])) + if kargs["style"] == "short": + prefix.append("%s:%s %s " % (short_source_file, kargs["source_line"], kargs["source_function"])) + elif kargs["style"] == "column": + prefix.append("%25s:%-4s %-25s | " % (short_source_file[-25:], kargs["source_line"], kargs["source_function"][-25:])) + else: + raise ValueError("Invalid/unknown style: \"%s\"" % kargs["style"]) + prefix = " ".join(prefix) + messages = [] + + if kargs["callback"]: + args = args + kargs["callback"]() + + # print each variable in args + if kargs["binary"]: + string = kargs["glue"].join([str(v) for v in args]) + messages.append(" ".join(["%08d" % int(bin(ord(char))[2:]) for char in string])) + # for index, char in zip(xrange(len(string)), string): + # messages.append("{0:3d} {1:08d} \\x{2}".format(index, int(bin(ord(char))[2:]), char.encode("HEX"))) + elif kargs["meta"]: + messages.extend([dprint_format_variable(v) for v in args]) + elif kargs["lines"] and len(args) == 1 and type(args[0]) in (list, tuple): + messages.extend([str(v) for v in args[0]]) + elif kargs["lines"] and len(args) == 1 and type(args[0]) is dict: + messages.extend(["%s: %s" % (str(k), str(v)) for k, v in args[0].items()]) + elif kargs["lines"]: + messages.extend([str(v) for v in args]) + elif kargs["pprint"] and len(args) == 1: + messages.extend(pformat(args[0], width=kargs["width"]).split("\n")) + elif kargs["pprint"]: + messages.extend(pformat(args, width=kargs["width"]).split("\n")) + else: + messages.append(kargs["glue"].join([str(v) for v in args])) + + # add a line of characters at the top to seperate messages + if kargs["line"]: + messages.insert(0, "".join(kargs["line_char"] * kargs["width"])) + + # add a line of characters above and below to seperate messages + if kargs["box"]: + messages.insert(0, "".join(kargs["box_char"] * kargs["width"])) + messages.append("".join(kargs["box_char"] * kargs["width"])) + + # always add stderr to output if level is error or exception is set + global _last_dprint_see_stderr + if kargs["level"] == LEVEL_ERROR or kargs["exception"]: + kargs["stderr"] = True + kargs["stdout"] = False + + # do not outpuy 'see stderr' on consecutive dprint calls + if not _last_dprint_see_stderr: + _last_dprint_see_stderr = True + print >> stdout, prefix + "See stderr for exception" + else: + _last_dprint_see_stderr = False + + if kargs["stdout"]: + print >> stdout, prefix + ("\n"+prefix).join([msg[:10000] for msg in messages]) + if kargs["stack"]: + for line in 
format_list(callstack): + print >> stdout, line, + # if isinstance(kargs["stack"], bool): + # for line in format_list(callstack): + # print >> stdout, line, + # else: + # for line in format_list(kargs["stack"][:kargs["stack_origin_modifier"]]): + # print >> stdout, line, + if kargs["exception"]: + print_exception(*exc_info(), **{"file":stdout}) + stdout.flush() + if kargs["stderr"]: + print >> stderr, prefix + ("\n"+prefix).join([msg[:10000] for msg in messages]) + if kargs["stack"]: + print_stack(file=stderr) + if kargs["exception"]: + print_exception(*exc_info(), **{"file":stderr}) + stderr.flush() + if kargs["remote"]: + # todo: the remote_host and remote_port are values that may change + # for each message. when this happens different connections should + # be created! + kargs["timestamp"] = time() + kargs["callstack"] = callstack + kargs["prefix"] = prefix + kargs["thread_name"] = current_thread().name + RemoteConnection.send(args, kargs) + +def dprint_format_variable(v): + return "%22s %s" % (type(v), str(v)) + + # t = type(v) + # if t is BooleanType: return "(BooleanType) {!s}".format(v) + # if t is BufferType: return "(BufferType) {!s}".format(v) + # if t is BuiltinFunctionType: return "(BuiltinFunctionType) {!s}".format(v) + # if t is BuiltinMethodType: return "(BuiltinMethodType) {!s}".format(v) + # if t is ClassType: return "(ClassType) {!s}".format(v) + # if t is CodeType: return "(CodeType) {!s}".format(v) + # if t is ComplexType: return "(ComplexType) {!s}".format(v) + # if t is DictProxyType: return "(DictProxyType) {!s}".format(v) + # if t in (DictType, DictionaryType): return "(DictType, len {8} {!s}".format(len(v), str(v)) + # if t is EllipsisType: return "(EllipsisType) {!s}".format(v) + # if t is FileType: return "(FileType) {!s}".format(v) + # if t is FloatType: return "(FloatType) {!s}".format(v) + # if t is FrameType: return "(FrameType) {!s}".format(v) + # if t is FunctionType: return "(FunctionType) {!s}".format(v) + # if t is GeneratorType: return "(GeneratorType) {!s}".format(v) + # if t is GetSetDescriptorType: return "(GetSetDescriptorType) {!s}".format(v) + # if t is InstanceType: return "(InstanceType) {!s}".format(v) + # if t is int: return "(IntType) {!s}".format(v) + # if t is LambdaType: return "(LambdaType) {!s}".format(v) + # if t is ListType: return "(ListType, len {8} {!s}".format(len(v), str(v)) + # if t is LongType: return "(LongType) {!s}".format(v) + # if t is MemberDescriptorType: return "(MemberDescriptorType) {!s}".format(v) + # if t is MethodType: return "(MethodType) {!s}".format(v) + # if t is ModuleType: return "(ModuleType) {!s}".format(v) + # if t is NoneType: return "(NoneType) {!s}".format(v) + # if t is NotImplementedType: return "(NotImplementedType) {!s}".format(v) + # if t is ObjectType: return "(ObjectType) {!s}".format(v) + # if t is SliceType: return "(SliceType) {!s}".format(v) + # if t is str: return "(StringType, len {6} {!s}".format(len(v), str(v)) + # if t is TracebackType: return "(TracebackType) {!s}".format(v) + # if t is TupleType: return "(TupleType, len {7} {!s}".format(len(v), str(v)) + # if t is TypeType: return "(TypeType) {!s}".format(v) + # if t is UnboundMethodType: return "(UnboundMethodType) {!s}".format(v) + # if t is UnicodeType: return "(UnicodeType) {!s}".format(v) + # if t is XRangeType: return "(XRangeType) {!s}".format(v) + + # # default return + # return "({22!s}) {!s}".format(t, v) + +def strip_prefix(prefix, string): + if string.startswith(prefix): + return string[len(prefix):] + else: + return string 
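+
+# Example (hypothetical): tracing every call on an object with the
+# experimental wrappers above; Foo is a stand-in class, not part of
+# this code base.
+#
+#     foo = Foo()
+#     dprint_wrap_object(foo)    # each non-dunder method now dprints PRE/POST
+#     foo.bar(42)                # -> "PRE (42,) {}" followed by "POST <result>"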
+ +if __debug__: + if __name__ == "__main__": + dprint(1, level="error") + dprint(2, level="error") + dprint(3, level="error") + dprint("---", force=1) + dprint(1, level="error") + dprint(2, level="error") + dprint(3, level="error") + +# if __name__ == "__main__": + +# def examples(): +# examples = [('Display a message "foo bar"', """if __debug__: dprint("foo", "bar")"""), +# ('Display a message in a function', """def my_function(): +# if __debug__: dprint("foo") +# pass +# my_function()"""), +# ('Display a value types', """if __debug__: dprint("foo", 123, 1.5, meta=1)"""), +# ('Display a message with a callstack', """def my_function(): +# if __debug__: dprint("foo", stack=1) +# pass +# my_function()"""), +# ('Display an exception', """try: +# raise RuntimeError("Wrong") +# except: +# if __debug__: dprint("An exception occured", exception=1) +# pass"""), +# ('Display a cpu-intensive message', """if __debug__: +# def expensive_calculation(): +# import time +# time.sleep(0.1) +# return "moo-milk" +# dprint("foo-bar", callback=expensive_calculation)""") +# ] + +# for title, code in examples: +# print("({})".format(title)) +# print("| " + "\n| ".join(code.split("\n"))) +# eval(compile(code, "filename.py", "exec")) +# print("---") +# print() + +# for title, code in examples: +# print("{{{") +# print("#!python") +# print("# {}".format(title)) +# print(code) +# eval(compile(code, "filename.py", "exec")) +# print("}}}") +# print() + +# def filter_(): +# filter_chain_policy("ENTRY", "drop") +# filter_add_by_level("ENTRY", "accept", level=LEVEL_ERROR) +# filter_add_by_level("ENTRY", "accept", min=LEVEL_WARNING, max=LEVEL_ERROR) +# filter_add_by_pattern("ENTRY", "accept", "foo") +# filter_add_by_source("ENTRY", "accept", line=644) +# filter_add_by_source("ENTRY", "accept", function="filter_") +# filter_add_by_source("ENTRY", "accept", "print.py") +# dprint("foo-bar", level=LEVEL_ERROR) +# dprint("foo-bar", level=LEVEL_WARNING) +# dprint("foo-bar") +# filter_print() + diff -Nru tribler-6.2.0/Tribler/dispersy/encoding.py tribler-6.2.0/Tribler/dispersy/encoding.py --- tribler-6.2.0/Tribler/dispersy/encoding.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/encoding.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,535 @@ +def _a_encode_int(value, mapping): + """ + 42 --> ('2', 'i', '42') + """ + assert isinstance(value, int), "VALUE has invalid type: %s" % type(value) + value = str(value).encode("UTF-8") + return (str(len(value)).encode("UTF-8"), "i", value) + +def _a_encode_long(value, mapping): + """ + 42 --> ('2', 'J', '42') + """ + assert isinstance(value, long), "VALUE has invalid type: %s" % type(value) + value = str(value).encode("UTF-8") + return (str(len(value)).encode("UTF-8"), "J", value) + +def _a_encode_float(value, mapping): + """ + 4.2 --> ('3', 'f', '4.2') + """ + assert isinstance(value, float), "VALUE has invalid type: %s" % type(value) + value = str(value).encode("UTF-8") + return (str(len(value)).encode("UTF-8"), "f", value) + +def _a_encode_unicode(value, mapping): + """ + 'foo-bar' --> ('7', 's', 'foo-bar') + """ + assert isinstance(value, unicode), "VALUE has invalid type: %s" % type(value) + value = value.encode("UTF-8") + return (str(len(value)).encode("UTF-8"), "s", value) + +def _a_encode_bytes(value, mapping): + """ + 'foo-bar' --> ('7', 'b', 'foo-bar') + """ + assert isinstance(value, bytes), "VALUE has invalid type: %s" % type(value) + return (str(len(value)).encode("UTF-8"), "b", value) + +def _a_encode_list(values, mapping): + """ + [1,2,3] --> 
['3', 'l', '1', 'i', '1', '1', 'i', '2', '1', 'i', '3'] + """ + assert isinstance(values, list), "VALUE has invalid type: %s" % type(values) + encoded = [str(len(values)).encode("UTF-8"), "l"] + extend = encoded.extend + for value in values: + extend(mapping[type(value)](value, mapping)) + return encoded + +def _a_encode_set(values, mapping): + """ + [1,2,3] --> ['3', 'l', '1', 'i', '1', '1', 'i', '2', '1', 'i', '3'] + """ + assert isinstance(values, set), "VALUE has invalid type: %s" % type(values) + encoded = [str(len(values)).encode("UTF-8"), "L"] + extend = encoded.extend + for value in values: + extend(mapping[type(value)](value, mapping)) + return encoded + +def _a_encode_tuple(values, mapping): + """ + (1,2) --> ['2', 't', '1', 'i', '1', '1', 'i', '2'] + """ + assert isinstance(values, tuple), "VALUE has invalid type: %s" % type(values) + encoded = [str(len(values)).encode("UTF-8"), "t"] + extend = encoded.extend + for value in values: + extend(mapping[type(value)](value, mapping)) + return encoded + +def _a_encode_dictionary(values, mapping): + """ + {'foo':'bar', 'moo':'milk'} --> ['2', 'd', '3', 's', 'foo', '3', 's', 'bar', '3', 's', 'moo', '4', 's', 'milk'] + """ + assert isinstance(values, dict), "VALUE has invalid type: %s" % type(values) + encoded = [str(len(values)).encode("UTF-8"), "d"] + extend = encoded.extend + for key, value in sorted(values.items()): + assert type(key) in mapping, (key, values) + assert type(value) in mapping, (value, values) + extend(mapping[type(key)](key, mapping)) + extend(mapping[type(value)](value, mapping)) + return encoded + +def _a_encode_none(value, mapping): + """ + None --> ['0', 'n'] + """ + return ['0n'] + +def _a_encode_bool(value, mapping): + """ + True --> ['0', 'T'] + False --> ['0', 'F'] + """ + return ['0T' if value else '0F'] + +_a_encode_mapping = {int:_a_encode_int, + long:_a_encode_long, + float:_a_encode_float, + unicode:_a_encode_unicode, + str:_a_encode_bytes, + list:_a_encode_list, + set:_a_encode_set, + tuple:_a_encode_tuple, + dict:_a_encode_dictionary, + type(None):_a_encode_none, + bool:_a_encode_bool} + +# def _b_uint_to_bytes(i): +# assert isinstance(i, (int, long)) +# assert i >= 0 +# if i == 0: +# return "\x00" + +# else: +# bit8 = 16*8 +# mask8 = 2**8-1 +# mask7 = 2**7-1 +# l = [] +# while i: +# l.append(bit8 | mask7 & i) +# i >>= 7 +# l[0] &= mask7 +# return "".join(chr(k) for k in reversed(l)) + +# from math import log +# from struct import pack + +# def _b_encode_int(value, mapping): +# """ +# 42 --> (_b_uint_to_bytes(2), 'i', struct.pack('>h', 42)) +# """ +# assert isinstance(value, (int, long)), "VALUE has invalid type: %s" % type(value) +# length = 2 if value == 0 else int(log(value, 2) / 8) + 1 +# return (_b_uint_to_bytes(length), "i", pack({1:">h", 2:">h", 3:">i", 4:">i", 5:">l", 6:">l", 7:">l", 8:">l"}.get(length, ">q"), value)) + +# def _b_encode_float(value, mapping): +# """ +# 4.2 --> (_b_uint_to_bytes(4), 'f', struct.pack('>f', 4.2)) +# """ +# assert isinstance(value, float), "VALUE has invalid type: %s" % type(value) +# return (_b_uint_to_bytes(4), "f", pack(">f", value)) + +# def _b_encode_unicode(value, mapping): +# """ +# 'foo-bar' --> (_b_uint_to_bytes(7), 's', 'foo-bar') +# """ +# assert isinstance(value, unicode), "VALUE has invalid type: %s" % type(value) +# value = value.encode("UTF-8") +# return ("s", _b_uint_to_bytes(len(value)), value) + +# def _b_encode_bytes(value, mapping): +# """ +# 'foo-bar' --> (_b_uint_to_bytes(7), 'b', 'foo-bar') +# """ +# assert isinstance(value, bytes), "VALUE has 
invalid type: %s" % type(value) +# return (_b_uint_to_bytes(len(value)), "b", value) + +# def _b_encode_list(values, mapping): +# """ +# [1,2,3] --> [_b_uint_to_bytes(3), 'l'] + _b_encode_int(1) + _b_encode_int(2) + _b_encode_int(3) +# """ +# assert isinstance(values, list), "VALUE has invalid type: %s" % type(value) +# encoded = [_b_uint_to_bytes(len(values)), "l"] +# extend = encoded.extend +# for value in values: +# extend(mapping[type(value)](value, mapping)) +# return encoded + +# def _b_encode_tuple(values, mapping): +# """ +# (1,2) --> [_b_uint_to_bytes(3), 't'] + _b_encode_int(1) + _b_encode_int(2) +# """ +# assert isinstance(values, tuple), "VALUE has invalid type: %s" % type(value) +# encoded = [_b_uint_to_bytes(len(values)), "t"] +# extend = encoded.extend +# for value in values: +# extend(mapping[type(value)](value, mapping)) +# return encoded + +# def _b_encode_dictionary(values, mapping): +# """ +# {'foo':'bar', 'moo':'milk'} --> [_b_uint_to_bytes(2), 'd'] + _b_encode_bytes('foo') + _b_encode_bytes('bar') + _b_encode_bytes('moo') +_b_encode_bytes('milk') +# """ +# assert isinstance(values, dict), "VALUE has invalid type: %s" % type(value) +# encoded = [_b_uint_to_bytes(len(values)), "d"] +# extend = encoded.extend +# for key, value in sorted(values.items()): +# assert type(key) in mapping, (key, values) +# assert type(value) in mapping, (value, values) +# extend(mapping[type(key)](key, mapping)) +# extend(mapping[type(value)](value, mapping)) +# return encoded + +# def _b_encode_none(value, mapping): +# """ +# None --> [_b_uint_to_bytes(0), 'n'] +# """ +# return [_b_uint_to_bytes(0), "n"] + +# def _b_encode_bool(value, mapping): +# """ +# True --> [_b_uint_to_bytes(0), 'T'] +# False --> [_b_uint_to_bytes(0), 'F'] +# """ +# return [_b_uint_to_bytes(0), "T" if value else "F"] + +# _b_encode_mapping = {int:_b_encode_int, +# long:_b_encode_int, +# float:_b_encode_float, +# unicode:_b_encode_unicode, +# str:_b_encode_bytes, +# list:_b_encode_list, +# tuple:_b_encode_tuple, +# dict:_b_encode_dictionary, +# type(None):_b_encode_none, +# bool:_b_encode_bool} + +def bytes_to_uint(stream, offset=0): + assert isinstance(stream, str) + assert isinstance(offset, (int, long)) + assert offset >= 0 + bit8 = 16*8 + mask7 = 2**7-1 + i = 0 + while offset < len(stream): + c = ord(stream[offset]) + i |= mask7 & c + if not bit8 & c: + return i + offset += 1 + i <<= 7 + raise ValueError() + +def encode(data, version="a"): + """ + Encode DATA into version 'a' binary stream. + + DATA can be any: int, float, string, unicode, list, tuple, or + dictionary. + + Lists are considered to be tuples. I.e. when decoding an + encoded list it will come out as a tuple. + + The encoding process is done using version 'a' which is + indicated by the first byte of the resulting binary stream. 
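+
+    For example (each result begins with the version byte 'a', as the
+    'a' encoders above produce):
+      encode(42)      -> 'a2i42'
+      encode(u'bar')  -> 'a3sbar'
+      encode([4, 2])  -> 'a2l1i41i2'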
+ """ + assert isinstance(version, str) + if version == "a": + return "a" + "".join(_a_encode_mapping[type(data)](data, _a_encode_mapping)) + elif version == "b": + # raise ValueError("This version is not yet implemented") + return "b" + "".join(_b_encode_mapping[type(data)](data, _b_encode_mapping)) + else: + raise ValueError("Unknown encode version") + +def _a_decode_int(stream, offset, count, _): + """ + 'a2i42',3,2 --> 5,42 + """ + return offset+count, int(stream[offset:offset+count]) + +def _a_decode_long(stream, offset, count, _): + """ + 'a2J42',3,2 --> 5,42 + """ + return offset+count, long(stream[offset:offset+count]) + +def _a_decode_float(stream, offset, count, _): + """ + 'a3f4.2',3,3 --> 6,4.2 + """ + return offset+count, float(stream[offset:offset+count]) + +def _a_decode_unicode(stream, offset, count, _): + """ + 'a3sbar',3,3 --> 6,u'bar' + """ + if len(stream) >= offset+count: + return offset+count, stream[offset:offset+count].decode("UTF-8") + else: + raise ValueError("Invalid stream length", len(stream), offset + count) + +def _a_decode_bytes(stream, offset, count, _): + """ + 'a3bfoo',3,3 --> 6,'foo' + """ + if len(stream) >= offset+count: + return offset+count, stream[offset:offset+count] + else: + raise ValueError("Invalid stream length", len(stream), offset + count) + +def _a_decode_list(stream, offset, count, mapping): + """ + 'a1l3i123',3,1 --> 8,[123] + 'a2l1i41i2',3,1 --> 8,[4,2] + """ + container = [] + for _ in range(count): + + index = offset + while 48 <= ord(stream[index]) <= 57: + index += 1 + offset, value = mapping[stream[index]](stream, index+1, int(stream[offset:index]), mapping) + container.append(value) + + return offset, container + +def _a_decode_set(stream, offset, count, mapping): + """ + 'a1L3i123',3,1 --> 8,set(123) + 'a2L1i41i2',3,1 --> 8,set(4,2) + """ + container = set() + for _ in range(count): + + index = offset + while 48 <= ord(stream[index]) <= 57: + index += 1 + offset, value = mapping[stream[index]](stream, index+1, int(stream[offset:index]), mapping) + container.add(value) + + return offset, container + +def _a_decode_tuple(stream, offset, count, mapping): + """ + 'a1t3i123',3,1 --> 8,[123] + 'a2t1i41i2',3,1 --> 8,[4,2] + """ + container = [] + for _ in range(count): + + index = offset + while 48 <= ord(stream[index]) <= 57: + index += 1 + offset, value = mapping[stream[index]](stream, index+1, int(stream[offset:index]), mapping) + container.append(value) + + return offset, tuple(container) + +def _a_decode_dictionary(stream, offset, count, mapping): + """ + 'a2d3sfoo3sbar3smoo4smilk',3,2 -> 24,{'foo':'bar', 'moo':'milk'} + """ + container = {} + for _ in range(count): + + index = offset + while 48 <= ord(stream[index]) <= 57: + index += 1 + offset, key = mapping[stream[index]](stream, index+1, int(stream[offset:index]), mapping) + + index = offset + while 48 <= ord(stream[index]) <= 57: + index += 1 + offset, value = mapping[stream[index]](stream, index+1, int(stream[offset:index]), mapping) + + container[key] = value + + if len(container) < count: + raise ValueError("Duplicate key in dictionary") + return offset, container + +def _a_decode_none(stream, offset, count, mapping): + """ + 'a0n',3,0 -> 3,None + """ + assert count == 0 + return offset, None + +def _a_decode_true(stream, offset, count, mapping): + """ + 'a0T',3,1 -> 3,True + """ + assert count == 0 + return offset, True + +def _a_decode_false(stream, offset, count, mapping): + """ + 'a0F',3,1 -> 3,False + """ + assert count == 0 + return offset, False + +_a_decode_mapping = 
{"i":_a_decode_int, + "J":_a_decode_long, + "f":_a_decode_float, + "s":_a_decode_unicode, + "b":_a_decode_bytes, + "l":_a_decode_list, + "L":_a_decode_set, + "t":_a_decode_tuple, + "d":_a_decode_dictionary, + "n":_a_decode_none, + "T":_a_decode_true, + "F":_a_decode_false} + +def decode(stream, offset=0): + """ + Decode STREAM from index OFFSET and further into a python data + structure. + + Returns the new OFFSET of the stream and the decoded data. + + Only version 'a' decoding is supported. This version is + indicated by the first byte in the binary STREAM. + """ + assert isinstance(stream, bytes), "STREAM has invalid type: %s" % type(stream) + assert isinstance(offset, int), "OFFSET has invalid type: %s" % type(offset) + if stream[offset] == "a": + index = offset + 1 + while 48 <= ord(stream[index]) <= 57: + index += 1 + return _a_decode_mapping[stream[index]](stream, index+1, int(stream[offset+1:index]), _a_decode_mapping) + + raise ValueError("Unknown version found") + +if __debug__: + if __name__ == "__main__": + # def uint_to_bytes(i): + # assert isinstance(i, (int, long)) + # assert i >= 0 + # if i == 0: + # return "\x00" + + # else: + # bit8 = 16*8 + # mask8 = 2**8-1 + # mask7 = 2**7-1 + # l = [] + # while i: + # l.append(bit8 | mask7 & i) + # i >>= 7 + # l[0] &= mask7 + # return "".join(chr(k) for k in reversed(l)) + + # def bytes_to_uint(stream, offset=0): + # assert isinstance(stream, str) + # assert isinstance(offset, (int, long)) + # assert offset >= 0 + # bit8 = 16*8 + # mask7 = 2**7-1 + # i = 0 + # while offset < len(stream): + # c = ord(stream[offset]) + # i |= mask7 & c + # if not bit8 & c: + # return i + # offset += 1 + # i <<= 7 + # raise ValueError() + + # def test(i): + # s = uint_to_bytes(i) + # print "%5d %15s %8s" % (i, bin(i), s.encode("HEX")), [bin(ord(x)) for x in s] + # j = bytes_to_uint(s + "kjdhsakdjhkjhsdasa") + # assert i == j, (i, j) + # return s + + # # test(int("10110101010", 2)) + # for i in xrange(-10, 1024*150): + # if len(test(i)) > 2: + # break + # exit(0) + + from Tribler.Core.BitTornado.bencode import bencode, bdecode + + def test(in_, verbose=True): + value = in_ + s = encode(value) + length, v = decode(s) + if verbose: + print "dispersy A", length, ":", value, "->", s + else: + print "dispersy A", length + assert len(s) == length, (len(s), length) + assert value == v, (value, v) + + # value = in_ + # s = encode(value, "b") + # length = len(s) + # # length, v = decode(s) + # if verbose: + # print "dispersy B", length, ":", value, "->", s + # else: + # print "dispersy B", length + # # assert len(s) == length, (len(s), length) + # # assert value == v, (value, v) + + value = in_ + if isinstance(value, (float, type(None), set)): + print "bittorrent", "not supported" + else: + # exception: tuple types are encoded as list + if isinstance(value, tuple): + value = list(value) + + # exception: dictionary types may only have string for keys + if isinstance(value, dict): + convert = lambda a: str(a) if not isinstance(a, (str, unicode)) else a + value = dict((convert(a), b) for a, b in value.iteritems()) + + s = bencode(value) + v = bdecode(s) + + if verbose: + print "bittorrent", len(s), ":", value, "->", s + else: + print "bittorrent", len(s) + assert value == v, (value, v) + print + + test(4242) + test(42) + test(42l) + test(4.2) + test(0.0000000000000000042) + test("foo") + test(u"bar") + test([123]) + test([4, 2]) + test((4, 2)) + test({'foo':'bar', 'moo':'milk'}) + test({u'foo':'bar'}) + test({4:2}) + test(None) + test(range(1000), False) + test(["F" * 
20 for _ in range(1000)], False) + test(set(['a','b'])) + test(True) + test(False) + test([True, True, False, True, False, False]) diff -Nru tribler-6.2.0/Tribler/dispersy/endpoint.py tribler-6.2.0/Tribler/dispersy/endpoint.py --- tribler-6.2.0/Tribler/dispersy/endpoint.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/endpoint.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,399 @@ +import logging +logger = logging.getLogger(__name__) + +from itertools import product +from select import select +from time import time +import errno +import socket +import sys +import threading + +from .candidate import Candidate + +if sys.platform == 'win32': + SOCKET_BLOCK_ERRORCODE = 10035 # WSAEWOULDBLOCK +else: + SOCKET_BLOCK_ERRORCODE = errno.EWOULDBLOCK + +TUNNEL_PREFIX = "ffffffff".decode("HEX") + + +class Endpoint(object): + + def __init__(self): + self._dispersy = None + self._total_up = 0 + self._total_down = 0 + self._total_send = 0 + self._cur_sendqueue = 0 + + @property + def total_up(self): + return self._total_up + + @property + def total_down(self): + return self._total_down + + @property + def total_send(self): + return self._total_send + + @property + def cur_sendqueue(self): + return self._cur_sendqueue + + def reset_statistics(self): + self._total_up = 0 + self._total_down = 0 + self._total_send = 0 + self._cur_sendqueue = 0 + + def get_address(self): + raise NotImplementedError() + + def send(self, candidates, packets): + raise NotImplementedError() + + def open(self, dispersy): + self._dispersy = dispersy + + def close(self, timeout=0.0): + assert self._dispersy, "Should not be called before open(...)" + assert isinstance(timeout, float), type(timeout) + + +class NullEndpoint(Endpoint): + + """ + NullEndpoint will ignore not send or receive anything. + + This Endpoint can be used during unit tests that should not communicate with other peers. + """ + + def __init__(self, address=("0.0.0.0", -1)): + super(NullEndpoint, self).__init__() + self._address = address + + def get_address(self): + return self._address + + def send(self, candidates, packets): + if any(len(packet) > 2**16 - 60 for packet in packets): + raise RuntimeError("UDP does not support %d byte packets" % len(max(len(packet) for packet in packets))) + self._total_up += sum(len(packet) for packet in packets) * len(candidates) + + +class RawserverEndpoint(Endpoint): + + def __init__(self, rawserver, port, ip="0.0.0.0"): + super(RawserverEndpoint, self).__init__() + + self._rawserver = rawserver + self._port = port + self._ip = ip + self._add_task = self._rawserver.add_task + self._sendqueue_lock = threading.RLock() + self._sendqueue = [] + + # _DISPERSY and _SOCKET are set during open(...) + self._socket = None + + def open(self, dispersy): + super(RawserverEndpoint, self).open(dispersy) + + while True: + try: + self._socket = self._rawserver.create_udpsocket(self._port, self._ip) + logger.debug("Listening at %d", self._port) + except socket.error: + self._port += 1 + continue + break + self._rawserver.start_listening_udp(self._socket, self) + + def close(self, timeout=0.0): + self._rawserver.stop_listening_udp(self._socket) + super(RawserverEndpoint, self).close(timeout) + + def get_address(self): + assert self._dispersy, "Should not be called before open(...)" + return self._socket.getsockname() + + def data_came_in(self, packets): + assert self._dispersy, "Should not be called before open(...)" + # called on the Tribler rawserver + + # the rawserver SUCKS. 
every now and then exceptions are not shown and apparently we are + # sometimes called without any packets... + if packets: + self._total_down += sum(len(data) for _, data in packets) + + if logger.isEnabledFor(logging.DEBUG): + for sock_addr, data in packets: + try: + name = self._dispersy.convert_packet_to_meta_message(data, load=False, auto_load=False).name + except: + name = "???" + logger.debug("%30s <- %15s:%-5d %4d bytes", name, sock_addr[0], sock_addr[1], len(data)) + self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_recv, name) + + self._dispersy.callback.register(self.dispersythread_data_came_in, (packets, time())) + + def dispersythread_data_came_in(self, packets, timestamp): + assert self._dispersy, "Should not be called before open(...)" + # iterator = ((self._dispersy.get_candidate(sock_addr), data.startswith(TUNNEL_PREFIX), sock_addr, data) for sock_addr, data in packets) + # self._dispersy.on_incoming_packets([(candidate if candidate else self._dispersy.create_candidate(WalkCandidate, sock_addr, tunnel), data[4:] if tunnel else data) + # for candidate, tunnel, sock_addr, data + # in iterator], + # True, + # timestamp) + iterator = ((data.startswith(TUNNEL_PREFIX), sock_addr, data) for sock_addr, data in packets) + self._dispersy.on_incoming_packets([(Candidate(sock_addr, tunnel), data[4:] if tunnel else data) + for tunnel, sock_addr, data + in iterator], + True, + timestamp) + + def send(self, candidates, packets): + assert self._dispersy, "Should not be called before open(...)" + assert isinstance(candidates, (tuple, list, set)), type(candidates) + assert all(isinstance(candidate, Candidate) for candidate in candidates) + assert isinstance(packets, (tuple, list, set)), type(packets) + assert all(isinstance(packet, str) for packet in packets) + assert all(len(packet) > 0 for packet in packets) + if any(len(packet) > 2**16 - 60 for packet in packets): + raise RuntimeError("UDP does not support %d byte packets" % len(max(len(packet) for packet in packets))) + + self._total_up += sum(len(data) for data in packets) * len(candidates) + self._total_send += (len(packets) * len(candidates)) + + wan_address = self._dispersy.wan_address + + with self._sendqueue_lock: + batch = [(candidate.get_destination_address(wan_address), TUNNEL_PREFIX + data if candidate.tunnel else data) + for candidate, data + in product(candidates, packets)] + + if len(batch) > 0: + did_have_senqueue = bool(self._sendqueue) + self._sendqueue.extend(batch) + + # If we did not already a sendqueue, then we need to call process_sendqueue in order send these messages + if not did_have_senqueue: + self._process_sendqueue() + + # return True when something has been send + return True + + return False + + def _process_sendqueue(self): + assert self._dispersy, "Should not be called before start(...)" + with self._sendqueue_lock: + if self._sendqueue: + index = 0 + NUM_PACKETS = min(max(50, len(self._sendqueue) / 10), len(self._sendqueue)) + logger.debug("%d left in sendqueue, trying to send %d packets", len(self._sendqueue), NUM_PACKETS) + + for i in xrange(NUM_PACKETS): + sock_addr, data = self._sendqueue[i] + try: + self._socket.sendto(data, sock_addr) + if logger.isEnabledFor(logging.DEBUG): + try: + name = self._dispersy.convert_packet_to_meta_message(data, load=False, auto_load=False).name + except: + name = "???" 
+ logger.debug("%30s -> %15s:%-5d %4d bytes", name, sock_addr[0], sock_addr[1], len(data)) + self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_send, name) + + index += 1 + + except socket.error as e: + if e[0] != SOCKET_BLOCK_ERRORCODE: + logger.warning("could not send %d to %s (%d in sendqueue)", len(data), sock_addr, len(self._sendqueue)) + + self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_send, u"socket-error") + break + + self._sendqueue = self._sendqueue[index:] + if self._sendqueue: + # And schedule a new attempt + self._add_task(self._process_sendqueue, 0.1, "process_sendqueue") + logger.debug("%d left in sendqueue", len(self._sendqueue)) + + self._cur_sendqueue = len(self._sendqueue) + + +class StandaloneEndpoint(RawserverEndpoint): + + def __init__(self, port, ip="0.0.0.0"): + # do NOT call RawserverEndpoint.__init__! + Endpoint.__init__(self) + + self._port = port + self._ip = ip + self._running = False + self._add_task = lambda task, delay = 0.0, id = "": None + self._sendqueue_lock = threading.RLock() + self._sendqueue = [] + + # _DISPERSY and _THREAD are set during open(...) + self._thread = None + + def open(self, dispersy): + # do NOT call RawserverEndpoint.open! + Endpoint.open(self, dispersy) + + while True: + try: + self._socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 870400) + self._socket.bind((self._ip, self._port)) + self._socket.setblocking(0) + logger.debug("Listening at %d", self._port) + except socket.error: + self._port += 1 + continue + break + + self._running = True + self._thread = threading.Thread(name="StandaloneEndpoint", target=self._loop) + self._thread.daemon = True + self._thread.start() + + def close(self, timeout=10.0): + self._running = False + if timeout > 0.0: + self._thread.join(timeout) + + try: + self._socket.close() + except socket.error as exception: + logger.exception("%s", exception) + + # do NOT call RawserverEndpoint.open! + Endpoint.close(self, timeout) + + def _loop(self): + assert self._dispersy, "Should not be called before open(...)" + recvfrom = self._socket.recvfrom + socket_list = [self._socket.fileno()] + + prev_sendqueue = 0 + while self._running: + # This is a tricky, if we are running on the DAS4 whenever a socket is ready for writing all processes of + # this node will try to write. Therefore, we have to limit the frequency of trying to write a bit. 
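+            # Concretely: the socket is only added to the select() write set
+            # when something is queued and at least 0.1s has passed since the
+            # previous send attempt.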
+
+
+class TunnelEndpoint(Endpoint):
+
+    def __init__(self, swift_process):
+        super(TunnelEndpoint, self).__init__()
+        self._swift = swift_process
+        self._session = "ffffffff".decode("HEX")
+
+    def open(self, dispersy):
+        super(TunnelEndpoint, self).open(dispersy)
+        self._swift.add_download(self)
+
+    def close(self, timeout=0.0):
+        self._swift.remove_download(self, True, True)
+        super(TunnelEndpoint, self).close(timeout)
+
+    def get_def(self):
+        class DummyDef(object):
+
+            def get_roothash(self):
+                return "dispersy-endpoint"
+
+            def get_roothash_as_hex(self):
+                return "dispersy-endpoint".encode("HEX")
+        return DummyDef()
+
+    def get_address(self):
+        return ("0.0.0.0", self._swift.listenport)
+
+    def send(self, candidates, packets):
+        assert self._dispersy, "Should not be called before open(...)"
+        assert isinstance(candidates, (tuple, list, set)), type(candidates)
+        assert all(isinstance(candidate, Candidate) for candidate in candidates)
+        assert isinstance(packets, (tuple, list, set)), type(packets)
+        assert all(isinstance(packet, str) for packet in packets)
+        assert all(len(packet) > 0 for packet in packets)
+        if any(len(packet) > 2**16 - 60 for packet in packets):
+            raise RuntimeError("UDP does not support %d byte packets" % max(len(packet) for packet in packets))
+
+        self._total_up += sum(len(data) for data in packets) * len(candidates)
+        self._total_send += (len(packets) * len(candidates))
+        wan_address = self._dispersy.wan_address
+
+        self._swift.splock.acquire()
+        try:
+            for candidate in candidates:
+                sock_addr = candidate.get_destination_address(wan_address)
+                assert self._dispersy.is_valid_address(sock_addr), sock_addr
+
+                for data in packets:
+                    if logger.isEnabledFor(logging.DEBUG):
+                        try:
+                            name = self._dispersy.convert_packet_to_meta_message(data, load=False, auto_load=False).name
+                        except:
+                            name = "???"
+                        logger.debug("%30s -> %15s:%-5d %4d bytes", name, sock_addr[0], sock_addr[1], len(data))
+                        self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_send, name)
+
+                    self._swift.send_tunnel(self._session, sock_addr, data)
+
+            # return True when something has been sent
+            return candidates and packets
+
+        finally:
+            self._swift.splock.release()
+
+    def i2ithread_data_came_in(self, session, sock_addr, data):
+        assert self._dispersy, "Should not be called before open(...)"
+        # assert session == self._session, [session, self._session]
+        if logger.isEnabledFor(logging.DEBUG):
+            try:
+                name = self._dispersy.convert_packet_to_meta_message(data, load=False, auto_load=False).name
+            except:
+                name = "???"
+            logger.debug("%30s <- %15s:%-5d %4d bytes", name, sock_addr[0], sock_addr[1], len(data))
+            self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_recv, name)
+
+        self._total_down += len(data)
+        self._dispersy.callback.register(self.dispersythread_data_came_in, (sock_addr, data, time()))
+
+    def dispersythread_data_came_in(self, sock_addr, data, timestamp):
+        assert self._dispersy, "Should not be called before open(...)"
+        # candidate = self._dispersy.get_candidate(sock_addr) or self._dispersy.create_candidate(WalkCandidate, sock_addr, True)
+        self._dispersy.on_incoming_packets([(Candidate(sock_addr, True), data)], True, timestamp)
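TunnelEndpoint relays Dispersy traffic over a running swift process instead of its own UDP socket: outgoing packets are handed to swift under the fixed session id ffffffff, while on the plain-UDP side a tunneled datagram is recognized by the same 4-byte prefix and stripped before processing (the data[4:] above). A toy version of that framing, assuming TUNNEL_PREFIX is the 4-byte marker defined earlier in endpoint.py:

    TUNNEL_PREFIX = "ffffffff".decode("HEX")  # 4 bytes, matching the session id above

    def frame(data, tunnel):
        return TUNNEL_PREFIX + data if tunnel else data

    def unframe(datagram):
        if datagram.startswith(TUNNEL_PREFIX):
            return True, datagram[len(TUNNEL_PREFIX):]
        return False, datagram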
+ logger.debug("%30s <- %15s:%-5d %4d bytes", name, sock_addr[0], sock_addr[1], len(data)) + self._dispersy.statistics.dict_inc(self._dispersy.statistics.endpoint_recv, name) + + self._total_down += len(data) + self._dispersy.callback.register(self.dispersythread_data_came_in, (sock_addr, data, time())) + + def dispersythread_data_came_in(self, sock_addr, data, timestamp): + assert self._dispersy, "Should not be called before open(...)" + # candidate = self._dispersy.get_candidate(sock_addr) or self._dispersy.create_candidate(WalkCandidate, sock_addr, True) + self._dispersy.on_incoming_packets([(Candidate(sock_addr, True), data)], True, timestamp) diff -Nru tribler-6.2.0/Tribler/dispersy/member.py tribler-6.2.0/Tribler/dispersy/member.py --- tribler-6.2.0/Tribler/dispersy/member.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/member.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,322 @@ +import logging +logger = logging.getLogger(__name__) + +from hashlib import sha1 + +from .crypto import ec_from_private_bin, ec_from_public_bin, ec_signature_length, ec_verify, ec_sign + +if __debug__: + from .crypto import ec_check_public_bin, ec_check_private_bin + + +class DummyMember(object): + + def __init__(self, dispersy, mid): + if __debug__: + from .dispersy import Dispersy + assert isinstance(dispersy, Dispersy), type(dispersy) + assert isinstance(mid, str), type(mid) + assert len(mid) == 20, len(mid) + database = dispersy.database + + try: + database_id, = database.execute(u"SELECT id FROM member WHERE mid = ? LIMIT 1", (buffer(mid),)).next() + except StopIteration: + database.execute(u"INSERT INTO member (mid) VALUES (?)", (buffer(mid),)) + database_id = database.last_insert_rowid + + self._database_id = database_id + self._mid = mid + + @property + def mid(self): + """ + The member id. This is the 20 byte sha1 hash over the public key. + """ + return self._mid + + @property + def database_id(self): + """ + The database id. This is the unsigned integer used to store + this member in the Dispersy database. + """ + return self._database_id + + @property + def public_key(self): + return "" + + @property + def private_key(self): + return "" + + @property + def signature_length(self): + return 0 + + def has_identity(self, community): + return False + + @property + def must_store(self): + return False + + @must_store.setter + def must_store(self, value): + pass + + @property + def must_ignore(self): + return False + + @must_ignore.setter + def must_ignore(self, value): + pass + + @property + def must_blacklist(self): + return False + + @must_blacklist.setter + def must_blacklist(self, value): + pass + + def verify(self, data, signature, offset=0, length=0): + return False + + def sign(self, data, offset=0, length=0): + return "" + + def __eq__(self, member): + return False + + def __ne__(self, member): + return True + + def __cmp__(self, member): + return -1 + + def __hash__(self): + return self._mid.__hash__() + + def __str__(self): + return "<%s 0 %s>" % (self.__class__.__name__, self._mid.encode("HEX")) + + +class Member(DummyMember): + + def __init__(self, dispersy, public_key, private_key=""): + """ + Create a new Member instance. 
+ """ + if __debug__: + from .dispersy import Dispersy + assert isinstance(dispersy, Dispersy), type(dispersy) + assert isinstance(public_key, str) + assert isinstance(private_key, str) + assert ec_check_public_bin(public_key), public_key.encode("HEX") + assert private_key == "" or ec_check_private_bin(private_key), private_key.encode("HEX") + + database = dispersy.database + + try: + database_id, mid, tags, private_key_from_db = database.execute(u"SELECT m.id, m.mid, m.tags, p.private_key FROM member AS m LEFT OUTER JOIN private_key AS p ON p.member = m.id WHERE m.public_key = ? LIMIT 1", (buffer(public_key),)).next() + + except StopIteration: + mid = sha1(public_key).digest() + private_key_from_db = None + try: + database_id, tags = database.execute(u"SELECT id, tags FROM member WHERE mid = ? LIMIT 1", (buffer(mid),)).next() + + except StopIteration: + database.execute(u"INSERT INTO member (mid, public_key) VALUES (?, ?)", (buffer(mid), buffer(public_key))) + database_id = database.last_insert_rowid + tags = u"" + + else: + database.execute(u"UPDATE member SET public_key = ? WHERE id = ?", (buffer(public_key), database_id)) + + else: + mid = str(mid) + private_key_from_db = str(private_key_from_db) if private_key_from_db else "" + assert private_key_from_db == "" or ec_check_private_bin(private_key_from_db), private_key_from_db.encode("HEX") + + if private_key_from_db: + private_key = private_key_from_db + elif private_key: + database.execute(u"INSERT INTO private_key (member, private_key) VALUES (?, ?)", (database_id, buffer(private_key))) + + self._database = database + self._database_id = database_id + self._mid = mid + self._public_key = public_key + self._private_key = private_key + self._ec = ec_from_private_bin(private_key) if private_key else ec_from_public_bin(public_key) + self._signature_length = ec_signature_length(self._ec) + self._tags = [tag for tag in tags.split(",") if tag] + self._has_identity = set() + + if __debug__: + assert len(set(self._tags)) == len(self._tags), ("there are duplicate tags", self._tags) + for tag in self._tags: + assert tag in (u"store", u"ignore", u"blacklist"), tag + + logger.debug("mid:%s db:%d public:%s private:%s", self._mid.encode("HEX"), self._database_id, bool(self._public_key), bool(self._private_key)) + + @property + def public_key(self): + """ + The public key. + + This is binary representation of the public key. + """ + return self._public_key + + @property + def private_key(self): + """ + The private key. + + This is binary representation of the private key. + + It may be an empty string when the private key is not yet available. In this case the sign + method will raise a RuntimeError. + """ + return self._private_key + + @property + def signature_length(self): + """ + The length, in bytes, of a signature. + """ + return self._signature_length + + def set_private_key(self, private_key): + assert isinstance(private_key, str) + assert self._private_key == "" + self._private_key = private_key + self._ec = ec_from_private_bin(private_key) + self._database.execute(u"INSERT INTO private_key (member, private_key) VALUES (?, ?)", (self._database_id, buffer(private_key))) + + def has_identity(self, community): + """ + Returns True when we have a dispersy-identity message for this member in COMMUNITY. + """ + if __debug__: + from .community import Community + assert isinstance(community, Community) + + if community.cid in self._has_identity: + return True + + else: + try: + self._database.execute(u"SELECT 1 FROM sync WHERE member = ? 
+
+    @property
+    def public_key(self):
+        """
+        The public key.
+
+        This is the binary representation of the public key.
+        """
+        return self._public_key
+
+    @property
+    def private_key(self):
+        """
+        The private key.
+
+        This is the binary representation of the private key.
+
+        It may be an empty string when the private key is not yet available.  In this case the sign
+        method will raise a RuntimeError.
+        """
+        return self._private_key
+
+    @property
+    def signature_length(self):
+        """
+        The length, in bytes, of a signature.
+        """
+        return self._signature_length
+
+    def set_private_key(self, private_key):
+        assert isinstance(private_key, str)
+        assert self._private_key == ""
+        self._private_key = private_key
+        self._ec = ec_from_private_bin(private_key)
+        self._database.execute(u"INSERT INTO private_key (member, private_key) VALUES (?, ?)", (self._database_id, buffer(private_key)))
+
+    def has_identity(self, community):
+        """
+        Returns True when we have a dispersy-identity message for this member in COMMUNITY.
+        """
+        if __debug__:
+            from .community import Community
+            assert isinstance(community, Community)
+
+        if community.cid in self._has_identity:
+            return True
+
+        else:
+            try:
+                self._database.execute(u"SELECT 1 FROM sync WHERE member = ? AND meta_message = ? LIMIT 1",
+                                       (self._database_id, community.get_meta_message(u"dispersy-identity").database_id)).next()
+            except StopIteration:
+                return False
+            else:
+                self._has_identity.add(community.cid)
+                return True
+
+    def _set_tag(self, tag, value):
+        assert isinstance(tag, unicode)
+        assert tag in [u"store", u"ignore", u"blacklist"]
+        assert isinstance(value, bool)
+        logger.debug("mid:%s set tag %s -> %s", self._mid.encode("HEX"), tag, value)
+        if value:
+            if tag in self._tags:
+                # the tag is already set
+                return False
+            self._tags.append(tag)
+
+        else:
+            if tag not in self._tags:
+                # the tag isn't there to begin with
+                return False
+            self._tags.remove(tag)
+
+        self._database.execute(u"UPDATE member SET tags = ? WHERE id = ?", (u",".join(sorted(self._tags)), self._database_id))
+        return True
+
+    @property
+    def must_store(self):
+        return u"store" in self._tags
+
+    @must_store.setter
+    def must_store(self, value):
+        return self._set_tag(u"store", value)
+
+    @property
+    def must_ignore(self):
+        return u"ignore" in self._tags
+
+    @must_ignore.setter
+    def must_ignore(self, value):
+        return self._set_tag(u"ignore", value)
+
+    @property
+    def must_blacklist(self):
+        return u"blacklist" in self._tags
+
+    @must_blacklist.setter
+    def must_blacklist(self, value):
+        return self._set_tag(u"blacklist", value)
+
+    def verify(self, data, signature, offset=0, length=0):
+        """
+        Verify that DATA, starting at OFFSET up to LENGTH bytes, was signed by this member and
+        matches SIGNATURE.
+
+        DATA is the signed data and the signature concatenated.
+        OFFSET is the offset for the signed data.
+        LENGTH is the length of the signature and the data, in bytes.
+
+        Returns True or False.
+        """
+        assert isinstance(data, str)
+        assert isinstance(signature, str)
+        assert isinstance(offset, (int, long))
+        assert isinstance(length, (int, long))
+        return self._public_key and \
+            self._signature_length == len(signature) \
+            and ec_verify(self._ec, sha1(data[offset:offset + (length or len(data))]).digest(), signature)
+
+    def sign(self, data, offset=0, length=0):
+        """
+        Returns the signature of DATA, starting at OFFSET up to LENGTH bytes.
+
+        Will raise a RuntimeError when we do not have the private key.
+        """
+        if self._private_key:
+            # use the same OFFSET/LENGTH slicing as verify(...)
+            return ec_sign(self._ec, sha1(data[offset:offset + (length or len(data))]).digest())
+        else:
+            raise RuntimeError("unable to sign data without the private key")
+
+    def __eq__(self, member):
+        assert isinstance(member, DummyMember)
+        assert (self._database_id == member.database_id) == (self._mid == member.mid)
+        return self._database_id == member.database_id
+
+    def __ne__(self, member):
+        assert isinstance(member, DummyMember)
+        assert (self._database_id == member.database_id) == (self._mid == member.mid)
+        return self._database_id != member.database_id
+
+    def __cmp__(self, member):
+        assert isinstance(member, DummyMember)
+        assert (self._database_id == member.database_id) == (self._mid == member.mid)
+        return cmp(self._database_id, member.database_id)
+
+    def __hash__(self):
+        """
+        Allows Member classes to be used as keys in a dictionary.
+        """
+        return self._public_key.__hash__()
+
+    def __str__(self):
+        """
+        Returns a human readable string representing the member.
+        """
+        return "<%s %d %s>" % (self.__class__.__name__, self._database_id, self._mid.encode("HEX"))
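sign() and verify() hash the same byte range, so a signature produced by a member that holds the private key verifies against the data with the signature appended. A rough round trip, assuming member was constructed from a complete EC key pair:

    payload = "serialized message body"
    signature = member.sign(payload)  # raises RuntimeError without a private key
    packet = payload + signature      # DATA is the signed data and the signature concatenated
    assert member.verify(packet, signature, length=len(payload))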
+ """ + return "<%s %d %s>" % (self.__class__.__name__, self._database_id, self._mid.encode("HEX")) diff -Nru tribler-6.2.0/Tribler/dispersy/message.py tribler-6.2.0/Tribler/dispersy/message.py --- tribler-6.2.0/Tribler/dispersy/message.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/message.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,656 @@ +import logging +logger = logging.getLogger(__name__) + +from .meta import MetaObject + +# +# Exceptions +# + + +class DelayPacket(Exception): + + """ + Uses an identifier to match request to response. + """ + def __init__(self, msg, community): + super(DelayPacket, self).__init__(msg) + self._community = community + + def create_request(self, candidate, delayed): + # create and send a request. once the response is received the _process_delayed_packet can + # pass the (candidate, delayed) tuple to dispersy for reprocessing + # @return True if actual request is made + raise NotImplementedError() + + def _process_delayed_packet(self, response, candidate, delayed): + if response: + # process the response and the delayed message + self._community.dispersy.on_incoming_packets([(candidate, delayed)]) + self._community.dispersy.statistics.delay_success += 1 + else: + # timeout, do nothing + self._community.dispersy.statistics.delay_timeout += 1 + + +class DelayPacketByMissingMember(DelayPacket): + + def __init__(self, community, missing_member_id): + assert isinstance(missing_member_id, str) + assert len(missing_member_id) == 20 + super(DelayPacketByMissingMember, self).__init__("Missing member", community) + self._missing_member_id = missing_member_id + + def create_request(self, candidate, delayed): + return self._community.dispersy.create_missing_identity(self._community, candidate, self._community.dispersy.get_temporary_member_from_id(self._missing_member_id), self._process_delayed_packet, (candidate, delayed)) + + +class DelayPacketByMissingLastMessage(DelayPacket): + + def __init__(self, community, member, message, count): + if __debug__: + from .member import Member + assert isinstance(member, Member) + assert isinstance(message, Message) + assert isinstance(count, int) + super(DelayPacketByMissingLastMessage, self).__init__("Missing last message", community) + self._member = member + self._message = message + self._count = count + + def create_request(self, candidate, delayed): + return self._community.dispersy.create_missing_last_message(self._community, candidate, self._member, self._message, self._count, self._process_delayed_packet, (candidate, delayed)) + + +class DelayPacketByMissingMessage(DelayPacket): + + def __init__(self, community, member, global_time): + if __debug__: + from .community import Community + from .member import Member + assert isinstance(community, Community) + assert isinstance(member, Member) + assert isinstance(global_time, (int, long)) + super(DelayPacketByMissingMessage, self).__init__("Missing message (new style)", community) + self._member = member + self._global_time = global_time + + def create_request(self, candidate, delayed): + return self._community.dispersy.create_missing_message(self._community, candidate, self._member, self._global_time, self._process_delayed_packet, (candidate, delayed)) + + +class DropPacket(Exception): + + """ + Raised by Conversion.decode_message when the packet is invalid. + I.e. does not conform to valid syntax, contains malicious + behaviour, etc. + """ + pass + + +class DelayMessage(Exception): + + """ + Uses an identifier to match request to response. 
+
+
+class DelayMessage(Exception):
+
+    """
+    Uses an identifier to match a request to its response.
+
+    Ensure that Dispersy.handle_missing_messages is called for each incoming message that may have
+    been requested.
+    """
+    def __init__(self, delayed):
+        if __debug__:
+            from .message import Message
+            assert isinstance(delayed, Message.Implementation), delayed
+        super(DelayMessage, self).__init__(self.__class__.__name__)
+        self._delayed = delayed
+
+    @property
+    def delayed(self):
+        return self._delayed
+
+    def duplicate(self, delayed):
+        """
+        Create another instance of the same class with another DELAYED.
+        """
+        return self.__class__(delayed)
+
+    def create_request(self):
+        # create and send a request.  once the response is received the _process_delayed_message can
+        # pass the (candidate, delayed) tuple to dispersy for reprocessing
+        # @return True if an actual request is made
+        raise NotImplementedError()
+
+    def _process_delayed_message(self, response):
+        if response:
+            logger.debug("resume %s (received %s)", self._delayed, response)
+
+            # inform the delayed message of the reason why it is resumed
+            self._delayed.resume = response
+
+            # process the response and the delayed message
+            self._delayed.community.dispersy.on_messages([self._delayed])
+            self._delayed.community.dispersy.statistics.delay_success += 1
+        else:
+            # timeout, do nothing
+            logger.debug("ignore %s (no response was received)", self._delayed)
+            self._delayed.community.dispersy.statistics.delay_timeout += 1
+
+
+class DelayMessageByProof(DelayMessage):
+
+    def create_request(self):
+        community = self._delayed.community
+        return community.dispersy.create_missing_proof(community, self._delayed.candidate, self._delayed, self._process_delayed_message)
+
+
+class DelayMessageBySequence(DelayMessage):
+
+    def __init__(self, delayed, missing_low, missing_high):
+        assert isinstance(missing_low, (int, long))
+        assert isinstance(missing_high, (int, long))
+        assert 0 < missing_low <= missing_high
+        super(DelayMessageBySequence, self).__init__(delayed)
+        self._missing_low = missing_low
+        self._missing_high = missing_high
+
+    def duplicate(self, delayed):
+        return self.__class__(delayed, self._missing_low, self._missing_high)
+
+    def create_request(self):
+        community = self._delayed.community
+        return community.dispersy.create_missing_sequence(community, self._delayed.candidate, self._delayed.authentication.member, self._delayed.meta, self._missing_low, self._missing_high, self._process_delayed_message)
+
+
+class DelayMessageByMissingMessage(DelayMessage):
+
+    def __init__(self, delayed, member, global_time):
+        if __debug__:
+            from .member import Member
+            assert isinstance(member, Member)
+            assert isinstance(global_time, (int, long))
+        super(DelayMessageByMissingMessage, self).__init__(delayed)
+        self._member = member
+        self._global_time = global_time
+
+    def duplicate(self, delayed):
+        return self.__class__(delayed, self._member, self._global_time)
+
+    def create_request(self):
+        community = self._delayed.community
+        return community.dispersy.create_missing_message(community, self._delayed.candidate, self._member, self._global_time, self._process_delayed_message)
+ """ + def __init__(self, dropped, msg): + if __debug__: + from .message import Message + assert isinstance(dropped, Message.Implementation) + assert isinstance(msg, (str, unicode)) + self._dropped = dropped + super(DropMessage, self).__init__(msg) + + @property + def dropped(self): + return self._dropped + + def duplicate(self, dropped): + """ + Create another instance of the same class with another DELAYED. + """ + return self.__class__(dropped, self.message) + + def __str__(self): + return "".join((super(DropMessage, self).__str__(), " [", self._dropped.name, "]")) +# +# batch +# + + +class BatchConfiguration(object): + + def __init__(self, max_window=0.0, priority=0, max_size=1024, max_age=300.0): + """ + Per meta message configuration on batch handling. + + MAX_WINDOW sets the maximum size, in seconds, of the window. A larger window results in + larger batches and a longer average delay for incoming messages. Setting MAX_WINDOW to zero + disables batching, in this case all other parameters are ignored. + + PRIORITY sets the Callback priority of the task that processes the batch. A higher priority + will result in earlier handling when there is CPU contention. + + MAX_SIZE sets the maximum size of the batch. A new batch will be created when this size is + reached, even when new messages would fall within MAX_WINDOW size. A larger MAX_SIZE + results in more processing time per batch and will reduce responsiveness as the processing + thread is occupied. Also, when a batch reaches MAX_SIZE it is processed immediately. + + MAX_AGE sets the maximum age of the batch. This is useful for messages that require a + response. When the requests are delayed for to long they will time out, in this case a + response no longer needs to be sent. MAX_AGE for the request messages should hence be lower + than the used timeout + max_window on the response messages. 
+ """ + assert isinstance(max_window, float) + assert 0.0 <= max_window, max_window + assert isinstance(priority, int) + assert isinstance(max_size, int) + assert 0 < max_size, max_size + assert isinstance(max_age, float) + assert 0.0 <= max_window < max_age, [max_window, max_age] + self._max_window = max_window + self._priority = priority + self._max_size = max_size + self._max_age = max_age + + @property + def enabled(self): + # enabled when max_window is positive + return 0.0 < self._max_window + + @property + def max_window(self): + return self._max_window + + @property + def priority(self): + return self._priority + + @property + def max_size(self): + return self._max_size + + @property + def max_age(self): + return self._max_age + +# +# packet +# + + +class Packet(MetaObject.Implementation): + + def __init__(self, meta, packet, packet_id): + assert isinstance(packet, str) + assert isinstance(packet_id, (int, long)) + super(Packet, self).__init__(meta) + self._packet = packet + self._packet_id = packet_id + + @property + def community(self): + return self._meta._community + + @property + def name(self): + return self._meta._name + + @property + def database_id(self): + return self._meta._database_id + + @property + def resolution(self): + return self._meta._resolution + + @property + def check_callback(self): + return self._meta._check_callback + + @property + def handle_callback(self): + return self._meta._handle_callback + + @property + def undo_callback(self): + return self._meta._undo_callback + + @property + def priority(self): + return self._meta._priority + + @property + def delay(self): + return self._meta._delay + + @property + def packet(self): + return self._packet + + @property + def packet_id(self): + return self._packet_id + + @packet_id.setter + def packet_id(self, packet_id): + assert isinstance(packet_id, (int, long)) + self._packet_id = packet_id + + def load_message(self): + message = self._meta.community.dispersy.convert_packet_to_message(self._packet, self._meta.community, verify=False) + message.packet_id = self._packet_id + return message + + def __str__(self): + return "<%s.%s %s %dbytes>" % (self._meta.__class__.__name__, self.__class__.__name__, self._meta._name, len(self._packet)) + +# +# message +# + + +class Message(MetaObject): + + class Implementation(Packet): + + def __init__(self, meta, authentication, resolution, distribution, destination, payload, conversion=None, candidate=None, packet="", packet_id=0, sign=True): + if __debug__: + from .conversion import Conversion + from .candidate import Candidate + assert isinstance(meta, Message), "META has invalid type '%s'" % type(meta) + assert isinstance(authentication, meta.authentication.Implementation), "AUTHENTICATION has invalid type '%s'" % type(authentication) + assert isinstance(resolution, meta.resolution.Implementation), "RESOLUTION has invalid type '%s'" % type(resolution) + assert isinstance(distribution, meta.distribution.Implementation), "DISTRIBUTION has invalid type '%s'" % type(distribution) + assert isinstance(destination, meta.destination.Implementation), "DESTINATION has invalid type '%s'" % type(destination) + assert isinstance(payload, meta.payload.Implementation), "PAYLOAD has invalid type '%s'" % type(payload) + assert conversion is None or isinstance(conversion, Conversion), "CONVERSION has invalid type '%s'" % type(conversion) + assert candidate is None or isinstance(candidate, Candidate) + assert isinstance(packet, str) + assert isinstance(packet_id, (int, long)) + 
+
+#
+# packet
+#
+
+
+class Packet(MetaObject.Implementation):
+
+    def __init__(self, meta, packet, packet_id):
+        assert isinstance(packet, str)
+        assert isinstance(packet_id, (int, long))
+        super(Packet, self).__init__(meta)
+        self._packet = packet
+        self._packet_id = packet_id
+
+    @property
+    def community(self):
+        return self._meta._community
+
+    @property
+    def name(self):
+        return self._meta._name
+
+    @property
+    def database_id(self):
+        return self._meta._database_id
+
+    @property
+    def resolution(self):
+        return self._meta._resolution
+
+    @property
+    def check_callback(self):
+        return self._meta._check_callback
+
+    @property
+    def handle_callback(self):
+        return self._meta._handle_callback
+
+    @property
+    def undo_callback(self):
+        return self._meta._undo_callback
+
+    @property
+    def priority(self):
+        return self._meta._priority
+
+    @property
+    def delay(self):
+        return self._meta._delay
+
+    @property
+    def packet(self):
+        return self._packet
+
+    @property
+    def packet_id(self):
+        return self._packet_id
+
+    @packet_id.setter
+    def packet_id(self, packet_id):
+        assert isinstance(packet_id, (int, long))
+        self._packet_id = packet_id
+
+    def load_message(self):
+        message = self._meta.community.dispersy.convert_packet_to_message(self._packet, self._meta.community, verify=False)
+        message.packet_id = self._packet_id
+        return message
+
+    def __str__(self):
+        return "<%s.%s %s %dbytes>" % (self._meta.__class__.__name__, self.__class__.__name__, self._meta._name, len(self._packet))
+
+#
+# message
+#
+
+
+class Message(MetaObject):
+
+    class Implementation(Packet):
+
+        def __init__(self, meta, authentication, resolution, distribution, destination, payload, conversion=None, candidate=None, packet="", packet_id=0, sign=True):
+            if __debug__:
+                from .conversion import Conversion
+                from .candidate import Candidate
+                assert isinstance(meta, Message), "META has invalid type '%s'" % type(meta)
+                assert isinstance(authentication, meta.authentication.Implementation), "AUTHENTICATION has invalid type '%s'" % type(authentication)
+                assert isinstance(resolution, meta.resolution.Implementation), "RESOLUTION has invalid type '%s'" % type(resolution)
+                assert isinstance(distribution, meta.distribution.Implementation), "DISTRIBUTION has invalid type '%s'" % type(distribution)
+                assert isinstance(destination, meta.destination.Implementation), "DESTINATION has invalid type '%s'" % type(destination)
+                assert isinstance(payload, meta.payload.Implementation), "PAYLOAD has invalid type '%s'" % type(payload)
+                assert conversion is None or isinstance(conversion, Conversion), "CONVERSION has invalid type '%s'" % type(conversion)
+                assert candidate is None or isinstance(candidate, Candidate)
+                assert isinstance(packet, str)
+                assert isinstance(packet_id, (int, long))
+
+            super(Message.Implementation, self).__init__(meta, packet, packet_id)
+            self._authentication = authentication
+            self._resolution = resolution
+            self._distribution = distribution
+            self._destination = destination
+            self._payload = payload
+            self._candidate = candidate
+
+            # _RESUME contains the message that caused SELF to be processed after it was delayed
+            self._resume = None
+
+            # allow setup parts.  used to set up a callback for when something changes that
+            # requires self._packet to be generated again
+            self._authentication.setup(self)
+            # self._resolution.setup(self)
+            # self._distribution.setup(self)
+            # self._destination.setup(self)
+            # self._payload.setup(self)
+
+            if conversion:
+                self._conversion = conversion
+            elif packet:
+                self._conversion = meta.community.get_conversion_for_packet(packet)
+            else:
+                self._conversion = meta.community.get_conversion_for_message(self)
+
+            if not packet:
+                self._packet = self._conversion.encode_message(self, sign=sign)
+
+        @property
+        def conversion(self):
+            return self._conversion
+
+        @property
+        def authentication(self):
+            return self._authentication
+
+        @property
+        def resolution(self):
+            return self._resolution
+
+        @property
+        def distribution(self):
+            return self._distribution
+
+        @property
+        def destination(self):
+            return self._destination
+
+        @property
+        def payload(self):
+            return self._payload
+
+        @property
+        def candidate(self):
+            return self._candidate
+
+        @property
+        def resume(self):
+            return self._resume
+
+        @resume.setter
+        def resume(self, message):
+            assert isinstance(message, Message.Implementation), type(message)
+            self._resume = message
+
+        def load_message(self):
+            return self
+
+        def regenerate_packet(self, packet=""):
+            if packet:
+                self._packet = packet
+            else:
+                self._packet = self._conversion.encode_message(self)
+
+        def __str__(self):
+            return "<%s.%s %s %dbytes>" % (self._meta.__class__.__name__, self.__class__.__name__, self._meta._name, len(self._packet))
+
+    def __init__(self, community, name, authentication, resolution, distribution, destination, payload, check_callback, handle_callback, undo_callback=None, batch=None):
+        if __debug__:
+            from .community import Community
+            from .authentication import Authentication
+            from .resolution import Resolution, DynamicResolution
+            from .destination import Destination
+            from .distribution import Distribution
+            from .payload import Payload
+            assert isinstance(community, Community), "COMMUNITY has invalid type '%s'" % type(community)
+            assert isinstance(name, unicode), "NAME has invalid type '%s'" % type(name)
+            assert isinstance(authentication, Authentication), "AUTHENTICATION has invalid type '%s'" % type(authentication)
+            assert isinstance(resolution, Resolution), "RESOLUTION has invalid type '%s'" % type(resolution)
+            assert isinstance(distribution, Distribution), "DISTRIBUTION has invalid type '%s'" % type(distribution)
+            assert isinstance(destination, Destination), "DESTINATION has invalid type '%s'" % type(destination)
+            assert isinstance(payload, Payload), "PAYLOAD has invalid type '%s'" % type(payload)
+            assert callable(check_callback)
+            assert callable(handle_callback)
+            assert undo_callback is None or callable(undo_callback), undo_callback
+            if isinstance(resolution, DynamicResolution):
+                assert callable(undo_callback), "UNDO_CALLBACK must be specified when using the DynamicResolution policy"
+            assert batch is None or isinstance(batch, BatchConfiguration)
+            assert self.check_policy_combination(authentication, resolution, distribution, destination)
+
+        self._community = community
+        self._name = name
+        self._authentication = authentication
+        self._resolution = resolution
+        self._distribution = distribution
+        self._destination = destination
+        self._payload = payload
+        self._check_callback = check_callback
+        self._handle_callback = handle_callback
+        self._undo_callback = undo_callback
+        self._batch = BatchConfiguration() if batch is None else batch
+
+        # use the cache to avoid database queries
+        cache = community.meta_message_cache.get(name)
+        if cache:
+            self._database_id = cache["id"]
+        else:
+            # ensure that there is a database id associated to this meta message name
+            community.dispersy.database.execute(u"INSERT INTO meta_message (community, name, cluster, priority, direction) VALUES (?, ?, 0, 128, 1)",
+                                                (community.database_id, name))
+            self._database_id = community.dispersy.database.last_insert_rowid
+            community.meta_message_cache[name] = {"id": self._database_id, "cluster": 0, "priority": 128, "direction": 1}
+
+        # allow optional setup methods to initialize the specific parts of the meta message
+        self._authentication.setup(self)
+        self._resolution.setup(self)
+        self._distribution.setup(self)
+        self._destination.setup(self)
+        self._payload.setup(self)
+
+    @property
+    def community(self):
+        return self._community
+
+    @property
+    def name(self):
+        return self._name
+
+    @property
+    def database_id(self):
+        return self._database_id
+
+    @property
+    def authentication(self):
+        return self._authentication
+
+    @property
+    def resolution(self):
+        return self._resolution
+
+    @property
+    def distribution(self):
+        return self._distribution
+
+    @property
+    def destination(self):
+        return self._destination
+
+    @property
+    def payload(self):
+        return self._payload
+
+    @property
+    def check_callback(self):
+        return self._check_callback
+
+    @property
+    def handle_callback(self):
+        return self._handle_callback
+
+    @property
+    def undo_callback(self):
+        return self._undo_callback
+
+    @property
+    def batch(self):
+        return self._batch
+
+    def impl(self, authentication=(), resolution=(), distribution=(), destination=(), payload=(), *args, **kargs):
+        if __debug__:
+            assert isinstance(authentication, tuple), type(authentication)
+            assert isinstance(resolution, tuple), type(resolution)
+            assert isinstance(distribution, tuple), type(distribution)
+            assert isinstance(destination, tuple), type(destination)
+            assert isinstance(payload, tuple), type(payload)
+            try:
+                authentication_impl = self._authentication.Implementation(self._authentication, *authentication)
+                resolution_impl = self._resolution.Implementation(self._resolution, *resolution)
+                distribution_impl = self._distribution.Implementation(self._distribution, *distribution)
+                destination_impl = self._destination.Implementation(self._destination, *destination)
+                payload_impl = self._payload.Implementation(self._payload, *payload)
+            except TypeError:
+                logger.error("message name: %s", self._name)
+                logger.error("authentication: %s.Implementation", self._authentication.__class__.__name__)
+                logger.error("resolution: %s.Implementation", self._resolution.__class__.__name__)
+                logger.error("distribution: %s.Implementation", self._distribution.__class__.__name__)
+                logger.error("destination: %s.Implementation", self._destination.__class__.__name__)
+                logger.error("payload: %s.Implementation", self._payload.__class__.__name__)
+                raise
+            else:
+                return self.Implementation(self, authentication_impl, resolution_impl, distribution_impl, destination_impl, payload_impl, *args, **kargs)
+
+        return self.Implementation(self,
+                                   self._authentication.Implementation(self._authentication, *authentication),
+                                   self._resolution.Implementation(self._resolution, *resolution),
+                                   self._distribution.Implementation(self._distribution, *distribution),
+                                   self._destination.Implementation(self._destination, *destination),
+                                   self._payload.Implementation(self._payload, *payload),
+                                   *args, **kargs)
+
+    def __str__(self):
+        return "<%s %s>" % (self.__class__.__name__, self._name)
+
+    @staticmethod
+    def check_policy_combination(authentication, resolution, distribution, destination):
+        from .authentication import Authentication, NoAuthentication, MemberAuthentication, DoubleMemberAuthentication
+        from .resolution import Resolution, PublicResolution, LinearResolution, DynamicResolution
+        from .distribution import Distribution, RelayDistribution, DirectDistribution, FullSyncDistribution, LastSyncDistribution
+        from .destination import Destination, CandidateDestination, CommunityDestination
+
+        assert isinstance(authentication, Authentication)
+        assert isinstance(resolution, Resolution)
+        assert isinstance(distribution, Distribution)
+        assert isinstance(destination, Destination)
+
+        def require(a, b, c):
+            if not isinstance(b, c):
+                raise ValueError("%s does not support %s.  Allowed options are: %s" % (a.__class__.__name__, b.__class__.__name__, ", ".join([x.__name__ for x in c])))
+
+        if isinstance(authentication, NoAuthentication):
+            require(authentication, resolution, (PublicResolution,))
+            require(authentication, distribution, (RelayDistribution, DirectDistribution))
+            require(authentication, destination, (CandidateDestination, CommunityDestination))
+        elif isinstance(authentication, MemberAuthentication):
+            require(authentication, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(authentication, distribution, (RelayDistribution, DirectDistribution, FullSyncDistribution, LastSyncDistribution))
+            require(authentication, destination, (CandidateDestination, CommunityDestination))
+        elif isinstance(authentication, DoubleMemberAuthentication):
+            require(authentication, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(authentication, distribution, (RelayDistribution, DirectDistribution, FullSyncDistribution, LastSyncDistribution))
+            require(authentication, destination, (CandidateDestination, CommunityDestination))
+        else:
+            raise ValueError("%s is not supported" % authentication.__class__.__name__)
+
+        if isinstance(resolution, PublicResolution):
+            require(resolution, authentication, (NoAuthentication, MemberAuthentication, DoubleMemberAuthentication))
+            require(resolution, distribution, (RelayDistribution, DirectDistribution, FullSyncDistribution, LastSyncDistribution))
+            require(resolution, destination, (CandidateDestination, CommunityDestination))
+        elif isinstance(resolution, LinearResolution):
+            require(resolution, authentication, (MemberAuthentication, DoubleMemberAuthentication))
+            require(resolution, distribution, (RelayDistribution, DirectDistribution, FullSyncDistribution, LastSyncDistribution))
+            require(resolution, destination, (CandidateDestination, CommunityDestination))
+        elif isinstance(resolution, DynamicResolution):
+            pass
+        else:
+            raise ValueError("%s is not supported" % resolution.__class__.__name__)
+
+        if isinstance(distribution, RelayDistribution):
+            require(distribution, authentication, (NoAuthentication, MemberAuthentication, DoubleMemberAuthentication))
+            require(distribution, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(distribution, destination, (CandidateDestination,))
+        elif isinstance(distribution, DirectDistribution):
+            require(distribution, authentication, (NoAuthentication, MemberAuthentication, DoubleMemberAuthentication))
+            require(distribution, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(distribution, destination, (CandidateDestination, CommunityDestination))
+        elif isinstance(distribution, FullSyncDistribution):
+            require(distribution, authentication, (MemberAuthentication, DoubleMemberAuthentication))
+            require(distribution, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(distribution, destination, (CommunityDestination,))
+            if isinstance(authentication, DoubleMemberAuthentication) and distribution.enable_sequence_number:
+                raise ValueError("%s may not be used with %s when sequence numbers are enabled" % (distribution.__class__.__name__, authentication.__class__.__name__))
+        elif isinstance(distribution, LastSyncDistribution):
+            require(distribution, authentication, (MemberAuthentication, DoubleMemberAuthentication))
+            require(distribution, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(distribution, destination, (CommunityDestination,))
+        else:
+            raise ValueError("%s is not supported" % distribution.__class__.__name__)
+
+        if isinstance(destination, CandidateDestination):
+            require(destination, authentication, (NoAuthentication, MemberAuthentication, DoubleMemberAuthentication))
+            require(destination, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(destination, distribution, (RelayDistribution, DirectDistribution))
+        elif isinstance(destination, CommunityDestination):
+            require(destination, authentication, (NoAuthentication, MemberAuthentication, DoubleMemberAuthentication))
+            require(destination, resolution, (PublicResolution, LinearResolution, DynamicResolution))
+            require(destination, distribution, (DirectDistribution, FullSyncDistribution, LastSyncDistribution))
+        else:
+            raise ValueError("%s is not supported" % destination.__class__.__name__)
+
+        return True
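check_policy_combination is effectively a compatibility matrix: each of the four policies vetoes the others, so an invalid pairing fails with a ValueError when the meta message is defined rather than when it is first used. For example, an unsigned (NoAuthentication) message cannot be synchronized; the constructor arguments below are illustrative, see distribution.py and destination.py for the real signatures:

    # accepted: signed, publicly readable, fully synchronized, community-wide
    Message.check_policy_combination(MemberAuthentication(), PublicResolution(),
                                     FullSyncDistribution(u"ASC", 128, False),
                                     CommunityDestination(10))

    # raises ValueError: NoAuthentication only allows Relay/DirectDistribution
    Message.check_policy_combination(NoAuthentication(), PublicResolution(),
                                     FullSyncDistribution(u"ASC", 128, False),
                                     CommunityDestination(10))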
diff -Nru tribler-6.2.0/Tribler/dispersy/meta.py tribler-6.2.0/Tribler/dispersy/meta.py
--- tribler-6.2.0/Tribler/dispersy/meta.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/meta.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,44 @@
+import logging
+logger = logging.getLogger(__name__)
+
+import inspect
+
+
+class MetaObject(object):
+
+    class Implementation(object):
+
+        def __init__(self, meta):
+            assert isinstance(meta, MetaObject), type(meta)
+            self._meta = meta
+
+        @property
+        def meta(self):
+            return self._meta
+
+        def __str__(self):
+            return "<%s.%s>" % (self._meta.__class__.__name__, self.__class__.__name__)
+
+    def __str__(self):
+        return "<%s>" % self.__class__.__name__
+
+    def implement_class(self, cls, *args, **kargs):
+        assert cls == self.Implementation or cls in self.Implementation.__subclasses__(), (cls, self.Implementation)
+        if __debug__:
+            try:
+                return cls(self, *args, **kargs)
+            except TypeError:
+                logger.error("TypeError in %s.%s", self.__class__.__name__, self.Implementation.__name__)
+                logger.error("self.Implementation takes: %s", inspect.getargspec(self.Implementation.__init__))
+                logger.error("self.Implementation got: %s %s", args, kargs)
+                raise
+
+        else:
+            return cls(self, *args, **kargs)
+
+    def implement(self, *args, **kargs):
+        if __debug__:
+            return self.implement_class(self.Implementation, *args, **kargs)
+
+        else:
+            return self.Implementation(self, *args, **kargs)
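meta.py captures the meta/implementation split used throughout Dispersy: a MetaObject holds the per-definition state, and its nested Implementation class holds per-instance state, constructed through implement(). A minimal use of the pattern:

    class Color(MetaObject):

        class Implementation(MetaObject.Implementation):

            def __init__(self, meta, shade):
                super(Color.Implementation, self).__init__(meta)
                self.shade = shade

    reddish = Color().implement(0xff0000)  # the instance keeps a reference to its meta
    assert isinstance(reddish.meta, Color)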
+ """ + assert is_address(destination_address), destination_address + assert is_address(source_lan_address), source_lan_address + assert is_address(source_wan_address), source_wan_address + assert isinstance(advice, bool), advice + assert isinstance(connection_type, unicode) and connection_type in (u"unknown", u"public", u"symmetric-NAT"), connection_type + assert sync is None or isinstance(sync, tuple), sync + assert sync is None or len(sync) == 5, sync + assert isinstance(identifier, int), identifier + assert 0 <= identifier < 2 ** 16, identifier + super(IntroductionRequestPayload.Implementation, self).__init__(meta) + self._destination_address = destination_address + self._source_lan_address = source_lan_address + self._source_wan_address = source_wan_address + self._advice = advice + self._connection_type = connection_type + self._identifier = identifier + if sync: + self._time_low, self._time_high, self._modulo, self._offset, self._bloom_filter = sync + assert isinstance(self._time_low, (int, long)) + assert 0 < self._time_low + assert isinstance(self._time_high, (int, long)) + assert self._time_high == 0 or self._time_low <= self._time_high + assert isinstance(self._modulo, int) + assert 0 < self._modulo < 2 ** 16 + assert isinstance(self._offset, int) + assert 0 <= self._offset < self._modulo + assert isinstance(self._bloom_filter, BloomFilter) + else: + self._time_low, self._time_high, self._modulo, self._offset, self._bloom_filter = 0, 0, 1, 0, None + + @property + def destination_address(self): + return self._destination_address + + @property + def source_lan_address(self): + return self._source_lan_address + + @property + def source_wan_address(self): + return self._source_wan_address + + @property + def advice(self): + return self._advice + + @property + def connection_type(self): + return self._connection_type + + @property + def sync(self): + return True if self._bloom_filter else False + + @property + def time_low(self): + return self._time_low + + @property + def time_high(self): + return self._time_high + + @property + def has_time_high(self): + return self._time_high > 0 + + @property + def modulo(self): + return self._modulo + + @property + def offset(self): + return self._offset + + @property + def bloom_filter(self): + return self._bloom_filter + + @property + def identifier(self): + return self._identifier + + +class IntroductionResponsePayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, destination_address, source_lan_address, source_wan_address, lan_introduction_address, wan_introduction_address, connection_type, tunnel, identifier): + """ + Create the payload for an introduction-response message. + + DESTINATION_ADDRESS is the address of the receiver. Effectively this should be the + wan address that others can use to contact the receiver. + + SOURCE_LAN_ADDRESS is the lan address of the sender. Nodes in the same LAN + should use this address to communicate. + + SOURCE_WAN_ADDRESS is the wan address of the sender. Nodes not in the same + LAN should use this address to communicate. + + LAN_INTRODUCTION_ADDRESS is the lan address of the node that the sender + advises the receiver to contact. This address is zero when the associated request did + not want advice. + + WAN_INTRODUCTION_ADDRESS is the wan address of the node that the sender + advises the receiver to contact. This address is zero when the associated request did + not want advice. 
+
+
+class IntroductionResponsePayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, destination_address, source_lan_address, source_wan_address, lan_introduction_address, wan_introduction_address, connection_type, tunnel, identifier):
+            """
+            Create the payload for an introduction-response message.
+
+            DESTINATION_ADDRESS is the address of the receiver.  Effectively this should be the
+            wan address that others can use to contact the receiver.
+
+            SOURCE_LAN_ADDRESS is the lan address of the sender.  Nodes in the same LAN
+            should use this address to communicate.
+
+            SOURCE_WAN_ADDRESS is the wan address of the sender.  Nodes not in the same
+            LAN should use this address to communicate.
+
+            LAN_INTRODUCTION_ADDRESS is the lan address of the node that the sender
+            advises the receiver to contact.  This address is zero when the associated request did
+            not want advice.
+
+            WAN_INTRODUCTION_ADDRESS is the wan address of the node that the sender
+            advises the receiver to contact.  This address is zero when the associated request did
+            not want advice.
+
+            CONNECTION_TYPE is a unicode string indicating the connection type that the message
+            creator has.  Currently the following values are supported: u"unknown", u"public", and
+            u"symmetric-NAT".
+
+            TUNNEL is a boolean indicating that the connection is tunneled and all messages sent to
+            the introduced candidate require a ffffffff prefix.
+
+            IDENTIFIER is a number that was given in the associated introduction-request.  This
+            number allows the receiver to distinguish between multiple introduction-response messages.
+
+            When the associated request wanted advice the sender will also send a puncture-request
+            message to either the lan_introduction_address or the wan_introduction_address
+            (depending on their positions).  The introduced node must send a puncture message to the
+            receiver to punch a hole in its NAT.
+            """
+            assert is_address(destination_address)
+            assert is_address(source_lan_address)
+            assert is_address(source_wan_address)
+            assert is_address(lan_introduction_address)
+            assert is_address(wan_introduction_address)
+            assert isinstance(connection_type, unicode) and connection_type in (u"unknown", u"public", u"symmetric-NAT")
+            assert isinstance(tunnel, bool)
+            assert isinstance(identifier, int)
+            assert 0 <= identifier < 2 ** 16
+            super(IntroductionResponsePayload.Implementation, self).__init__(meta)
+            self._destination_address = destination_address
+            self._source_lan_address = source_lan_address
+            self._source_wan_address = source_wan_address
+            self._lan_introduction_address = lan_introduction_address
+            self._wan_introduction_address = wan_introduction_address
+            self._connection_type = connection_type
+            self._tunnel = tunnel
+            self._identifier = identifier
+
+        @property
+        def destination_address(self):
+            return self._destination_address
+
+        @property
+        def source_lan_address(self):
+            return self._source_lan_address
+
+        @property
+        def source_wan_address(self):
+            return self._source_wan_address
+
+        @property
+        def lan_introduction_address(self):
+            return self._lan_introduction_address
+
+        @property
+        def wan_introduction_address(self):
+            return self._wan_introduction_address
+
+        @property
+        def connection_type(self):
+            return self._connection_type
+
+        @property
+        def tunnel(self):
+            return self._tunnel
+
+        @property
+        def identifier(self):
+            return self._identifier
+
+
+class PunctureRequestPayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, lan_walker_address, wan_walker_address, identifier):
+            """
+            Create the payload for a puncture-request payload.
+
+            LAN_WALKER_ADDRESS is the lan address of the node that the sender wants us to
+            contact.  This contact attempt should punch a hole in our NAT to allow the node to
+            connect to us.
+
+            WAN_WALKER_ADDRESS is the wan address of the node that the sender wants us to
+            contact.  This contact attempt should punch a hole in our NAT to allow the node to
+            connect to us.
+
+            IDENTIFIER is a number that was given in the associated introduction-request.  This
+            number allows the receiver to distinguish between multiple introduction-response messages.
+
+            TODO add connection type
+            TODO add tunnel bit
+            """
+            assert is_address(lan_walker_address)
+            assert is_address(wan_walker_address)
+            assert isinstance(identifier, int)
+            assert 0 <= identifier < 2 ** 16
+            super(PunctureRequestPayload.Implementation, self).__init__(meta)
+            self._lan_walker_address = lan_walker_address
+            self._wan_walker_address = wan_walker_address
+            self._identifier = identifier
+
+        @property
+        def lan_walker_address(self):
+            return self._lan_walker_address
+
+        @property
+        def wan_walker_address(self):
+            return self._wan_walker_address
+
+        @property
+        def identifier(self):
+            return self._identifier
+
+
+class PuncturePayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, source_lan_address, source_wan_address, identifier):
+            """
+            Create the payload for a puncture message.
+
+            SOURCE_LAN_ADDRESS is the lan address of the sender.  Nodes in the same LAN
+            should use this address to communicate.
+
+            SOURCE_WAN_ADDRESS is the wan address of the sender.  Nodes not in the same
+            LAN should use this address to communicate.
+
+            IDENTIFIER is a number that was given in the associated introduction-request.  This
+            number allows the receiver to distinguish between multiple introduction-response messages.
+            """
+            assert is_address(source_lan_address)
+            assert is_address(source_wan_address)
+            assert isinstance(identifier, int)
+            assert 0 <= identifier < 2 ** 16
+            super(PuncturePayload.Implementation, self).__init__(meta)
+            self._source_lan_address = source_lan_address
+            self._source_wan_address = source_wan_address
+            self._identifier = identifier
+
+        @property
+        def source_lan_address(self):
+            return self._source_lan_address
+
+        @property
+        def source_wan_address(self):
+            return self._source_wan_address
+
+        @property
+        def identifier(self):
+            return self._identifier
+
+
+class AuthorizePayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, permission_triplets):
+            """
+            Authorize the given permission_triplets.
+
+            The permissions are given in the permission_triplets list.  Each element is a (Member,
+            Message, permission) triplet, where permission can be u"permit", u"authorize",
+            u"revoke", or u"undo".
+            """
+            if __debug__:
+                from .authentication import MemberAuthentication, DoubleMemberAuthentication
+                from .resolution import PublicResolution, LinearResolution, DynamicResolution
+                from .member import Member
+                from .message import Message
+                for triplet in permission_triplets:
+                    assert isinstance(triplet, tuple), triplet
+                    assert len(triplet) == 3, triplet
+                    assert isinstance(triplet[0], Member), triplet[0]
+                    assert isinstance(triplet[1], Message), triplet[1]
+                    assert isinstance(triplet[1].resolution, (PublicResolution, LinearResolution, DynamicResolution)), triplet[1]
+                    assert isinstance(triplet[1].authentication, (MemberAuthentication, DoubleMemberAuthentication)), triplet[1]
+                    assert isinstance(triplet[2], unicode), triplet[2]
+                    assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo"), triplet[2]
+            super(AuthorizePayload.Implementation, self).__init__(meta)
+            self._permission_triplets = permission_triplets
+
+        @property
+        def permission_triplets(self):
+            return self._permission_triplets
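An authorize payload is therefore just a list of such triplets, each granting one permission on one meta message to one member. Shape only (member and meta_message stand in for real Member and Message instances):

    permission_triplets = [
        (member, meta_message, u"permit"),     # may create the message
        (member, meta_message, u"authorize"),  # may grant the permission to others
        (member, meta_message, u"revoke"),     # may take it away again
    ]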
+
+
+class RevokePayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, permission_triplets):
+            """
+            Revoke the given permission_triplets.
+
+            The permissions are given in the permission_triplets list.  Each element is a (Member,
+            Message, permission) triplet, where permission can be u"permit", u"authorize",
+            u"revoke", or u"undo".
+            """
+            if __debug__:
+                from .authentication import MemberAuthentication, DoubleMemberAuthentication
+                from .resolution import PublicResolution, LinearResolution, DynamicResolution
+                from .member import Member
+                from .message import Message
+                for triplet in permission_triplets:
+                    assert isinstance(triplet, tuple)
+                    assert len(triplet) == 3
+                    assert isinstance(triplet[0], Member), triplet
+                    assert isinstance(triplet[1], Message), triplet
+                    assert isinstance(triplet[1].resolution, (PublicResolution, LinearResolution, DynamicResolution)), triplet
+                    assert isinstance(triplet[1].authentication, (MemberAuthentication, DoubleMemberAuthentication)), triplet
+                    assert isinstance(triplet[2], unicode), triplet
+                    assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo"), triplet
+            super(RevokePayload.Implementation, self).__init__(meta)
+            self._permission_triplets = permission_triplets
+
+        @property
+        def permission_triplets(self):
+            return self._permission_triplets
+
+
+class UndoPayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, member, global_time, packet=None):
+            if __debug__:
+                from .member import Member
+                from .message import Packet
+                assert isinstance(member, Member)
+                assert isinstance(global_time, (int, long))
+                assert packet is None or isinstance(packet, Packet)
+                assert global_time > 0
+            super(UndoPayload.Implementation, self).__init__(meta)
+            self._member = member
+            self._global_time = global_time
+            self._packet = packet
+
+        @property
+        def member(self):
+            return self._member
+
+        @property
+        def global_time(self):
+            return self._global_time
+
+        @property
+        def packet(self):
+            return self._packet
+
+        @packet.setter
+        def packet(self, packet):
+            if __debug__:
+                from .message import Packet
+                assert isinstance(packet, Packet), type(packet)
+            self._packet = packet
+ """ + if __debug__: + from .member import Member + from .message import Message + assert isinstance(member, Member) + assert isinstance(message, Message) + assert isinstance(missing_low, (int, long)) + assert isinstance(missing_high, (int, long)) + assert 0 < missing_low <= missing_high + super(MissingSequencePayload.Implementation, self).__init__(meta) + self._member = member + self._message = message + self._missing_low = missing_low + self._missing_high = missing_high + + @property + def member(self): + return self._member + + @property + def message(self): + return self._message + + @property + def missing_low(self): + return self._missing_low + + @property + def missing_high(self): + return self._missing_high + + +class SignaturePayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, identifier, message): + if __debug__: + from .message import Message + assert isinstance(identifier, int), type(identifier) + assert 0 <= identifier < 2 ** 16, identifier + assert isinstance(message, Message.Implementation), type(message) + super(SignaturePayload.Implementation, self).__init__(meta) + self._identifier = identifier + self._message = message + + @property + def identifier(self): + return self._identifier + + @property + def message(self): + return self._message + + +class SignatureRequestPayload(SignaturePayload): + + class Implementation(SignaturePayload.Implementation): + pass + + +class SignatureResponsePayload(SignaturePayload): + + class Implementation(SignaturePayload.Implementation): + pass + + +class IdentityPayload(Payload): + + class Implementation(Payload.Implementation): + pass + + +class MissingIdentityPayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, mid): + assert isinstance(mid, str) + assert len(mid) == 20 + super(MissingIdentityPayload.Implementation, self).__init__(meta) + self._mid = mid + + @property + def mid(self): + return self._mid + + +class DestroyCommunityPayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, degree): + assert isinstance(degree, unicode) + assert degree in (u"soft-kill", u"hard-kill") + super(DestroyCommunityPayload.Implementation, self).__init__(meta) + self._degree = degree + + @property + def degree(self): + return self._degree + + @property + def is_soft_kill(self): + return self._degree == u"soft-kill" + + @property + def is_hard_kill(self): + return self._degree == u"hard-kill" + + +class MissingMessagePayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, member, global_times): + if __debug__: + from .member import Member + assert isinstance(member, Member) + assert isinstance(global_times, (tuple, list)) + assert all(isinstance(global_time, (int, long)) for global_time in global_times) + assert all(global_time > 0 for global_time in global_times) + assert len(global_times) > 0 + assert len(set(global_times)) == len(global_times) + super(MissingMessagePayload.Implementation, self).__init__(meta) + self._member = member + self._global_times = global_times + + @property + def member(self): + return self._member + + @property + def global_times(self): + return self._global_times + + +class MissingLastMessagePayload(Payload): + + class Implementation(Payload.Implementation): + + def __init__(self, meta, member, message, count): + if __debug__: + from .member import Member + assert isinstance(member, Member) + super(MissingLastMessagePayload.Implementation, 
self).__init__(meta)
+            self._member = member
+            self._message = message
+            self._count = count
+
+        @property
+        def member(self):
+            return self._member
+
+        @property
+        def message(self):
+            return self._message
+
+        @property
+        def count(self):
+            return self._count
+
+
+class MissingProofPayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, member, global_time):
+            if __debug__:
+                from .member import Member
+                assert isinstance(member, Member)
+                assert isinstance(global_time, (int, long))
+                assert global_time > 0
+            super(MissingProofPayload.Implementation, self).__init__(meta)
+            self._member = member
+            self._global_time = global_time
+
+        @property
+        def member(self):
+            return self._member
+
+        @property
+        def global_time(self):
+            return self._global_time
+
+
+class DynamicSettingsPayload(Payload):
+
+    class Implementation(Payload.Implementation):
+
+        def __init__(self, meta, policies):
+            """
+            Create a new payload container for a dispersy-dynamic-settings message.
+
+            This message allows the community to start using different policies for one or more of
+            its messages.  Currently only the resolution policy can be dynamically changed.
+
+            POLICIES is a list containing (meta_message, policy) tuples.  The policy that is
+            chosen must be one of the policies defined for the associated meta_message.
+
+            @param policies: A list with the new message policies.
+            @type policies: [(meta_message, policy), ...]
+            """
+            if __debug__:
+                from .message import Message
+                from .resolution import PublicResolution, LinearResolution, DynamicResolution
+                assert isinstance(policies, (tuple, list))
+                for tup in policies:
+                    assert isinstance(tup, tuple)
+                    assert len(tup) == 2
+                    message, policy = tup
+                    assert isinstance(message, Message)
+                    # currently only supporting resolution policy changes
+                    assert isinstance(message.resolution, DynamicResolution)
+                    assert isinstance(policy, (PublicResolution, LinearResolution))
+                    assert policy in message.resolution.policies, "the given policy must be one available at meta message creation"
+
+            super(DynamicSettingsPayload.Implementation, self).__init__(meta)
+            self._policies = policies
+
+        @property
+        def policies(self):
+            """
+            Returns a list or tuple containing the new message policies.
+            @rtype: [(meta_message, policy), ...]
+            """
+            return self._policies
diff -Nru tribler-6.2.0/Tribler/dispersy/python27_ordereddict.py tribler-6.2.0/Tribler/dispersy/python27_ordereddict.py
--- tribler-6.2.0/Tribler/dispersy/python27_ordereddict.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/python27_ordereddict.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,259 @@
+# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy.
+# Passes Python 2.7's test suite and incorporates all the latest updates.
+
+try:
+    from thread import get_ident as _get_ident
+except ImportError:
+    from dummy_thread import get_ident as _get_ident
+
+try:
+    from _abcoll import KeysView, ValuesView, ItemsView
+except ImportError:
+    pass
+
+
+class OrderedDict(dict):
+
+    'Dictionary that remembers insertion order'
+    # An inherited dict maps keys to values.
+    # The inherited dict provides __getitem__, __len__, __contains__, and get.
+    # The remaining methods are order-aware.
+    # Big-O running times for all methods are the same as for regular dictionaries.
+
+    # The internal self.__map dictionary maps keys to links in a doubly linked list.
+    # The circular doubly linked list starts and ends with a sentinel element.
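+    # As an illustration, after adding keys 'a' and then 'b' the circle is:
+    #   root <-> link('a') <-> link('b') <-> root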
+ # The sentinel element never gets deleted (this simplifies the algorithm). + # Each link is stored as a list of length three: [PREV, NEXT, KEY]. + + def __init__(self, *args, **kwds): + '''Initialize an ordered dictionary. Signature is the same as for + regular dictionaries, but keyword arguments are not recommended + because their insertion order is arbitrary. + + ''' + if len(args) > 1: + raise TypeError('expected at most 1 arguments, got %d' % len(args)) + try: + self.__root + except AttributeError: + self.__root = root = [] # sentinel node + root[:] = [root, root, None] + self.__map = {} + self.__update(*args, **kwds) + + def __setitem__(self, key, value, dict_setitem=dict.__setitem__): + 'od.__setitem__(i, y) <==> od[i]=y' + # Setting a new item creates a new link which goes at the end of the linked + # list, and the inherited dictionary is updated with the new key/value pair. + if key not in self: + root = self.__root + last = root[0] + last[1] = root[0] = self.__map[key] = [last, root, key] + dict_setitem(self, key, value) + + def __delitem__(self, key, dict_delitem=dict.__delitem__): + 'od.__delitem__(y) <==> del od[y]' + # Deleting an existing item uses self.__map to find the link which is + # then removed by updating the links in the predecessor and successor nodes. + dict_delitem(self, key) + link_prev, link_next, key = self.__map.pop(key) + link_prev[1] = link_next + link_next[0] = link_prev + + def __iter__(self): + 'od.__iter__() <==> iter(od)' + root = self.__root + curr = root[1] + while curr is not root: + yield curr[2] + curr = curr[1] + + def __reversed__(self): + 'od.__reversed__() <==> reversed(od)' + root = self.__root + curr = root[0] + while curr is not root: + yield curr[2] + curr = curr[0] + + def clear(self): + 'od.clear() -> None. Remove all items from od.' + try: + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + except AttributeError: + pass + dict.clear(self) + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. + + ''' + if not self: + raise KeyError('dictionary is empty') + root = self.__root + if last: + link = root[0] + link_prev = link[0] + link_prev[1] = root + root[0] = link_prev + else: + link = root[1] + link_next = link[1] + root[1] = link_next + link_next[0] = root + key = link[2] + del self.__map[key] + value = dict.pop(self, key) + return key, value + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) items in od' + for k in self: + yield (k, self[k]) + + def update(*args, **kwds): + '''od.update(E, **F) -> None. Update od from dict/iterable E and F. 
+ + If E is a dict instance, does: for k in E: od[k] = E[k] + If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] + Or if E is an iterable of items, does: for k, v in E: od[k] = v + In either case, this is followed by: for k, v in F.items(): od[k] = v + + ''' + if len(args) > 2: + raise TypeError('update() takes at most 2 positional ' + 'arguments (%d given)' % (len(args),)) + elif not args: + raise TypeError('update() takes at least 1 argument (0 given)') + self = args[0] + # Make progressively weaker assumptions about "other" + other = () + if len(args) == 2: + other = args[1] + if isinstance(other, dict): + for key in other: + self[key] = other[key] + elif hasattr(other, 'keys'): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. + If key is not found, d is returned if given, otherwise KeyError is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + + def __reduce__(self): + 'Return state information for pickling' + items = [[k, self[k]] for k in self] + inst_dict = vars(self).copy() + for k in vars(OrderedDict()): + inst_dict.pop(k, None) + if inst_dict: + return (self.__class__, (items,), inst_dict) + return self.__class__, (items,) + + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) + + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S + and values equal to v (which defaults to None). + + ''' + d = cls() + for key in iterable: + d[key] = value + return d + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. 
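+
+        For example (illustrative):
+            >>> OrderedDict([('a', 1), ('b', 2)]) == OrderedDict([('b', 2), ('a', 1)])
+            False
+            >>> OrderedDict([('a', 1), ('b', 2)]) == {'b': 2, 'a': 1}
+            True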
+
+        '''
+        if isinstance(other, OrderedDict):
+            return len(self) == len(other) and self.items() == other.items()
+        return dict.__eq__(self, other)
+
+    def __ne__(self, other):
+        return not self == other
+
+    # -- the following methods are only used in Python 2.7 --
+
+    def viewkeys(self):
+        "od.viewkeys() -> a set-like object providing a view on od's keys"
+        return KeysView(self)
+
+    def viewvalues(self):
+        "od.viewvalues() -> an object providing a view on od's values"
+        return ValuesView(self)
+
+    def viewitems(self):
+        "od.viewitems() -> a set-like object providing a view on od's items"
+        return ItemsView(self)
diff -Nru tribler-6.2.0/Tribler/dispersy/requestcache.py tribler-6.2.0/Tribler/dispersy/requestcache.py
--- tribler-6.2.0/Tribler/dispersy/requestcache.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/requestcache.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,125 @@
+import logging
+logger = logging.getLogger(__name__)
+
+from random import random
+
+
+def identifier_to_string(identifier):
+    return identifier.encode("HEX") if isinstance(identifier, str) else identifier
+
+
+class Cache(object):
+    timeout_delay = 10.0
+    cleanup_delay = 10.0
+
+    def on_timeout(self):
+        raise NotImplementedError()
+
+    def on_cleanup(self):
+        pass
+
+    def __str__(self):
+        return "<%s>" % self.__class__.__name__
+
+
+class RequestCache(object):
+
+    def __init__(self, callback):
+        self._callback = callback
+        self._identifiers = dict()
+
+    def generate_identifier(self):
+        while True:
+            identifier = int(random() * 2 ** 16)
+            if identifier not in self._identifiers:
+                logger.debug("claiming on %s", identifier_to_string(identifier))
+                return identifier
+
+    def claim(self, cache):
+        identifier = self.generate_identifier()
+        logger.debug("claiming on %s for %s", identifier_to_string(identifier), cache)
+        self.set(identifier, cache)
+        return identifier
+
+    def set(self, identifier, cache):
+        assert isinstance(identifier, (int, long, str)), type(identifier)
+        assert identifier not in self._identifiers, identifier
+        assert isinstance(cache, Cache)
+        assert isinstance(cache.timeout_delay, float)
+        assert cache.timeout_delay > 0.0
+
+        # TODO we are slowly making all Dispersy identifiers unicode strings.  Currently the request
+        # cache uses strings instead, hence the conversion to HEX before giving them to _CALLBACK.
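+        # (for example, the integer identifier 1234 becomes the str "1234", which HEX-encodes
+        # to "31323334", so the callback id is u"requestcache-31323334")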
+        # once the request cache identifiers are also unicode, this HEX conversion should be removed
+
+        logger.debug("set %s for %s (%fs timeout)", identifier_to_string(identifier), cache, cache.timeout_delay)
+        self._callback.register(self._on_timeout, (identifier,), id_=u"requestcache-%s" % str(identifier).encode("HEX"), delay=cache.timeout_delay)
+        self._identifiers[identifier] = cache
+        cache.identifier = identifier
+
+    def replace(self, identifier, cache):
+        assert isinstance(identifier, (int, long, str)), type(identifier)
+        assert identifier in self._identifiers, identifier
+        assert isinstance(cache, Cache)
+        assert isinstance(cache.timeout_delay, float)
+        assert cache.timeout_delay > 0.0
+
+        logger.debug("replace %s for %s (%fs timeout)", identifier_to_string(identifier), cache, cache.timeout_delay)
+        self._callback.replace_register(u"requestcache-%s" % str(identifier).encode("HEX"), self._on_timeout, (identifier,), delay=cache.timeout_delay)
+        self._identifiers[identifier] = cache
+        cache.identifier = identifier
+
+    def has(self, identifier, cls):
+        assert isinstance(identifier, (int, long, str)), type(identifier)
+        assert issubclass(cls, Cache), cls
+
+        logger.debug("cache contains %s? %s", identifier_to_string(identifier), identifier in self._identifiers)
+        return isinstance(self._identifiers.get(identifier), cls)
+
+    def get(self, identifier, cls):
+        assert isinstance(identifier, (int, long, str)), type(identifier)
+        assert issubclass(cls, Cache), cls
+
+        cache = self._identifiers.get(identifier)
+        if cache and isinstance(cache, cls):
+            return cache
+
+    def pop(self, identifier, cls):
+        assert isinstance(identifier, (int, long, str)), type(identifier)
+        assert issubclass(cls, Cache), cls
+
+        cache = self._identifiers.get(identifier)
+        if cache and isinstance(cache, cls):
+            assert isinstance(cache.cleanup_delay, float)
+            assert cache.cleanup_delay >= 0.0
+            logger.debug("canceling timeout on %s for %s", identifier_to_string(identifier), cache)
+
+            if cache.cleanup_delay:
+                self._callback.replace_register(u"requestcache-%s" % str(identifier).encode("HEX"), self._on_cleanup, (identifier,), delay=cache.cleanup_delay)
+
+            elif identifier in self._identifiers:
+                self._callback.unregister(u"requestcache-%s" % str(identifier).encode("HEX"))
+                del self._identifiers[identifier]
+
+            return cache
+
+    def _on_timeout(self, identifier):
+        assert identifier in self._identifiers, identifier
+        cache = self._identifiers[identifier]
+        logger.debug("timeout on %s for %s", identifier_to_string(identifier), cache)
+        cache.on_timeout()
+
+        if cache.cleanup_delay:
+            self._callback.replace_register(u"requestcache-%s" % str(identifier).encode("HEX"), self._on_cleanup, (identifier,), delay=cache.cleanup_delay)
+
+        elif identifier in self._identifiers:
+            del self._identifiers[identifier]
+
+    def _on_cleanup(self, identifier):
+        assert identifier in self._identifiers
+        cache = self._identifiers[identifier]
+        logger.debug("cleanup on %s for %s", identifier_to_string(identifier), cache)
+        cache.on_cleanup()
+
+        if identifier in self._identifiers:
+            del self._identifiers[identifier]
diff -Nru tribler-6.2.0/Tribler/dispersy/resolution.py tribler-6.2.0/Tribler/dispersy/resolution.py
--- tribler-6.2.0/Tribler/dispersy/resolution.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/resolution.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,113 @@
+from .meta import MetaObject
+
+
+class Resolution(MetaObject):
+
+    class Implementation(MetaObject.Implementation):
+        pass
+
+    def setup(self, message):
+        """
+        Setup is called after the meta message is initially created.
+        """
+        if __debug__:
+            from .message import Message
+            assert isinstance(message, Message)
+
+
+class PublicResolution(Resolution):
+
+    """
+    PublicResolution allows any member to create a message.
+    """
+    class Implementation(Resolution.Implementation):
+        pass
+
+
+class LinearResolution(Resolution):
+
+    """
+    LinearResolution allows only members that have a specific permission to create a message.
+    """
+    class Implementation(Resolution.Implementation):
+        pass
+
+
+class DynamicResolution(Resolution):
+
+    """
+    DynamicResolution allows the resolution policy to change.
+
+    A special dispersy-dynamic-settings message needs to be created and distributed to change the
+    resolution policy.  Currently the policy can dynamically switch between PublicResolution
+    and LinearResolution.
+    """
+    class Implementation(Resolution.Implementation):
+
+        def __init__(self, meta, policy):
+            """
+            Create a DynamicResolution.Implementation instance.
+
+            This object will contain the resolution policy used for a single message.  This message
+            must use one of the available policies defined in the associated meta_message object.
+            """
+            assert isinstance(policy, (PublicResolution.Implementation, LinearResolution.Implementation))
+            assert policy.meta in meta._policies
+            super(DynamicResolution.Implementation, self).__init__(meta)
+            self._policy = policy
+
+        @property
+        def default(self):
+            return self._meta.default
+
+        @property
+        def policies(self):
+            return self._meta.policies
+
+        @property
+        def policy(self):
+            return self._policy
+
+    def __init__(self, *policies):
+        """
+        Create a DynamicResolution instance.
+
+        The DynamicResolution allows the resolution policy to change by creating and distributing a
+        dispersy-dynamic-settings message.  The available policies are given by POLICIES.
+
+        The first policy will be used by default until a dispersy-dynamic-settings message is
+        received that changes the policy.
+
+        Warning! The order of the given policies is -very- important.  Each policy is assigned a
+        number based on the order (0, 1, ... etc) and this number is used by the
+        dispersy-dynamic-settings message to change the policies.
+
+        @param *policies: A list with available policies.
+        @type *policies: (Resolution, ...)
+        """
+        assert isinstance(policies, tuple)
+        assert 0 < len(policies) < 255
+        assert all(isinstance(x, (PublicResolution, LinearResolution)) for x in policies)
+        self._policies = policies
+
+    @property
+    def default(self):
+        """
+        Returns the default policy, i.e. policies[0].
+        @rtype Resolution
+        """
+        return self._policies[0]
+
+    @property
+    def policies(self):
+        """
+        Returns a tuple containing all available policies.
+        @rtype (Resolution, ...)
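+
+        For example, DynamicResolution(PublicResolution(), LinearResolution()).policies
+        returns that (PublicResolution, LinearResolution) pair, where index 0 is the
+        default policy.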
+ """ + return self._policies + + def setup(self, message): + if __debug__: + assert message.undo_callback, "a message with DynamicResolution policy must have an undo callback" + for policy in self._policies: + policy.setup(message) diff -Nru tribler-6.2.0/Tribler/dispersy/revision.py tribler-6.2.0/Tribler/dispersy/revision.py --- tribler-6.2.0/Tribler/dispersy/revision.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/revision.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,11 @@ +_revision_information = {} + +def update_revision_information(url, revision): + if not (url == "$HeadURL$" and revision == "$Revision$"): + _revision_information[url[10:-2]] = int(revision[11:-2]) + +def get_revision_information(): + return _revision_information + +def get_revision(): + return max(_revision_information.itervalues()) diff -Nru tribler-6.2.0/Tribler/dispersy/script.py tribler-6.2.0/Tribler/dispersy/script.py --- tribler-6.2.0/Tribler/dispersy/script.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/script.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,366 @@ +import logging +logger = logging.getLogger(__name__) + +from time import time + +from .tests.debugcommunity.community import DebugCommunity +from .dispersy import Dispersy +from .tool.lencoder import log, make_valid_key + + +def assert_(value, *args): + if not value: + raise AssertionError(*args) + + +class ScriptBase(object): + + def __init__(self, dispersy, **kargs): + assert isinstance(dispersy, Dispersy), type(dispersy) + super(ScriptBase, self).__init__() + self._kargs = kargs + self._testcases = [] + self._dispersy = dispersy + self._dispersy_database = self._dispersy.database + # self._dispersy.callback.register(self.run) + if self.enable_wait_for_wan_address: + self.add_testcase(self.wait_for_wan_address) + + self.run() + + def add_testcase(self, func, args=()): + assert callable(func) + assert isinstance(args, tuple) + self._testcases.append((func, args)) + + def next_testcase(self, result=None): + if isinstance(result, Exception): + logger.error("exception! 
shutdown") + self._dispersy.callback.stop(timeout=0.0, exception=result) + + elif self._testcases: + call, args = self._testcases.pop(0) + logger.info("start %s", call) + if args: + logger.info("arguments %s", args) + if call.__doc__: + logger.info(call.__doc__) + self._dispersy.callback.register(call, args, callback=self.next_testcase) + + else: + logger.debug("shutdown") + self._dispersy.callback.stop(timeout=0.0) + + def caller(self, run, args=()): + assert callable(run) + assert isinstance(args, tuple) + logger.warning("depricated: use add_testcase instead") + return self.add_testcase(run, args) + + def run(self): + raise NotImplementedError("Must implement a generator or use self.add_testcase(...)") + + @property + def enable_wait_for_wan_address(self): + return True + + def wait_for_wan_address(self): + my_member = self._dispersy.get_new_member(u"low") + community = DebugCommunity.create_community(self._dispersy, my_member) + + while self._dispersy.wan_address[0] == "0.0.0.0": + yield 0.1 + + community.unload_community() + + +class ScenarioScriptBase(ScriptBase): + # TODO: all bartercast references should be converted to some universal style + + def __init__(self, dispersy, logfile, **kargs): + ScriptBase.__init__(self, dispersy, **kargs) + + self._timestep = float(kargs.get('timestep', 1.0)) + self._stepcount = 0 + self._logfile = logfile + + self._my_name = None + self._my_address = None + + self._nr_peers = self.__get_nr_peers() + + if 'starting_timestamp' in kargs: + self._starting_timestamp = int(kargs['starting_timestamp']) + log(self._logfile, "Using %d as starting timestamp, will wait for %d seconds" % (self._starting_timestamp, self._starting_timestamp - int(time()))) + else: + self._starting_timestamp = int(time()) + log(self._logfile, "No starting_timestamp specified, using currentime") + + @property + def enable_wait_for_wan_address(self): + return False + + def get_peer_ip_port(self, peer_id): + assert isinstance(peer_id, int), type(peer_id) + + line_nr = 1 + for line in open('data/peers'): + if line_nr == peer_id: + ip, port = line.split() + return ip, int(port) + line_nr += 1 + + def __get_nr_peers(self): + line_nr = 0 + for line in open('data/peers'): + line_nr += 1 + + return line_nr + + def set_online(self): + """ Restore on_socket_endpoint and _send functions of + dispersy back to normal. + + This simulates a node coming online, since it's able to send + and receive messages. + """ + log(self._logfile, "Going online") + self._dispersy.on_incoming_packets = self.original_on_incoming_packets + self._dispersy.endpoint.send = self.original_send + + def set_offline(self): + """ Replace on_socket_endpoint and _sends functions of + dispersy with dummies + + This simulates a node going offline, since it's not able to + send or receive any messages + """ + def dummy_on_socket(*params): + return + + def dummy_send(*params): + return False + + log(self._logfile, "Going offline") + self._dispersy.on_socket_endpoint = dummy_on_socket + self._dispersy.endpoint.send = dummy_send + + def get_commands_from_fp(self, fp, step): + """ Return a list of commands from file handle for step + + Read lines from fp and return all the lines starting at + timestamp equal to step. If we read the end of the file, + without commands to return, then I return -1. 
+ """ + commands = [] + if fp: + while True: + cursor_position = fp.tell() + line = fp.readline().strip() + if not line: + if commands: + return commands + else: + return -1 + + cmdstep, command = line.split(' ', 1) + + cmdstep = int(cmdstep) + if cmdstep < step: + continue + elif cmdstep == step: + commands.append(command) + else: + # restore cursor position and break + fp.seek(cursor_position) + break + + return commands + + def sleep(self): + """ Calculate the time to sleep. + """ + # when should we start the next step? + expected_time = self._starting_timestamp + (self._timestep * (self._stepcount + 1)) + diff = expected_time - time() + + delay = max(0.0, diff) + return delay + + def log_desync(self, desync): + log(self._logfile, "sleep", desync=desync, stepcount=self._stepcount) + + def join_community(self, my_member): + raise NotImplementedError() + + def execute_scenario_cmds(self, commands): + raise NotImplementedError() + + def run(self): + self.add_testcase(self._run) + + def _run(self): + if __debug__: + log(self._logfile, "start-scenario-script") + + # + # Read our configuration from the peer.conf file + # name, ip, port, public and private key + # + with open('data/peer.conf') as fp: + self._my_name, ip, port, _ = fp.readline().split() + self._my_address = (ip, int(port)) + + log(self._logfile, "Read config done", my_name=self._my_name, my_address=self._my_address) + + # create my member + my_member = self._dispersy.get_new_member(u"low") + logger.info("-my member- %d %d %s", my_member.database_id, id(my_member), my_member.mid.encode("HEX")) + + self.original_on_incoming_packets = self._dispersy.on_incoming_packets + self.original_send = self._dispersy.endpoint.send + + # join the community with the newly created member + self._community = self.join_community(my_member) + logger.debug("Joined community %s", self._community._my_member) + + log("dispersy.log", "joined-community", time=time(), timestep=self._timestep, sync_response_limit=self._community.dispersy_sync_response_limit, starting_timestamp=self._starting_timestamp) + + self._stepcount = 0 + + # wait until we reach the starting time + self._dispersy.callback.register(self.do_steps, delay=self.sleep()) + self._dispersy.callback.register(self.do_log) + + # I finished the scenario execution. I should stay online + # until killed. Note that I can still sync and exchange + # messages with other peers. 
+        while True:
+            # wait to be killed
+            yield 100.0
+
+    def do_steps(self):
+        self._dispersy._statistics.reset()
+        scenario_fp = open('data/bartercast.log')
+        try:
+            availability_fp = open('data/availability.log')
+        except IOError:
+            availability_fp = None
+
+        self._stepcount += 1
+
+        # start the scenario
+        while True:
+            # get commands
+            scenario_cmds = self.get_commands_from_fp(scenario_fp, self._stepcount)
+            availability_cmds = self.get_commands_from_fp(availability_fp, self._stepcount)
+
+            # if there is a start in the availability_cmds then go
+            # online
+            if availability_cmds != -1 and 'start' in availability_cmds:
+                self.set_online()
+
+            # if there are scenario_cmds then execute them
+            if scenario_cmds != -1:
+                self.execute_scenario_cmds(scenario_cmds)
+
+            # if there is a stop in the availability_cmds then go offline
+            if availability_cmds != -1 and 'stop' in availability_cmds:
+                self.set_offline()
+
+            sleep = self.sleep()
+            if sleep < 0.5:
+                self.log_desync(1.0 - sleep)
+            yield sleep
+            self._stepcount += 1
+
+    def do_log(self):
+        def print_on_change(name, prev_dict, cur_dict):
+            new_values = {}
+            changed_values = {}
+            if cur_dict:
+                for key, value in cur_dict.iteritems():
+                    if not isinstance(key, (basestring, int, long)):
+                        key = str(key)
+
+                    key = make_valid_key(key)
+                    new_values[key] = value
+                    if prev_dict.get(key, None) != value:
+                        changed_values[key] = value
+
+            if changed_values:
+                log("dispersy.log", name, **changed_values)
+                return new_values
+            return prev_dict
+
+        prev_statistics = {}
+        prev_total_received = {}
+        prev_total_dropped = {}
+        prev_total_delayed = {}
+        prev_total_outgoing = {}
+        prev_total_fail = {}
+        prev_endpoint_recv = {}
+        prev_endpoint_send = {}
+        prev_created_messages = {}
+        prev_bootstrap_candidates = {}
+
+        while True:
+            # print statistics
+            self._dispersy.statistics.update()
+
+            bloom = [(c.classification, c.sync_bloom_reuse, c.sync_bloom_skip) for c in self._dispersy.statistics.communities]
+            candidates = [(c.classification, len(c.candidates) if c.candidates else 0) for c in self._dispersy.statistics.communities]
+            statistics_dict = {'received_count': self._dispersy.statistics.received_count,
+                               'total_up': self._dispersy.statistics.total_up,
+                               'total_down': self._dispersy.statistics.total_down,
+                               'drop_count': self._dispersy.statistics.drop_count,
+                               'total_send': self._dispersy.statistics.total_send,
+                               'cur_sendqueue': self._dispersy.statistics.cur_sendqueue,
+                               'delay_count': self._dispersy.statistics.delay_count,
+                               'delay_success': self._dispersy.statistics.delay_success,
+                               'delay_timeout': self._dispersy.statistics.delay_timeout,
+                               'walk_attempt': self._dispersy.statistics.walk_attempt,
+                               'walk_success': self._dispersy.statistics.walk_success,
+                               'walk_reset': self._dispersy.statistics.walk_reset,
+                               'conn_type': self._dispersy.statistics.connection_type,
+                               'bloom': bloom,
+                               'candidates': candidates}
+
+            prev_statistics = print_on_change("statistics", prev_statistics, statistics_dict)
+            prev_total_received = print_on_change("statistics-successful-messages", prev_total_received, self._dispersy.statistics.success)
+            prev_total_dropped = print_on_change("statistics-dropped-messages", prev_total_dropped, self._dispersy.statistics.drop)
+            prev_total_delayed = print_on_change("statistics-delayed-messages", prev_total_delayed, self._dispersy.statistics.delay)
+            prev_total_outgoing = print_on_change("statistics-outgoing-messages", prev_total_outgoing, self._dispersy.statistics.outgoing)
+            prev_total_fail = print_on_change("statistics-walk-fail", prev_total_fail,
self._dispersy.statistics.walk_fail) + prev_endpoint_recv = print_on_change("statistics-endpoint-recv", prev_endpoint_recv, self._dispersy.statistics.endpoint_recv) + prev_endpoint_send = print_on_change("statistics-endpoint-send", prev_endpoint_send, self._dispersy.statistics.endpoint_send) + prev_created_messages = print_on_change("statistics-created-messages", prev_created_messages, self._dispersy.statistics.created) + prev_bootstrap_candidates = print_on_change("statistics-bootstrap-candidates", prev_bootstrap_candidates, self._dispersy.statistics.bootstrap_candidates) + +# def callback_cmp(a, b): +# return cmp(self._dispersy.callback._statistics[a][0], self._dispersy.callback._statistics[b][0]) +# keys = self._dispersy.callback._statistics.keys() +# keys.sort(reverse = True) +# +# total_run = {} +# for key in keys[:10]: +# total_run[make_valid_key(key)] = self._dispersy.callback._statistics[key] +# if len(total_run) > 0: +# log("dispersy.log", "statistics-callback-run", **total_run) + +# stats = Conversion.debug_stats +# total = stats["encode-message"] +# nice_total = {'encoded':stats["-encode-count"], 'total':"%.2fs"%total} +# for key, value in sorted(stats.iteritems()): +# if key.startswith("encode") and not key == "encode-message" and total: +# nice_total[make_valid_key(key)] = "%7.2fs ~%5.1f%%" % (value, 100.0 * value / total) +# log("dispersy.log", "statistics-encode", **nice_total) +# +# total = stats["decode-message"] +# nice_total = {'decoded':stats["-decode-count"], 'total':"%.2fs"%total} +# for key, value in sorted(stats.iteritems()): +# if key.startswith("decode") and not key == "decode-message" and total: +# nice_total[make_valid_key(key)] = "%7.2fs ~%5.1f%%" % (value, 100.0 * value / total) +# log("dispersy.log", "statistics-decode", **nice_total) + + yield 1.0 diff -Nru tribler-6.2.0/Tribler/dispersy/singleton.py tribler-6.2.0/Tribler/dispersy/singleton.py --- tribler-6.2.0/Tribler/dispersy/singleton.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/singleton.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,294 @@ +# Python 2.5 features +from __future__ import with_statement + +""" +Helper class to easily and cleanly use singleton objects +""" + +from gc import get_referrers +from random import sample +from threading import RLock + +# update version information directly from SVN +from .revision import update_revision_information +update_revision_information("$HeadURL: http://svn.tribler.org/dispersy/trunk/singleton.py $", "$Revision: 31520 $") + +def cleanup(): + """ + Removes all singleton instances from existing Singleton subclasses + """ + def clear(cls): + if hasattr(cls, "_singleton_instance"): + delattr(cls, "_singleton_instance") + + if hasattr(cls, "_singleton_instances"): + delattr(cls, "_singleton_instances") + + for subcls in cls.__subclasses__(): + clear(subcls) + + clear(Singleton) + clear(Parameterized1Singleton) + +class Singleton(object): + """ + Usage: + + class Foo(Singleton): + def __init__(self, bar): + self.bar = bar + + # create singleton instance and set bar = 123 + foo = Foo.get_instance(123) + assert foo.bar == 123 + + # retrieve existing singleton instance, Foo.__init__ is NOT called again + foo = Foo.get_instance() + assert foo.bar == 123 + + # retrieve existing singleton instance, bar is NOT set to 456 + foo = Foo.get_instance(456) + assert foo.bar == 123 + """ + + _singleton_lock = RLock() + + @classmethod + def has_instance(cls, singleton_placeholder=None): + """ + Returns the existing singleton instance or None + 
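+        For example, Foo.has_instance() is None until Foo.get_instance(...) has been
+        called at least once.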
""" + if singleton_placeholder is None: + singleton_placeholder = cls + + with singleton_placeholder._singleton_lock: + if hasattr(singleton_placeholder, "_singleton_instance"): + return getattr(singleton_placeholder, "_singleton_instance") + + @classmethod + def get_instance(cls, *args, **kargs): + """ + Returns the existing singleton instance or create one + """ + if "singleton_placeholder" in kargs: + singleton_placeholder = kargs.pop("singleton_placeholder") + else: + singleton_placeholder = cls + + with singleton_placeholder._singleton_lock: + if not hasattr(singleton_placeholder, "_singleton_instance"): + setattr(singleton_placeholder, "_singleton_instance", cls(*args, **kargs)) + return getattr(singleton_placeholder, "_singleton_instance") + + @classmethod + def del_instance(cls, singleton_placeholder=None): + """ + Removes the existing singleton instance + """ + if singleton_placeholder is None: + singleton_placeholder = cls + + with singleton_placeholder._singleton_lock: + if hasattr(singleton_placeholder, "_singleton_instance"): + delattr(singleton_placeholder, "_singleton_instance") + + @classmethod + def referenced_instance(cls, singleton_placeholder=None): + """ + Returns True if this singleton instance is referenced. + + Warning: this method uses the GC.get_referrers to determine the number of references. This + method is very expensive to use! + """ + if singleton_placeholder is None: + singleton_placeholder = cls + + with singleton_placeholder._singleton_lock: + if hasattr(singleton_placeholder, "_singleton_instance"): + return len(get_referrers(getattr(cls, "_singleton_instance"))) > 1 + return False + +class Parameterized1Singleton(object): + """ + The required first parameter is used to uniquely identify a + singleton instance. Only one instance per first parameter will be + created. + + class Bar(Parameterized1Singleton): + def __init(self, name): + self.name = name + + a1 = Bar.get_instance('a', 'a') + a2 = Bar.get_instance('a', *whatever) + b1 = Bar.get_instance('b', 'b') + + assert a1 == a2 + assert a1 != b1 + assert a2 != b2 + + """ + + _singleton_lock = RLock() + + @classmethod + def has_instance(cls, arg): + """ + Returns the existing singleton instance or None + """ + assert hasattr(arg, "__hash__") + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances") and arg in getattr(cls, "_singleton_instances"): + return getattr(cls, "_singleton_instances")[arg] + + @classmethod + def get_instance(cls, *args, **kargs): + """ + Returns the existing singleton instance or create one + """ + assert len(args) > 0 + assert hasattr(args[0], "__hash__") + with cls._singleton_lock: + if not hasattr(cls, "_singleton_instances"): + setattr(cls, "_singleton_instances", {}) + if not args[0] in getattr(cls, "_singleton_instances"): + getattr(cls, "_singleton_instances")[args[0]] = cls(*args, **kargs) + return getattr(cls, "_singleton_instances")[args[0]] + + @classmethod + def del_instance(cls, arg): + """ + Removes the existing singleton instance + """ + assert hasattr(arg, "__hash__") + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances") and arg in getattr(cls, "_singleton_instances"): + del getattr(cls, "_singleton_instances")[arg] + if not getattr(cls, "_singleton_instances"): + delattr(cls, "_singleton_instances") + + @classmethod + def get_instances(cls): + """ + Returns a list with all the singleton instances. 
+ """ + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances"): + return getattr(cls, "_singleton_instances").values() + else: + return [] + + @classmethod + def referenced_instance(cls, arg): + """ + Returns True if this singleton instance is referenced. + + Warning: this method uses the GC.get_referrers to determine the number of references. This + method is very expensive to use! + """ + assert hasattr(arg, "__hash__") + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances") and arg in getattr(cls, "_singleton_instances"): + return len(get_referrers(getattr(cls, "_singleton_instances")[arg])) > 1 + return False + + @classmethod + def reference_instances(cls): + """ + Returns a list with (reference-count, instance) tuples. + + Warning: this method uses the GC.get_referrers to determine the number of references. This + method is very expensive to use! + """ + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances"): + return [(len(get_referrers(instance)) - 2, instance) for instance in getattr(cls, "_singleton_instances").itervalues()] + return [] + + @classmethod + def sample_reference_instances(cls, size): + """ + Returns a list with at most SIZE randomly chosen (reference-count, instance) tuples. + + Warning: this method uses the GC.get_referrers to determine the number of references. This + method is very expensive to use! + """ + assert isinstance(size, int) + assert 0 < size + with cls._singleton_lock: + if hasattr(cls, "_singleton_instances"): + instances = getattr(cls, "_singleton_instances") + if len(instances) < size: + # sample larger than population + return [(len(get_referrers(instance)) - 2, instance) for instance in instances.itervalues()] + + else: + return [(len(get_referrers(instance)) - 2, instance) for instance in (instances[arg] for arg in sample(instances, size))] + + return [] + +if __name__ == "__main__": + from .dprint import dprint + + def assert_(value, *args): + if not value: + raise AssertionError(*args) + + class Foo(Singleton): + def __init__(self, message): + self.message = message + + assert_(not Foo.referenced_instance()) + + foo = Foo.get_instance("foo") + assert_(foo.message == "foo") + assert_(foo.referenced_instance()) + + del foo + foo = Foo.get_instance("bar") + assert_(foo.message == "foo") + assert_(foo.referenced_instance()) + + del foo + assert_(not Foo.referenced_instance()) + + Foo.del_instance() + assert_(not Foo.referenced_instance()) + + # + # + # + + class Foo(Parameterized1Singleton): + def __init__(self, key, message): + self.message = message + + def __eq__(self, other): + return id(self) == id(other) if isinstance(other, Foo) else self.message == other + + assert_(not Foo.referenced_instance(1)) + assert_(Foo.reference_instances() == []) + assert_(Foo.sample_reference_instances(10) == []) + + Foo.get_instance(1, "foo") + assert_(Foo.reference_instances() == [(0, "foo")]) + assert_(Foo.sample_reference_instances(10) == [(0, "foo")]) + + foo = Foo.get_instance(1, "foo") + assert_(foo.message == "foo") + assert_(foo.referenced_instance(1)) + assert_(Foo.reference_instances() == [(1, "foo")]) + assert_(Foo.sample_reference_instances(10) == [(1, "foo")]) + + foo = Foo.get_instance(1, "bar") + assert_(foo.message == "foo") + assert_(foo.referenced_instance(1)) + del foo + + assert_(not Foo.referenced_instance(1)) + assert_(Foo.reference_instances() == [(0, "foo")]) + assert_(Foo.sample_reference_instances(10) == [(0, "foo")]) + + Foo.del_instance(1) + assert_(not Foo.referenced_instance(1)) + 
assert_(Foo.reference_instances() == [])
+    assert_(Foo.sample_reference_instances(10) == [])
diff -Nru tribler-6.2.0/Tribler/dispersy/statistics.py tribler-6.2.0/Tribler/dispersy/statistics.py
--- tribler-6.2.0/Tribler/dispersy/statistics.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/statistics.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,242 @@
+from time import time
+from collections import defaultdict
+
+
+class Statistics(object):
+
+    @staticmethod
+    def dict_inc(dictionary, key, value=1):
+        if dictionary is not None:
+            dictionary[key] += value
+
+    def update(self):
+        raise NotImplementedError()
+
+    def get_dict(self):
+        """
+        Returns a deep clone of SELF as a dictionary.
+
+        Warning: there is no recursion protection, if SELF contains self-references it will hang.
+        """
+        def clone(o):
+            if isinstance(o, Statistics):
+                return dict((key, clone(value))
+                            for key, value
+                            in o.__dict__.iteritems()
+                            if not key.startswith("_"))
+
+            if isinstance(o, dict):
+                return dict((clone(key), clone(value))
+                            for key, value
+                            in o.iteritems())
+
+            if isinstance(o, tuple):
+                return tuple(clone(value) for value in o)
+
+            if isinstance(o, list):
+                return [clone(value) for value in o]
+
+            return o
+        return clone(self)
+
+
+class DispersyStatistics(Statistics):
+
+    def __init__(self, dispersy):
+        self._dispersy = dispersy
+
+        self.communities = None
+        self.connection_type = None
+        self.database_version = dispersy.database.database_version
+        self.lan_address = None
+        self.start = self.timestamp = time()
+
+        # nr packets received
+        self.received_count = 0
+
+        # nr messages successfully handled
+        self.success_count = 0
+
+        # nr messages which were received, but dropped
+        self.drop_count = 0
+
+        # nr messages which were received, but delayed
+        self.delay_count = 0
+        # nr delay success and timeout (success + timeout) != count, as some messages are in between
+        self.delay_success = 0
+        self.delay_timeout = 0
+        # nr delay messages being sent
+        self.delay_send = 0
+
+        # nr sync messages created by this peer and sent using the _send method
+        self.created_count = 0
+
+        # nr of bytes up/down and packets sent as reported by the endpoint
+        self.total_down = 0
+        self.total_up = 0
+        self.total_send = 0
+
+        # size of the sendqueue
+        self.cur_sendqueue = 0
+
+        # nr of candidates introduced/stumbled upon
+        self.total_candidates_discovered = 0
+
+        self.walk_attempt = 0
+        self.walk_success = 0
+        self.walk_bootstrap_attempt = 0
+        self.walk_bootstrap_success = 0
+        self.walk_reset = 0
+        self.walk_invalid_response_identifier = 0
+
+        # nr of outgoing introduction-request messages with payload.advice == True
+        self.walk_advice_outgoing_request = 0
+        # nr of incoming introduction-response messages that introduce a candidate
+        self.walk_advice_incoming_response = 0
+        # nr of incoming introduction-response messages that introduce a previously unknown candidate
+        self.walk_advice_incoming_response_new = 0
+        # nr of incoming introduction-request messages with payload.advice == True
+        self.walk_advice_incoming_request = 0
+        # nr of outgoing introduction-response messages that introduce a candidate
+        self.walk_advice_outgoing_response = 0
+
+        self.wan_address = None
+        self.update()
+
+        self.enable_debug_statistics(__debug__)
+
+    def enable_debug_statistics(self, enable):
+        if self.are_debug_statistics_enabled() != enable or not hasattr(self, 'drop'):
+            if enable:
+                self.drop = defaultdict(int)
+                self.delay = defaultdict(int)
+                self.success = defaultdict(int)
+                self.outgoing = defaultdict(int)
+                self.created =
defaultdict(int) + self.walk_fail = defaultdict(int) + self.attachment = defaultdict(int) + self.database = defaultdict(int) + self.endpoint_recv = defaultdict(int) + self.endpoint_send = defaultdict(int) + self.bootstrap_candidates = defaultdict(int) + + # SOURCE:INTRODUCED:COUNT nested dictionary + self.received_introductions = defaultdict(lambda: defaultdict(int)) + + # DESTINATION:COUNT dictionary + self.outgoing_introduction_request = defaultdict(int) + + # SOURCE:COUNT dictionary + self.incoming_introduction_response = defaultdict(int) + + else: + self.drop = None + self.delay = None + self.success = None + self.created = None + self.outgoing = None + self.walk_fail = None + self.attachment = None + self.database = None + self.endpoint_recv = None + self.endpoint_send = None + self.bootstrap_candidates = None + self.received_introductions = None + self.outgoing_introduction_request = None + self.incoming_introduction_response = None + + def are_debug_statistics_enabled(self): + return getattr(self, 'drop', None) != None + + def update(self, database=False): + self.timestamp = time() + self.connection_type = self._dispersy.connection_type + self.lan_address = self._dispersy.lan_address + self.wan_address = self._dispersy.wan_address + + self.total_down = self._dispersy.endpoint.total_down + self.total_up = self._dispersy.endpoint.total_up + self.total_send = self._dispersy.endpoint.total_send + self.cur_sendqueue = self._dispersy.endpoint.cur_sendqueue + + self.communities = [community.statistics for community in self._dispersy.get_communities()] + for community in self.communities: + community.update(database=database) + + def reset(self): + self.success_count = 0 + self.drop_count = 0 + self.delay_count = 0 + self.delay_send = 0 + self.delay_success = 0 + self.delay_timeout = 0 + self.received_count = 0 + self.created_count = 0 + + self._dispersy.endpoint.reset_statistics() + self.total_down = self._dispersy.endpoint.total_down + self.total_up = self._dispersy.endpoint.total_up + self.total_send = self._dispersy.endpoint.total_send + self.cur_sendqueue = self._dispersy.endpoint.cur_sendqueue + self.start = self.timestamp = time() + + self.walk_attempt = 0 + self.walk_reset = 0 + self.walk_success = 0 + self.walk_bootstrap_attempt = 0 + self.walk_bootstrap_success = 0 + + if self.are_debug_statistics_enabled(): + self.drop = defaultdict(int) + self.delay = defaultdict(int) + self.success = defaultdict(int) + self.outgoing = defaultdict(int) + self.created = defaultdict(int) + self.walk_fail = defaultdict(int) + self.attachment = defaultdict(int) + self.database = defaultdict(int) + self.endpoint_recv = defaultdict(int) + self.endpoint_send = defaultdict(int) + self.bootstrap_candidates = defaultdict(int) + self.received_introductions = defaultdict(lambda: defaultdict(int)) + self.outgoing_introduction_request = defaultdict(int) + self.incoming_introduction_response = defaultdict(int) + + +class CommunityStatistics(Statistics): + + def __init__(self, community): + self._community = community + self.acceptable_global_time = None + self.candidates = None + self.cid = community.cid + self.classification = community.get_classification() + self.database = dict() + self.database_id = community.database_id + self.dispersy_acceptable_global_time_range = None + self.dispersy_enable_candidate_walker = None + self.dispersy_enable_candidate_walker_responses = None + self.global_time = None + self.hex_cid = community.cid.encode("HEX") + self.hex_mid = community.my_member.mid.encode("HEX") + 
self.mid = community.my_member.mid
+        self.sync_bloom_new = 0
+        self.sync_bloom_reuse = 0
+        self.sync_bloom_send = 0
+        self.sync_bloom_skip = 0
+        self.update()
+
+    def update(self, database=False):
+        self.acceptable_global_time = self._community.acceptable_global_time
+        self.dispersy_acceptable_global_time_range = self._community.dispersy_acceptable_global_time_range
+        self.dispersy_enable_candidate_walker = self._community.dispersy_enable_candidate_walker
+        self.dispersy_enable_candidate_walker_responses = self._community.dispersy_enable_candidate_walker_responses
+        self.global_time = self._community.global_time
+        now = time()
+        self.candidates = [(candidate.lan_address, candidate.wan_address, candidate.global_time)
+                           for candidate
+                           in self._community.candidates.itervalues() if candidate.get_category(now) in [u'walk', u'stumble', u'intro']]
+        if database:
+            self.database = dict(self._community.dispersy.database.execute(u"SELECT meta_message.name, COUNT(sync.id) FROM sync JOIN meta_message ON meta_message.id = sync.meta_message WHERE sync.community = ? GROUP BY sync.meta_message", (self._community.database_id,)))
+        else:
+            self.database = dict()
diff -Nru tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/community.py tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/community.py
--- tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/community.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/community.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,192 @@
+import logging
+logger = logging.getLogger(__name__)
+
+from ...authentication import DoubleMemberAuthentication, MemberAuthentication
+from ...candidate import Candidate
+from ...community import Community, HardKilledCommunity
+from ...conversion import DefaultConversion
+from ...destination import CommunityDestination
+from ...distribution import DirectDistribution, FullSyncDistribution, LastSyncDistribution, GlobalTimePruning
+from ...message import Message, DelayMessageByProof
+from ...resolution import PublicResolution, LinearResolution, DynamicResolution
+
+from .payload import TextPayload
+from .conversion import DebugCommunityConversion
+
+
+class DebugCommunity(Community):
+
+    """
+    DebugCommunity is used to debug Dispersy related messages and policies.
+    """
+    @property
+    def my_candidate(self):
+        return Candidate(self._dispersy.lan_address, False)
+
+    @property
+    def dispersy_candidate_request_initial_delay(self):
+        # disable candidate requests
+        return 0.0
+
+    @property
+    def dispersy_sync_initial_delay(self):
+        # disable sync
+        return 0.0
+
+    def initiate_conversions(self):
+        return [DefaultConversion(self), DebugCommunityConversion(self)]
+
+    #
+    # helper methods to check database status
+    #
+
+    def fetch_packets(self, *message_names):
+        return [str(packet) for packet, in list(self._dispersy.database.execute(u"SELECT packet FROM sync WHERE meta_message IN (" + ", ".join("?" * len(message_names)) + ") ORDER BY global_time, packet",
+                                                                                [self.get_meta_message(name).database_id for name in message_names]))]
+
+    def fetch_messages(self, *message_names):
+        """
+        Fetches all packets for MESSAGE_NAMES from the database and converts them into
+        Message.Implementation instances.
+        """
+        return self._dispersy.convert_packets_to_messages(self.fetch_packets(*message_names), community=self, verify=False)
+
+    def delete_messages(self, *message_names):
+        """
+        Deletes all packets for MESSAGE_NAMES from the database.  Returns the number of packets
+        removed.
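+
+        For example (illustrative), self.delete_messages(u"full-sync-text", u"ASC-text")
+        removes every stored packet for those two meta messages at once.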
+ """ + self._dispersy.database.execute(u"DELETE FROM sync WHERE meta_message IN (" + ", ".join("?" * len(message_names)) + ")", + [self.get_meta_message(name).database_id for name in message_names]) + return self._dispersy.database.changes + + def initiate_meta_messages(self): + return [Message(self, u"last-1-test", MemberAuthentication(), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=1), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"last-9-test", MemberAuthentication(), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=9), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"last-1-doublemember-text", DoubleMemberAuthentication(allow_signature_func=self.allow_signature_func), PublicResolution(), LastSyncDistribution(synchronization_direction=u"ASC", priority=128, history_size=1), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"double-signed-text", DoubleMemberAuthentication(allow_signature_func=self.allow_double_signed_text), PublicResolution(), DirectDistribution(), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"full-sync-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + Message(self, u"ASC-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"DESC-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"DESC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"protected-full-sync-text", MemberAuthentication(), LinearResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text), + Message(self, u"dynamic-resolution-text", MemberAuthentication(), DynamicResolution(PublicResolution(), LinearResolution()), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + Message(self, u"sequence-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=True, synchronization_direction=u"ASC", priority=128), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + Message(self, u"full-sync-global-time-pruning-text", MemberAuthentication(), PublicResolution(), FullSyncDistribution(enable_sequence_number=False, synchronization_direction=u"ASC", priority=128, pruning=GlobalTimePruning(10, 20)), CommunityDestination(node_count=10), TextPayload(), self.check_text, self.on_text, self.undo_text), + ] + + def create_full_sync_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"full-sync-text") + message = meta.impl(authentication=(self._my_member,), + 
distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + def create_full_sync_global_time_pruning_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"full-sync-global-time-pruning-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # double-signed-text + # + + def create_double_signed_text(self, text, candidate, member, response_func, response_args=(), timeout=10.0, forward=True): + assert isinstance(candidate, Candidate) + meta = self.get_meta_message(u"double-signed-text") + message = meta.impl(authentication=([self._my_member, member],), + distribution=(self.global_time,), + payload=(text,)) + return self.create_dispersy_signature_request(candidate, message, response_func, response_args, timeout, forward) + + def allow_double_signed_text(self, message): + """ + Received a request to sign MESSAGE. + + Must return either: a. the same message, b. a modified version of message, or c. None. + """ + logger.debug("%s \"%s\"", message, message.payload.text) + assert message.payload.text in ("Allow=True", "Allow=False") + if message.payload.text == "Allow=True": + return message + + # + # last-1-doublemember-text + # + def allow_signature_func(self, message): + return True + + # + # protected-full-sync-text + # + def create_protected_full_sync_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"protected-full-sync-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # dynamic-resolution-text + # + def create_dynamic_resolution_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"dynamic-resolution-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(),), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # sequence-text + # + def create_sequence_text(self, text, store=True, update=True, forward=True): + meta = self.get_meta_message(u"sequence-text") + message = meta.impl(authentication=(self._my_member,), + distribution=(self.claim_global_time(), meta.distribution.claim_sequence_number()), + payload=(text,)) + self._dispersy.store_update_forward([message], store, update, forward) + return message + + # + # any text-payload + # + + def check_text(self, messages): + for message in messages: + allowed, proof = self._timeline.check(message) + if allowed: + yield message + else: + yield DelayMessageByProof(message) + + def on_text(self, messages): + """ + Received a text message. + """ + for message in messages: + if not "Dprint=False" in message.payload.text: + logger.debug("%s \"%s\" @%d", message, message.payload.text, message.distribution.global_time) + + def undo_text(self, descriptors): + """ + Received an undo for a text message. 
+ """ + for member, global_time, packet in descriptors: + message = packet.load_message() + logger.debug("undo \"%s\" @%d", message.payload.text, global_time) + + def dispersy_cleanup_community(self, message): + if message.payload.is_soft_kill: + raise NotImplementedError() + + elif message.payload.is_hard_kill: + return HardKilledDebugCommunity + + +class HardKilledDebugCommunity(DebugCommunity, HardKilledCommunity): + pass diff -Nru tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/conversion.py tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/conversion.py --- tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/conversion.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/conversion.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,54 @@ +from struct import pack, unpack_from + +from ...conversion import BinaryConversion +from ...message import DropPacket + + +class DebugCommunityConversion(BinaryConversion): + + """ + DebugCommunityConversion is used to convert messages to and from binary while performing unittests. + """ + def __init__(self, community, version="\x02"): + assert isinstance(version, str), type(version) + assert len(version) == 1, len(version) + super(DebugCommunityConversion, self).__init__(community, version) + # we use higher message identifiers to reduce the chance that we clash with either Dispersy (255 and down) and + # normal communities (1 and up). + self.define_meta_message(chr(101), community.get_meta_message(u"last-1-test"), self._encode_text, self._decode_text) + self.define_meta_message(chr(102), community.get_meta_message(u"last-9-test"), self._encode_text, self._decode_text) + self.define_meta_message(chr(103), community.get_meta_message(u"double-signed-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(104), community.get_meta_message(u"full-sync-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(105), community.get_meta_message(u"ASC-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(106), community.get_meta_message(u"DESC-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(107), community.get_meta_message(u"last-1-doublemember-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(108), community.get_meta_message(u"protected-full-sync-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(109), community.get_meta_message(u"dynamic-resolution-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(110), community.get_meta_message(u"sequence-text"), self._encode_text, self._decode_text) + self.define_meta_message(chr(111), community.get_meta_message(u"full-sync-global-time-pruning-text"), self._encode_text, self._decode_text) + + def _encode_text(self, message): + """ + Encode a text message. + Returns one byte containing len(message.payload.text) followed by this text. + """ + return pack("!B", len(message.payload.text)), message.payload.text + + def _decode_text(self, placeholder, offset, data): + """ + Decode a text message. + Returns the new offset and a payload implementation. 
+ """ + if len(data) < offset + 1: + raise DropPacket("Insufficient packet size") + + text_length, = unpack_from("!B", data, offset) + offset += 1 + + if len(data) < offset + text_length: + raise DropPacket("Insufficient packet size") + + text = data[offset:offset + text_length] + offset += text_length + + return offset, placeholder.meta.payload.implement(text) diff -Nru tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/node.py tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/node.py --- tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/node.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/node.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,684 @@ +import logging +logger = logging.getLogger(__name__) + +import socket +from time import time, sleep + +from ...bloomfilter import BloomFilter +from ...candidate import Candidate +from ...community import Community +from ...crypto import ec_generate_key, ec_to_public_bin, ec_to_private_bin +from ...member import Member +from ...message import Message +from ...resolution import PublicResolution, LinearResolution + + +class DebugNode(object): + + """ + DebugNode is used to represent an external node/peer while performing unittests. + + One or more debug nodes are generally made, for each unittest, as follows: + + # create external node + node = DebugNode(community) + node.init_socket() + node.init_my_member() + """ + + _socket_range = (8000, 8999) + _socket_pool = {} + _socket_counter = 0 + + def __init__(self, community): + assert community is None or isinstance(community, Community), type(community) + super(DebugNode, self).__init__() + self._community = community + self._dispersy = community.dispersy if community else None + self._socket = None + self._tunnel = False + self._my_member = None + + @property + def community(self): + """ + The community for this node. + + Returns None unless self.set_community() has been called. + """ + return self._community + + @property + def socket(self): + """ + The python socket.socket instance for this node. + + Will fail unless self.init_socket() has been called. + """ + return self._socket + + @property + def tunnel(self): + """ + True when this node is behind a tunnel. + + Will fail unless self.init_socket() has been called. + """ + return self._tunnel + + @property + def lan_address(self): + """ + The LAN address for this node. + + Will fail unless self.init_socket() has been called. + """ + _, port = self._socket.getsockname() + return ("127.0.0.1", port) + + @property + def wan_address(self): + """ + The WAN address for this node. + + Will fail unless self.init_socket() has been called. + """ + if self._community.dispersy: + host = self._community.dispersy.wan_address[0] + + if host == "0.0.0.0": + host = self._community.dispersy.lan_address[0] + + else: + host = "0.0.0.0" + + _, port = self._socket.getsockname() + return (host, port) + + @property + def my_member(self): + """ + The member for this node. + + Returns None unless self.init_my_member() has been called. + """ + return self._my_member + + @property + def candidate(self): + """ + A Candidate instance for this node. + + Will fail unless self.init_socket() has been called. + """ + return Candidate(self.lan_address, self.tunnel) + + def set_community(self, community): + """ + Set the community that this node is associated to. 
+ """ + assert community is None or isinstance(community, Community), type(community) + self._community = community + if community: + self._dispersy = community.dispersy + + def init_socket(self, tunnel=False): + """ + Create a socket.socket instance for this node. + + The port will be chosen from self._socket_range. When there are too many DebugNodes the + socket.socket instances will be reused. Hence it is possible to emulate many external + nodes. + """ + assert isinstance(tunnel, bool) + assert self._socket is None + port = self._socket_range[0] + self._socket_counter % (self._socket_range[1] - self._socket_range[0]) + type(self)._socket_counter += 1 + + if port in self._socket_pool: + logger.warning("reuse socket %d", port) + + else: + s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 870400) + s.setblocking(False) + s.settimeout(0.0) + while True: + try: + s.bind(("localhost", port)) + except socket.error: + port = self._socket_range[0] + self._socket_counter % (self._socket_range[1] - self._socket_range[0]) + type(self)._socket_counter += 1 + continue + break + + self._socket_pool[port] = s + logger.debug("create socket %d", port) + + self._socket = self._socket_pool[port] + self._tunnel = tunnel + + def init_my_member(self, bits=None, sync_with_database=None, candidate=True, identity=True): + """ + Create a Member instance for this node. + + The member will be created without being stored in the Dispersy member cache. Hence, when + this member communicates with the associated community the community will create yet another + Member instance. However, these two Member instances will share the same database + identifier! + + BITS is deprecated and should no longer be used. + SYNC_WITH_DATABASE is deprecated and should no longer be used. + + When IDENTITY is True the community will immediately be given a dispersy-identity message + for this node. The identity message will be given global-time 2, and will be encoded using + the associated community. + + When CANDIDATE is True the community will immediately be told that this node exist using a + dispersy-introduction-request message. 
+ """ + assert bits is None, "The parameter bits is deprecated and must be None" + assert sync_with_database is None, "The parameter sync_with_database is deprecated and must be None" + + ec = ec_generate_key(u"low") + self._my_member = Member(self._dispersy, ec_to_public_bin(ec), ec_to_private_bin(ec)) + + # remove the private key from the database to ensure DebugCommunity has no access to it + self._dispersy.database.execute(u"DELETE FROM private_key WHERE member = ?", (self._my_member.database_id,)) + assert self._dispersy.database.changes == 1 + + if identity: + # update identity information + assert self._socket, "Socket needs to be set to candidate" + assert self._community, "Community needs to be set to candidate" + message = self.create_dispersy_identity(2) + self.give_message(message) + + if candidate: + # update candidate information + assert self._socket, "Socket needs to be set to candidate" + assert self._community, "Community needs to be set to candidate" + message = self.create_dispersy_introduction_request(self._community.my_candidate, self.lan_address, self.wan_address, False, u"unknown", None, 1, 1) + self.give_message(message) + sleep(0.1) + self.receive_message(message_names=[u"dispersy-introduction-response"]) + + def encode_message(self, message): + """ + Returns the raw packet after MESSAGE is encoded using the associated community. + """ + assert isinstance(message, Message.Implementation) + tmp_member = self._community._my_member + self._community._my_member = self._my_member + try: + return self._community.get_conversion_for_message(message).encode_message(message) + finally: + self._community._my_member = tmp_member + + def give_packet(self, packet, verbose=False, cache=False, tunnel=None): + """ + Give PACKET directly to Dispersy on_incoming_packets. + Returns PACKET + """ + assert isinstance(packet, str) + assert isinstance(verbose, bool) + assert isinstance(cache, bool) + assert tunnel is None, "TUNNEL property is set using init_socket(...)" + if verbose: + logger.debug("giving %d bytes", len(packet)) + candidate = Candidate(self.lan_address, self._tunnel) + self._dispersy.on_incoming_packets([(candidate, packet)], cache=cache, timestamp=time()) + return packet + + def give_packets(self, packets, verbose=False, cache=False, tunnel=None): + """ + Give multiple PACKETS directly to Dispersy on_incoming_packets. + Returns PACKETS + """ + assert isinstance(packets, list) + assert all(isinstance(packet, str) for packet in packets) + assert isinstance(verbose, bool) + assert isinstance(cache, bool) + assert tunnel is None, "TUNNEL property is set using init_socket(...)" + if verbose: + logger.debug("giving %d bytes", sum(len(packet) for packet in packets)) + candidate = Candidate(self.lan_address, self._tunnel) + self._dispersy.on_incoming_packets([(candidate, packet) for packet in packets], cache=cache, timestamp=time()) + return packets + + def give_message(self, message, verbose=False, cache=False, tunnel=None): + """ + Give MESSAGE directly to Dispersy on_incoming_packets after it is encoded. 
+ Returns MESSAGE + """ + assert isinstance(message, Message.Implementation) + assert isinstance(verbose, bool) + assert isinstance(cache, bool) + assert tunnel is None, "TUNNEL property is set using init_socket(...)" + packet = message.packet if message.packet else self.encode_message(message) + if verbose: + logger.debug("giving %s (%d bytes)", message.name, len(packet)) + self.give_packet(packet, verbose=verbose, cache=cache) + return message + + def give_messages(self, messages, verbose=False, cache=False, tunnel=None): + """ + Give multiple MESSAGES directly to Dispersy on_incoming_packets after they are encoded. + Returns MESSAGES + """ + assert isinstance(messages, list) + assert all(isinstance(message, Message.Implementation) for message in messages) + assert isinstance(verbose, bool) + assert isinstance(cache, bool) + assert tunnel is None, "TUNNEL property is set using init_socket(...)" + packets = [message.packet if message.packet else self.encode_message(message) for message in messages] + if verbose: + logger.debug("giving %d messages (%d bytes)", len(messages), sum(len(packet) for packet in packets)) + self.give_packets(packets, verbose=verbose, cache=cache) + return messages + + def send_packet(self, packet, address, verbose=False): + """ + Sends PACKET to ADDRESS using the node's socket. + Returns PACKET + """ + assert isinstance(packet, str) + assert isinstance(address, tuple) + assert isinstance(verbose, bool) + if verbose: + logger.debug("%d bytes to %s:%d", len(packet), address[0], address[1]) + self._socket.sendto(packet, address) + return packet + + def send_message(self, message, address, verbose=False): + """ + Sends MESSAGE to ADDRESS using the node's socket after it is encoded. + Returns MESSAGE + """ + assert isinstance(message, Message.Implementation) + assert isinstance(address, tuple) + assert isinstance(verbose, bool) + self.encode_message(message) + if verbose: + logger.debug("%s (%d bytes) to %s:%d", message.name, len(message.packet), address[0], address[1]) + self.send_packet(message.packet, address) + return message + + def drop_packets(self, verbose=False): + """ + Discard all packets on the node's socket. + """ + while True: + try: + packet, address = self._socket.recvfrom(10240) + except socket.error: + break + + if verbose: + logger.debug("dropped %d bytes from %s:%d", len(packet), address[0], address[1]) + + def receive_packet(self, timeout=None, addresses=None, packets=None): + """ + Returns the first matching (candidate, packet) tuple from incoming UDP packets. + + TIMEOUT is deprecated and should no longer be used. + + ADDRESSES must be None or a list of address tuples. When it is a list of addresses, only + UDP packets from ADDRESSES will be returned. + + PACKETS must be None or a list of packets. When it is a list of packets, only those PACKETS + will be returned. + + Will raise a socket exception when no matching packets are available.
+ """ + assert timeout is None, "The parameter TIMEOUT is deprecated and must be None" + assert addresses is None or isinstance(addresses, list) + assert addresses is None or all(isinstance(address, tuple) for address in addresses) + assert packets is None or isinstance(packets, list) + assert packets is None or all(isinstance(packet, str) for packet in packets) + + while True: + try: + packet, address = self._socket.recvfrom(10240) + except: + logger.debug("No more packets") + raise + + if not (addresses is None or address in addresses or (address[0] == "127.0.0.1" and ("0.0.0.0", address[1]) in addresses)): + logger.debug("Ignored %d bytes from %s:%d", len(packet), address[0], address[1]) + continue + + if not (packets is None or packet in packets): + logger.debug("Ignored %d bytes from %s:%d", len(packet), address[0], address[1]) + continue + + if packet.startswith("ffffffff".decode("HEX")): + tunnel = True + packet = packet[4:] + else: + tunnel = False + + candidate = Candidate(address, tunnel) + logger.debug("%d bytes from %s", len(packet), candidate) + return candidate, packet + + def receive_packets(self, timeout=None, addresses=None, packets=None): + """ + Returns a list with (candidate, packet) tuples from all matching incoming UDP packets. + + TIMEOUT is deprecated and should no longer be used. + + ADDRESSES must be None or a list of address tuples. When it is a list of addresses, only + UDP packets from ADDRESSES will be returned. + + PACKETS must be None or a list of packets. When it is a list of packets, only those PACKETS + will be returned. + """ + packets_ = [] + while True: + try: + packets_.append(self.receive_packet(timeout, addresses, packets)) + except socket.error: + break + return packets_ + + def receive_message(self, timeout=None, addresses=None, packets=None, message_names=None, payload_types=None, distributions=None, destinations=None): + """ + Returns the first matching (candidate, message) tuple from incoming UDP packets. + + TIMEOUT is deprecated and should no longer be used. + + ADDRESSES must be None or a list of address tuples. When it is a list of addresses, only + UDP packets from ADDRESSES will be returned. + + PACKETS must be None or a list of packets. When it is a list of packets, only those PACKETS + will be returned. + + MESSAGE_NAMES must be None or a list of message names. When it is a list of names, only + messages with this name will be returned. + + PAYLOAD_TYPES is deprecated and should no longer be used. + DISTRIBUTIONS is deprecated and should no longer be used. + DESTINATIONS is deprecated and should no longer be used. + + Will raise a socket exception when no matching packets are available. 
+ """ + assert timeout is None, "The parameter TIMEOUT is deprecated and must be None" + assert isinstance(message_names, (type(None), list)) + assert payload_types is None, "The parameter PAYLOAD_TYPES is deprecated and must be None" + assert distributions is None, "The parameter DISTRIBUTIONS is deprecated and must be None" + assert destinations is None, "The parameter DESTINATIONS is deprecated and must be None" + + while True: + candidate, packet = self.receive_packet(timeout, addresses, packets) + + try: + message = self._community.get_conversion_for_packet(packet).decode_message(candidate, packet) + except KeyError as exception: + logger.exception("Ignored %s", exception) + continue + + if not (message_names is None or message.name in message_names): + logger.debug("Ignored %s (%d bytes) from %s", message.name, len(packet), candidate) + continue + + logger.debug("%s (%d bytes) from %s", message.name, len(packet), candidate) + return candidate, message + + def receive_messages(self, timeout=None, addresses=None, packets=None, message_names=None, payload_types=None, distributions=None, destinations=None): + """ + Returns a list with (candidate, message) tuples from all matching incoming UDP packets. + + TIMEOUT is deprecated and should no longer be used. + + ADDRESSES must be None or a list of address tuples. When it is a list of addresses, only + UDP packets from ADDRESSES will be returned. + + PACKETS must be None or a list of packets. When it is a list of packets, only those PACKETS + will be returned. + + MESSAGE_NAMES must be None or a list of message names. When it is a list of names, only + messages with this name will be returned. + + PAYLOAD_TYPES is deprecated and should no longer be used. + DISTRIBUTIONS is deprecated and should no longer be used. + DESTINATIONS is deprecated and should no longer be used. + """ + messages = [] + while True: + try: + messages.append(self.receive_message(timeout, addresses, packets, message_names, payload_types, distributions, destinations)) + except socket.error: + break + return messages + + def create_dispersy_authorize(self, permission_triplets, sequence_number, global_time): + """ + Returns a new dispersy-authorize message. + """ + meta = self._community.get_meta_message(u"dispersy-authorize") + return meta.impl(authentication=(self._my_member,), + distribution=(global_time, sequence_number), + payload=(permission_triplets,)) + + def create_dispersy_identity(self, global_time): + """ + Returns a new dispersy-identity message. + """ + assert isinstance(global_time, (int, long)) + meta = self._community.get_meta_message(u"dispersy-identity") + return meta.impl(authentication=(self._my_member,), distribution=(global_time,)) + + def create_dispersy_undo_own(self, message, global_time, sequence_number): + """ + Returns a new dispersy-undo-own message. + """ + assert message.authentication.member == self._my_member, "use create_dispersy_undo_other" + meta = self._community.get_meta_message(u"dispersy-undo-own") + return meta.impl(authentication=(self._my_member,), + distribution=(global_time, sequence_number), + payload=(message.authentication.member, message.distribution.global_time, message)) + + def create_dispersy_undo_other(self, message, global_time, sequence_number): + """ + Returns a new dispersy-undo-other message. 
+ """ + assert message.authentication.member != self._my_member, "use create_dispersy_undo_own" + meta = self._community.get_meta_message(u"dispersy-undo-other") + return meta.impl(authentication=(self._my_member,), + distribution=(global_time, sequence_number), + payload=(message.authentication.member, message.distribution.global_time, message)) + + def create_dispersy_missing_sequence(self, missing_member, missing_message, missing_sequence_low, missing_sequence_high, global_time, destination_candidate): + """ + Returns a new dispersy-missing-sequence message. + """ + assert isinstance(missing_member, Member) + assert isinstance(missing_message, Message) + assert isinstance(missing_sequence_low, (int, long)) + assert isinstance(missing_sequence_high, (int, long)) + assert isinstance(global_time, (int, long)) + assert isinstance(destination_candidate, Candidate) + meta = self._community.get_meta_message(u"dispersy-missing-sequence") + return meta.impl(distribution=(global_time,), + destination=(destination_candidate,), + payload=(missing_member, missing_message, missing_sequence_low, missing_sequence_high)) + + def create_dispersy_signature_request(self, identifier, message, global_time): + """ + Returns a new dispersy-signature-request message. + """ + assert isinstance(message, Message.Implementation) + assert isinstance(global_time, (int, long)) + meta = self._community.get_meta_message(u"dispersy-signature-request") + return meta.impl(distribution=(global_time,), payload=(identifier, message,)) + + def create_dispersy_signature_response(self, identifier, message, global_time, destination_candidate): + """ + Returns a new dispersy-missing-response message. + """ + isinstance(identifier, (int, long)) + isinstance(message, Message.Implementation) + assert isinstance(global_time, (int, long)) + assert isinstance(destination_candidate, Candidate) + meta = self._community.get_meta_message(u"dispersy-signature-response") + return meta.impl(distribution=(global_time,), + destination=(destination_candidate,), + payload=(identifier, message)) + + def create_dispersy_missing_message(self, missing_member, missing_global_times, global_time, destination_candidate): + """ + Returns a new dispersy-missing-message message. + """ + assert isinstance(missing_member, Member) + assert isinstance(missing_global_times, list) + assert all(isinstance(global_time, (int, long)) for global_time in missing_global_times) + assert isinstance(global_time, (int, long)) + assert isinstance(destination_candidate, Candidate) + meta = self._community.get_meta_message(u"dispersy-missing-message") + return meta.impl(distribution=(global_time,), + destination=(destination_candidate,), + payload=(missing_member, missing_global_times)) + + def create_dispersy_missing_proof(self, member, global_time): + """ + Returns a new dispersy-missing-proof message. + """ + assert isinstance(member, Member) + assert isinstance(global_time, (int, long)) + assert global_time > 0 + meta = self._community.get_meta_message(u"dispersy-missing-proof") + return meta.impl(distribution=(global_time,), payload=(member, global_time)) + + def create_dispersy_introduction_request(self, destination, source_lan, source_wan, advice, connection_type, sync, identifier, global_time): + """ + Returns a new dispersy-introduction-request message. 
+ """ + assert isinstance(destination, Candidate), type(destination) + assert isinstance(source_lan, tuple), type(source_lan) + assert isinstance(source_wan, tuple), type(source_wan) + assert isinstance(advice, bool), type(advice) + assert isinstance(connection_type, unicode), type(connection_type) + if sync: + assert isinstance(sync, tuple) + assert len(sync) == 5 + time_low, time_high, modulo, offset, bloom_packets = sync + assert isinstance(time_low, (int, long)) + assert isinstance(time_high, (int, long)) + assert isinstance(modulo, int) + assert isinstance(offset, int) + assert isinstance(bloom_packets, list) + assert all(isinstance(packet, str) for packet in bloom_packets) + bloom_filter = BloomFilter(512 * 8, 0.001, prefix="x") + for packet in bloom_packets: + bloom_filter.add(packet) + sync = (time_low, time_high, modulo, offset, bloom_filter) + assert isinstance(identifier, int), type(identifier) + assert isinstance(global_time, (int, long)) + meta = self._community.get_meta_message(u"dispersy-introduction-request") + return meta.impl(authentication=(self._my_member,), + destination=(destination,), + distribution=(global_time,), + payload=(destination.sock_addr, source_lan, source_wan, advice, connection_type, sync, identifier)) + + def _create_text(self, message_name, text, global_time, resolution=(), destination=()): + assert isinstance(message_name, unicode) + assert isinstance(text, str) + assert isinstance(global_time, (int, long)) + assert isinstance(resolution, tuple) + assert isinstance(destination, tuple) + meta = self._community.get_meta_message(message_name) + return meta.impl(authentication=(self._my_member,), + resolution=resolution, + distribution=(global_time,), + destination=destination, + payload=(text,)) + + def _create_sequence_text(self, message_name, text, global_time, sequence_number): + assert isinstance(message_name, unicode) + assert isinstance(text, str) + assert isinstance(global_time, (int, long)) + assert isinstance(sequence_number, (int, long)) + meta = self._community.get_meta_message(message_name) + return meta.impl(authentication=(self._my_member,), + distribution=(global_time, sequence_number), + payload=(text,)) + + def _create_doublemember_text(self, message_name, other, text, global_time, sign): + assert isinstance(message_name, unicode) + assert isinstance(other, Member) + assert not self._my_member == other + assert isinstance(text, str) + assert isinstance(global_time, (int, long)) + meta = self._community.get_meta_message(message_name) + return meta.impl(authentication=([self._my_member, other],), + distribution=(global_time,), + payload=(text,), + sign=sign) + + def create_last_1_test(self, text, global_time): + """ + Returns a new last-1-test message. + """ + return self._create_text(u"last-1-test", text, global_time) + + def create_last_9_test(self, text, global_time): + """ + Returns a new last-9-test message. + """ + return self._create_text(u"last-9-test", text, global_time) + + def create_last_1_doublemember_text(self, other, text, global_time, sign): + """ + Returns a new last-1-doublemember-text message. + """ + return self._create_doublemember_text(u"last-1-doublemember-text", other, text, global_time, sign) + + def create_double_signed_text(self, other, text, global_time, sign): + """ + Returns a new double-signed-text message. + """ + return self._create_doublemember_text(u"double-signed-text", other, text, global_time, sign) + + def create_full_sync_text(self, text, global_time): + """ + Returns a new full-sync-text message. 
+ """ + return self._create_text(u"full-sync-text", text, global_time) + + def create_full_sync_global_time_pruning_text(self, text, global_time): + """ + Returns a new full-sync-global-time-pruning-text message. + """ + return self._create_text(u"full-sync-global-time-pruning-text", text, global_time) + + def create_in_order_text(self, text, global_time): + """ + Returns a new ASC-text message. + """ + return self._create_text(u"ASC-text", text, global_time) + + def create_out_order_text(self, text, global_time): + """ + Returns a new DESC-text message. + """ + return self._create_text(u"DESC-text", text, global_time) + + def create_protected_full_sync_text(self, text, global_time): + """ + Returns a new protected-full-sync-text message. + """ + return self._create_text(u"protected-full-sync-text", text, global_time) + + def create_dynamic_resolution_text(self, text, global_time, policy): + """ + Returns a new dynamic-resolution-text message. + """ + assert isinstance(policy, (PublicResolution.Implementation, LinearResolution.Implementation)) + return self._create_text(u"dynamic-resolution-text", text, global_time, resolution=(policy,)) + + def create_sequence_text(self, text, global_time, sequence_number): + """ + Returns a new sequence-text message. + """ + return self._create_sequence_text(u"sequence-text", text, global_time, sequence_number) diff -Nru tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/payload.py tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/payload.py --- tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/payload.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/debugcommunity/payload.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,18 @@ +from ...payload import Payload + + +class TextPayload(Payload): + + """ + TextPayload is used to hold a single string. + """ + class Implementation(Payload.Implementation): + + def __init__(self, meta, text): + assert isinstance(text, str) + super(TextPayload.Implementation, self).__init__(meta) + self._text = text + + @property + def text(self): + return self._text diff -Nru tribler-6.2.0/Tribler/dispersy/tests/dispersytestclass.py tribler-6.2.0/Tribler/dispersy/tests/dispersytestclass.py --- tribler-6.2.0/Tribler/dispersy/tests/dispersytestclass.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/dispersytestclass.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,61 @@ +import logging +logger = logging.getLogger(__name__) + +from unittest import TestCase + +from ..callback import Callback +from ..dispersy import Dispersy +from ..endpoint import StandaloneEndpoint + + +def call_on_dispersy_thread(func): + def helper(*args, **kargs): + return args[0]._dispersy.callback.call(func, args, kargs) + helper.__name__ = func.__name__ + return helper + + +class DispersyTestFunc(TestCase): + + """ + Setup and tear down Dispersy before and after each test method. + + setUp will ensure the following members exists before each test method is called: + - self._dispersy + - self._my_member + + tearDown will ensure these members are properly cleaned after each test method is finished. 
+ """ + + def on_callback_exception(self, exception, is_fatal): + logger.exception("%s", exception) + + # properly shutdown Dispersy + self._dispersy.stop() + self._dispersy = None + + # consider every exception a fatal error + return True + + def setUp(self): + super(DispersyTestFunc, self).setUp() + logger.debug("setUp") + + callback = Callback("Dispersy-Unit-Test") + callback.attach_exception_handler(self.on_callback_exception) + endpoint = StandaloneEndpoint(12345) + working_directory = u"." + database_filename = u":memory:" + + self._dispersy = Dispersy(callback, endpoint, working_directory, database_filename) + self._dispersy.start() + self._my_member = callback.call(self._dispersy.get_new_member, (u"low",)) + + def tearDown(self): + super(DispersyTestFunc, self).tearDown() + logger.debug("tearDown") + + if self._dispersy: + self._dispersy.stop() + self._dispersy = None + self._my_member = None diff -Nru tribler-6.2.0/Tribler/dispersy/tests/run_tests.sh tribler-6.2.0/Tribler/dispersy/tests/run_tests.sh --- tribler-6.2.0/Tribler/dispersy/tests/run_tests.sh 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/run_tests.sh 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,13 @@ +#!/bin/bash + +set -e + +#Go to the tests' dir parent +cd $(dirname $(readlink -f $0))/.. + +MODNAME=$(basename $PWD) +cd .. +nosetests --all-modules --traverse-namespace --cover-package=$MODNAME --cover-inclusive $MODNAME/tests/test_all.py $MODNAME/tests/test_candidates.py $* +#We could do it like this instead, it's simpler but uglier +#nosetests --all-modules --traverse-namespace --cover-package=. --cover-inclusive tests/test_all.py $* + diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_all.py tribler-6.2.0/Tribler/dispersy/tests/test_all.py --- tribler-6.2.0/Tribler/dispersy/tests/test_all.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_all.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,103 @@ +#!/usr/bin/env python +# Python 2.5 features +from __future__ import with_statement + +import sys +import os + +from tempfile import mkdtemp +from shutil import rmtree + +import unittest +from ..tool.main import main_real + +TMPDIR='dispersy_tests_temp_dir' + +def dispersyTest(callable_): + """ + Decorator that calls the test named like the method name from dispersy.script.* + """ + assert(callable_.__name__.startswith('test')) + name = callable_.__name__[4:] + #Ugly hack to otain the working copy dir name + #this file is at [...]/BRANCH_NAME/tests/test_all.py and we want to obtain BRANCH_NAME + working_copy_dirname = __file__.split(os.sep)[-3] + script='%s.script.%s' % (working_copy_dirname, name) + def caller(self): + sys.argv = ['', '--script', script, '--statedir', mkdtemp(suffix=name, dir=TMPDIR)] + callback = main_real() + if callback.exception: + raise type(callback.exception), callback.exception + caller.__name__ = callable_.__name__ + return caller + +class TestDispersyBatch(unittest.TestCase): + def __init__(self, methodname='runTest'): + unittest.TestCase.__init__(self, methodname) + + def setUp(self): + if not os.path.exists(TMPDIR): + os.makedirs(TMPDIR) + + def tearDown(self): + try: + rmtree(TMPDIR) + except: + pass + + @dispersyTest + def testDispersyBatchScript(self): + pass + @dispersyTest + def testDispersyBootstrapServers(self): + pass + + @dispersyTest + def testDispersyClassificationScript(self): + pass + + @dispersyTest + def testDispersyCryptoScript(self): + pass + + @dispersyTest + def testDispersyDestroyCommunityScript(self): + 
pass + @dispersyTest + def testDispersyDynamicSettings(self): + pass + @dispersyTest + def testDispersyIdenticalPayloadScript(self): + pass + + @dispersyTest + def testDispersyMemberTagScript(self): + pass + + @dispersyTest + def testDispersySequenceScript(self): + pass + + @dispersyTest + def testDispersyMissingMessageScript(self): + pass + + @dispersyTest + def testDispersySignatureScript(self): + pass + + @dispersyTest + def testDispersySyncScript(self): + pass + + @dispersyTest + def testDispersyTimelineScript(self): + pass + + @dispersyTest + def testDispersyUndoScript(self): + pass + + @dispersyTest + def testDispersyNeighborhoodScript(self): + pass diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_batch.py tribler-6.2.0/Tribler/dispersy/tests/test_batch.py --- tribler-6.2.0/Tribler/dispersy/tests/test_batch.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_batch.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,264 @@ +import logging +logger = logging.getLogger(__name__) + +from time import time + +from ..message import Message, BatchConfiguration +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestBatch(DispersyTestFunc): + + def __init__(self, *args, **kargs): + super(TestBatch, self).__init__(*args, **kargs) + self._big_batch_took = 0.0 + self._small_batches_took = 0.0 + + def test_max_batch_size_A(self): + return self._dispersy.callback.call(self._max_batch_size, kargs=dict(length=1000 - 1, max_size=25)) + + def test_max_batch_size_B(self): + return self._dispersy.callback.call(self._max_batch_size, kargs=dict(length=1000, max_size=25)) + + def test_max_batch_size_C(self): + return self._dispersy.callback.call(self._max_batch_size, kargs=dict(length=1000 + 1, max_size=25)) + + def _max_batch_size(self, length, max_size): + """ + Gives many messages at once, the system should process them in max-batch-size batches. 
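+ + For example, with length=1000 and max_size=25 (test B above) the messages should be + consumed as 40 batches of 25 messages each.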
+ """ + class MaxBatchSizeCommunity(DebugCommunity): + + def _initialize_meta_messages(self): + super(MaxBatchSizeCommunity, self)._initialize_meta_messages() + + batch = BatchConfiguration(max_window=0.01, max_size=max_size) + + meta = self._meta_messages[u"full-sync-text"] + meta = Message(meta.community, meta.name, meta.authentication, meta.resolution, meta.distribution, meta.destination, meta.payload, meta.check_callback, meta.handle_callback, meta.undo_callback, batch=batch) + self._meta_messages[meta.name] = meta + + community = MaxBatchSizeCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("START BIG BATCH (with max batch size)") + messages = [node.create_full_sync_text("Dprint=False, big batch #%d" % global_time, global_time) for global_time in xrange(10, 10 + length)] + + begin = time() + node.give_messages(messages, cache=True) + + # wait till the batch is processed + meta = community.get_meta_message(u"full-sync-text") + while meta in self._dispersy._batch_cache: + yield 0.1 + + end = time() + logger.debug("%2.2f seconds for _max_batch_size(%d, %d)", end - begin, length, max_size) + + count, = self._dispersy.database.execute(u"SELECT COUNT(1) FROM sync WHERE meta_message = ?", (meta.database_id,)).next() + self.assertEqual(count, len(messages)) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_one_batch_binary_duplicate(self): + """ + When multiple binary identical UDP packets are received, the duplicate packets need to be + reduced to one packet. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + global_time = 10 + message = node.create_full_sync_text("duplicates", global_time) + node.give_packets([message.packet for _ in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_two_batches_binary_duplicate(self): + """ + When multiple binary identical UDP packets are received, the duplicate packets need to be + reduced to one packet. + + The second batch needs to be dropped aswell, while the last unique packet of the second + batch is dropped when the when the database is consulted. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + global_time = 10 + # first batch + message = node.create_full_sync_text("duplicates", global_time) + node.give_packets([message.packet for _ in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # second batch + node.give_packets([message.packet for _ in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_one_batch_member_global_time_duplicate(self): + """ + A member can create invalid duplicate messages that are binary different. + + For instance, two different messages that are created by the same member and have the same + global_time, will be binary different while they are still duplicates. Because dispersy + uses the message creator and the global_time to uniquely identify messages. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + global_time = 10 + node.give_messages([node.create_full_sync_text("duplicates (%d)" % index, global_time) for index in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, meta.database_id))] + self.assertEqual(times, [global_time]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_two_batches_member_global_time_duplicate(self): + """ + A member can create invalid duplicate messages that are binary different. + + For instance, two different messages that are created by the same member and have the same + global_time, will be binary different while they are still duplicates. Because dispersy + uses the message creator and the global_time to uniquely identify messages. + + The second batch needs to be dropped aswell, while the last unique packet of the second + batch is dropped when the when the database is consulted. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + global_time = 10 + # first batch + node.give_messages([node.create_full_sync_text("duplicates (%d)" % index, global_time) for index in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, meta.database_id))] + self.assertEqual(times, [global_time]) + + # second batch + node.give_messages([node.create_full_sync_text("duplicates (%d)" % index, global_time) for index in xrange(10)]) + + # only one message may be in the database + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? 
AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, meta.database_id))] + self.assertEqual(times, [global_time]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_one_big_batch(self, length=1000): + """ + Each community is handled in its own batch, hence we can measure performace differences when + we make one large batch (using one community) and many small batches (using many different + communities). + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("START BIG BATCH") + messages = [node.create_full_sync_text("Dprint=False, big batch #%d" % global_time, global_time) for global_time in xrange(10, 10 + length)] + + begin = time() + node.give_messages(messages) + end = time() + self._big_batch_took = end - begin + + meta = community.get_meta_message(u"full-sync-text") + count, = self._dispersy.database.execute(u"SELECT COUNT(1) FROM sync WHERE meta_message = ?", (meta.database_id,)).next() + self.assertEqual(count, len(messages)) + + if self._big_batch_took and self._small_batches_took: + self.assertSmaller(self._big_batch_took, self._small_batches_took * 1.1) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_many_small_batches(self, length=1000): + """ + Each community is handled in its own batch, hence we can measure performace differences when + we make one large batch (using one community) and many small batches (using many different + communities). 
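+ + Once both timings are available, the assertion below requires the one big batch to take + less than 1.1 times as long as the many small batches.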
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("START SMALL BATCHES") + messages = [node.create_full_sync_text("Dprint=False, small batch #%d" % global_time, global_time) for global_time in xrange(10, 10 + length)] + + begin = time() + for message in messages: + node.give_message(message) + end = time() + self._small_batches_took = end - begin + + meta = community.get_meta_message(u"full-sync-text") + count, = self._dispersy.database.execute(u"SELECT COUNT(1) FROM sync WHERE meta_message = ?", (meta.database_id,)).next() + self.assertEqual(count, len(messages)) + + if self._big_batch_took and self._small_batches_took: + self.assertSmaller(self._big_batch_took, self._small_batches_took * 1.1) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_bootstrap.py tribler-6.2.0/Tribler/dispersy/tests/test_bootstrap.py --- tribler-6.2.0/Tribler/dispersy/tests/test_bootstrap.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_bootstrap.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,286 @@ +import logging +logger = logging.getLogger(__name__) +summary = logging.getLogger("test-bootstrap-summary") + +from os import environ +from unittest import skip, skipUnless +from time import time +from socket import getfqdn + +from ..candidate import BootstrapCandidate +from ..message import Message, DropMessage +from .debugcommunity.community import DebugCommunity + +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestBootstrapServers(DispersyTestFunc): + + @skipUnless(environ.get("TEST_BOOTSTRAP") == "yes", "This 'unittest' tests the external bootstrap processes, as such, this is not part of the code review process") + @call_on_dispersy_thread + def test_servers_are_up(self): + """ + Sends a dispersy-introduction-request to the trackers and counts how long it takes until the + dispersy-introduction-response is received. 
+ """ + class PingCommunity(DebugCommunity): + + def __init__(self, *args, **kargs): + # original walker callbacks (will be set during super(...).__init__) + self._original_on_introduction_response = None + + super(PingCommunity, self).__init__(*args, **kargs) + + self._pings_done = 0 + self._request = {} + self._summary = {} + self._hostname = {} + self._identifiers = {} + self._pcandidates = self._dispersy._bootstrap_candidates.values() + # self._pcandidates = [BootstrapCandidate(("130.161.211.198", 6431))] + + for candidate in self._pcandidates: + self._request[candidate.sock_addr] = {} + self._summary[candidate.sock_addr] = [] + self._hostname[candidate.sock_addr] = getfqdn(candidate.sock_addr[0]) + self._identifiers[candidate.sock_addr] = "" + + def _initialize_meta_messages(self): + super(PingCommunity, self)._initialize_meta_messages() + + # replace the callbacks for the dispersy-introduction-response message + meta = self._meta_messages[u"dispersy-introduction-response"] + self._original_on_introduction_response = meta.handle_callback + self._meta_messages[meta.name] = Message(meta.community, meta.name, meta.authentication, meta.resolution, meta.distribution, meta.destination, meta.payload, meta.check_callback, self.on_introduction_response, meta.undo_callback, meta.batch) + + @property + def dispersy_enable_candidate_walker(self): + return False + + @property + def dispersy_enable_candidate_walker_responses(self): + return True + + def dispersy_take_step(self): + test.fail("we disabled the walker") + + def on_introduction_response(self, messages): + now = time() + logger.debug("PONG") + for message in messages: + candidate = message.candidate + if candidate.sock_addr in self._request: + request_stamp = self._request[candidate.sock_addr].pop(message.payload.identifier, 0.0) + self._summary[candidate.sock_addr].append(now - request_stamp) + self._identifiers[candidate.sock_addr] = message.authentication.member.mid + return self._original_on_introduction_response(messages) + + def ping(self, now): + logger.debug("PING") + self._pings_done += 1 + for candidate in self._pcandidates: + request = self._dispersy.create_introduction_request(self, candidate, False) + self._request[candidate.sock_addr][request.payload.identifier] = now + + def summary(self): + for sock_addr, rtts in sorted(self._summary.iteritems()): + if rtts: + summary.info("%s %15s:%-5d %-30s %dx %.1f avg [%s]", + self._identifiers[sock_addr].encode("HEX"), + sock_addr[0], + sock_addr[1], + self._hostname[sock_addr], + len(rtts), + sum(rtts) / len(rtts), + ", ".join(str(round(rtt, 1)) for rtt in rtts[-10:])) + else: + summary.warning("%s:%d %s missing", sock_addr[0], sock_addr[1], self._hostname[sock_addr]) + + def finish(self, request_count, min_response_count, max_rtt): + # write graph statistics + handle = open("summary.txt", "w+") + handle.write("HOST_NAME ADDRESS REQUESTS RESPONSES\n") + for sock_addr, rtts in self._summary.iteritems(): + handle.write("%s %s:%d %d %d\n" % (self._hostname[sock_addr], sock_addr[0], sock_addr[1], self._pings_done, len(rtts))) + handle.close() + + handle = open("walk_rtts.txt", "w+") + handle.write("HOST_NAME ADDRESS RTT\n") + for sock_addr, rtts in self._summary.iteritems(): + for rtt in rtts: + handle.write("%s %s:%d %f\n" % (self._hostname[sock_addr], sock_addr[0], sock_addr[1], rtt)) + handle.close() + + for sock_addr, rtts in self._summary.iteritems(): + test.assertLess(min_response_count, len(rtts), "Only received %d/%d responses from %s:%d" % (len(rtts), request_count, 
sock_addr[0], sock_addr[1])) + test.assertLess(sum(rtts) / len(rtts), max_rtt, "Average RTT %f from %s:%d is more than allowed %f" % (sum(rtts) / len(rtts), sock_addr[0], sock_addr[1], max_rtt)) + + community = PingCommunity.create_community(self._dispersy, self._my_member) + + test = self + PING_COUNT = 10 + ASSERT_MARGIN = 0.9 + MAX_RTT = 0.5 + for _ in xrange(PING_COUNT): + community.ping(time()) + yield 5.0 + community.summary() + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + # assert when not all of the servers are responding + community.finish(PING_COUNT, PING_COUNT * ASSERT_MARGIN, MAX_RTT) + + @skip("The stress test is not actually a unittest") + @call_on_dispersy_thread + def test_perform_heavy_stress_test(self): + """ + Sends many dispersy-introduction-request messages to a single tracker and counts how long + it takes until the dispersy-introduction-response messages are received. + """ + class PingCommunity(DebugCommunity): + + def __init__(self, master, candidates): + super(PingCommunity, self).__init__(master) + + self._original_my_member = self._my_member + + self._request = {} + self._summary = {} + self._hostname = {} + self._identifiers = {} + self._pcandidates = candidates + self._queue = [] + # self._pcandidates = self._dispersy._bootstrap_candidates.values() + # self._pcandidates = [BootstrapCandidate(("130.161.211.198", 6431))] + + for candidate in self._pcandidates: + self._request[candidate.sock_addr] = {} + self._summary[candidate.sock_addr] = [] + self._hostname[candidate.sock_addr] = getfqdn(candidate.sock_addr[0]) + self._identifiers[candidate.sock_addr] = "" + + def _initialize_meta_messages(self): + super(PingCommunity, self)._initialize_meta_messages() + + # replace the callbacks for the dispersy-introduction-response message + meta = self._meta_messages[u"dispersy-introduction-response"] + self._meta_messages[meta.name] = Message(meta.community, meta.name, meta.authentication, meta.resolution, meta.distribution, meta.destination, meta.payload, self.check_introduction_response, meta.handle_callback, meta.undo_callback, meta.batch) + + @property + def dispersy_enable_candidate_walker(self): + return False + + @property + def dispersy_enable_candidate_walker_responses(self): + return True + + def dispersy_take_step(self): + test.fail("we disabled the walker") + + def create_dispersy_identity(self, sign_with_master=False, store=True, update=True, member=None): + self._my_member = member if member else self._original_my_member + try: + return super(PingCommunity, self).create_dispersy_identity(sign_with_master, store, update) + finally: + self._my_member = self._original_my_member + + def check_introduction_response(self, messages): + now = time() + for message in messages: + candidate = message.candidate + if candidate.sock_addr in self._request: + request_stamp = self._request[candidate.sock_addr].pop(message.payload.identifier, 0.0) + if request_stamp: + self._summary[candidate.sock_addr].append(now - request_stamp) + self._identifiers[candidate.sock_addr] = message.authentication.member.mid + else: + logger.warning("identifier clash %s", message.payload.identifier) + + yield DropMessage(message, "not doing anything in this script") + + def prepare_ping(self, member): + self._my_member = member + try: + for candidate in self._pcandidates: + request = self._dispersy.create_introduction_request(self, candidate, False, forward=False) +
self._queue.append((request.payload.identifier, request.packet, candidate)) + finally: + self._my_member = self._original_my_member + + def ping_from_queue(self, count): + for identifier, packet, candidate in self._queue[:count]: + self._dispersy.endpoint.send([candidate], [packet]) + self._request[candidate.sock_addr][identifier] = time() + + self._queue = self._queue[count:] + + def ping(self, member): + self._my_member = member + try: + for candidate in self._pcandidates: + request = self._dispersy.create_introduction_request(self, candidate, False) + self._request[candidate.sock_addr][request.payload.identifier] = time() + finally: + self._my_member = self._original_my_member + + def summary(self): + for sock_addr, rtts in sorted(self._summary.iteritems()): + if rtts: + logger.info("%s %15s:%-5d %-30s %dx %.1f avg [%s]", + self._identifiers[sock_addr].encode("HEX"), + sock_addr[0], + sock_addr[1], + self._hostname[sock_addr], + len(rtts), + sum(rtts) / len(rtts), + ", ".join(str(round(rtt, 1)) for rtt in rtts[-10:])) + else: + logger.warning("%s:%d %s missing", sock_addr[0], sock_addr[1], self._hostname[sock_addr]) + + MEMBERS = 10000 # must be a multiple of 100 + COMMUNITIES = 1 + ROUNDS = 10 + + logger.info("prepare communities, members, etc") + with self._dispersy.database: + candidates = [BootstrapCandidate(("130.161.211.245", 6429), False)] + communities = [PingCommunity.create_community(self._dispersy, self._my_member, candidates) for _ in xrange(COMMUNITIES)] + members = [self._dispersy.get_new_member(u"low") for _ in xrange(MEMBERS)] + + for community in communities: + for member in members: + community.create_dispersy_identity(member=member) + + logger.info("prepare request messages") + for _ in xrange(ROUNDS): + for community in communities: + for member in members: + community.prepare_ping(member) + + yield 5.0 + yield 15.0 + + logger.info("ping-ping") + BEGIN = time() + for _ in xrange(ROUNDS): + for community in communities: + for _ in xrange(MEMBERS / 100): + community.ping_from_queue(100) + yield 0.1 + + for community in communities: + community.summary() + END = time() + + yield 10.0 + logger.info("--- did %d requests per community", ROUNDS * MEMBERS) + logger.info("--- spread over %.2f seconds", END - BEGIN) + for community in communities: + community.summary() + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_callback.py tribler-6.2.0/Tribler/dispersy/tests/test_callback.py --- tribler-6.2.0/Tribler/dispersy/tests/test_callback.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_callback.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,106 @@ +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestCallback(DispersyTestFunc): + + def previous_performance_profile(self): + """ +Run on MASAQ Dell laptop 23/04/12 +> python -O Tribler/Main/dispersy.py --enable-dispersy-script --script dispersy-callback --yappi + +YAPPI: 1x 2.953s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/callback.py._loop:506 +YAPPI: 210020x 0.964s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/callback.py.register:212 +YAPPI: 520985x 0.390s /usr/lib/python2.7/threading.py.isSet:380 +YAPPI: 4x 0.104s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_delay:81 +YAPPI: 3x 0.100s 
/home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register:68 +YAPPI: 110000x 0.092s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.generator_func:95 +YAPPI: 100000x 0.083s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_delay_func:82 +YAPPI: 100000x 0.082s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_func:69 +YAPPI: 867x 0.024s /usr/lib/python2.7/threading.py.wait:235 +YAPPI: 5x 0.012s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.generator:94 +YAPPI: 867x 0.007s /usr/lib/python2.7/threading.py.wait:400 +YAPPI: 379x 0.005s /home/boudewijn/local/lib/python2.7/site-packages/yappi.py.__init__:50 +YAPPI: 867x 0.003s /usr/lib/python2.7/threading.py._acquire_restore:223 +YAPPI: 1x 0.003s Tribler/Main/dispersy.py.start:106 +YAPPI: 891x 0.002s /usr/lib/python2.7/threading.py._is_owned:226 +YAPPI: 867x 0.002s /usr/lib/python2.7/threading.py._release_save:220 +YAPPI: 353x 0.002s /home/boudewijn/local/lib/python2.7/site-packages/yappi.py.func_enumerator:72 +YAPPI: 48x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/conversion.py.define_meta_message:223 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/timeline.py.Timeline:14 +YAPPI: 8x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dprint.py.dprint:595 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/script.py.:2 +YAPPI: 2x 0.001s /usr/lib/python2.7/sre_parse.py._parse:379 +YAPPI: 8x 0.001s /usr/lib/python2.7/traceback.py.extract_stack:280 +YAPPI: 29x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/message.py.__init__:499 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/RawServer.py.listen_forever:129 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/community.py.:9 +YAPPI: 194x 0.001s /usr/lib/python2.7/sre_parse.py.__next:182 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/debugcommunity.py.:1 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/lencoder.py.:3 +YAPPI: 3x 0.000s /usr/lib/python2.7/sre_compile.py._compile:32 +YAPPI: 191x 0.000s /usr/lib/python2.7/sre_parse.py.get:201 +YAPPI: 49x 0.000s /usr/lib/python2.7/linecache.py.checkcache:43 +YAPPI: 185x 0.000s /usr/lib/python2.7/sre_parse.py.append:138 +YAPPI: 4x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dispersy.py._store:1991 +YAPPI: 1x 0.000s /usr/lib/python2.7/encodings/hex_codec.py.:8 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/community.py.create_community:50 +YAPPI: 16x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/SocketHandler.py.handle_events:455 +YAPPI: 109x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/database.py.execute:149 +YAPPI: 49x 0.000s /usr/lib/python2.7/linecache.py.getline:13 +YAPPI: 5x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dispersy.py._on_incoming_packets:1622 +YAPPI: 42x 0.000s 
/usr/lib/python2.7/threading.py.acquire:121 +YAPPI: 57x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/clock.py.get_time:16 +YAPPI: 1x 0.000s /usr/lib/python2.7/sre_compile.py._compile_info:361 +YAPPI: 23x 0.000s /usr/lib/python2.7/threading.py.set:385 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/NATFirewall/guessip.py.get_my_wan_ip_linux:104 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/database.py.__init__:19 +YAPPI: 42x 0.000s /usr/lib/python2.7/threading.py.release:141 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/timeline.py.authorize:237 +YAPPI: 4x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/conversion.py._decode_message:1266 +YAPPI: 5x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/member.py.__init__:116 +""" + pass + + @call_on_dispersy_thread + def test_register(self): + def register_func(): + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(100000): + register(register_func) + + while container[0] < 100000: + yield 1.0 + + @call_on_dispersy_thread + def test_register_delay(self): + def register_delay_func(): + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(100000): + register(register_delay_func, delay=1.0) + + while container[0] < 100000: + yield 1.0 + + @call_on_dispersy_thread + def test_generator(self): + def generator_func(): + for _ in xrange(10): + yield 0.1 + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(10000): + register(generator_func) + + while container[0] < 10000: + yield 1.0 diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_candidates.py tribler-6.2.0/Tribler/dispersy/tests/test_candidates.py --- tribler-6.2.0/Tribler/dispersy/tests/test_candidates.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_candidates.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,643 @@ +import logging +logger = logging.getLogger(__name__) + +from fractions import gcd +from itertools import combinations, islice +from time import time +from unittest import skip + +from ..candidate import CANDIDATE_ELIGIBLE_DELAY +from ..tool.tracker import TrackerCommunity +from .debugcommunity.community import DebugCommunity +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + +class NoBootstrapDebugCommunity(DebugCommunity): + + def _iter_bootstrap(self, once=False): + while True: + yield None + + if once: + break + + +class TestCandidates(DispersyTestFunc): + """ + Tests candidate interface. + + These unit tests cover the methods: + - dispersy_yield_candidates + - dispersy_yield_verified_candidates + - dispersy_get_introduce_candidate + - dispersy_get_walk_candidate + + Most tests are performed with check_candidates; this method takes ALL_FLAGS, a list where every entry is a string.
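For example, ["wr", "s"] would describe one candidate that SELF walked to and received a response from, plus + one candidate that SELF received an incoming walk from (see the character glossary below).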
The + following characters can be put in the string to enable a candidate property: + - t: SELF knows the candidate is tunnelled + - w: SELF has walked towards the candidate (but has not yet received a response) + - r: SELF has received a walk response from the candidate + - e: CANDIDATE_ELIGIBLE_DELAY seconds ago SELF performed a successful walk to candidate + - s: SELF has received an incoming walk from the candidate + - i: SELF has been introduced to the candidate + + Note that many variations of flags exist; these variations are generated using test_print_unittest_combinations. + """ + + @skip("This test is needed to set up all possible candidate combinations for unit tests") + def test_print_unittest_combinations(self): + """ + Prints combinations of unit tests. + """ + print " def test_no_candidates(self): return self.check_candidates([])" + flags = "twresi" + options = [] + for length in xrange(len(flags)): + for s in combinations(flags, length): + s_func = "_" + "".join(s) if s else "" + s_args = '"%s"' % "".join(s) + s_opt = "".join(s) + options.append(s_opt) + + print " def test_one%s_candidate(self): return self.check_candidates([%s])" % (s_func, s_args) + print " def test_two%s_candidates(self): return self.check_candidates([%s, %s])" % (s_func, s_args, s_args) + print " def test_many%s_candidates(self): return self.check_candidates([%s] * 22)" % (s_func, s_args) + + for length in xrange(1, len(options) + 1): + print " def test_mixed_%d_candidates(self): return self.check_candidates(%s)" % (length, options[:length]) + + def test_no_candidates(self): return self.check_candidates([]) + def test_one_candidate(self): return self.check_candidates([""]) + def test_two_candidates(self): return self.check_candidates(["", ""]) + def test_many_candidates(self): return self.check_candidates([""] * 22) + def test_one_t_candidate(self): return self.check_candidates(["t"]) + def test_two_t_candidates(self): return self.check_candidates(["t", "t"]) + def test_many_t_candidates(self): return self.check_candidates(["t"] * 22) + def test_one_w_candidate(self): return self.check_candidates(["w"]) + def test_two_w_candidates(self): return self.check_candidates(["w", "w"]) + def test_many_w_candidates(self): return self.check_candidates(["w"] * 22) + def test_one_r_candidate(self): return self.check_candidates(["r"]) + def test_two_r_candidates(self): return self.check_candidates(["r", "r"]) + def test_many_r_candidates(self): return self.check_candidates(["r"] * 22) + def test_one_e_candidate(self): return self.check_candidates(["e"]) + def test_two_e_candidates(self): return self.check_candidates(["e", "e"]) + def test_many_e_candidates(self): return self.check_candidates(["e"] * 22) + def test_one_s_candidate(self): return self.check_candidates(["s"]) + def test_two_s_candidates(self): return self.check_candidates(["s", "s"]) + def test_many_s_candidates(self): return self.check_candidates(["s"] * 22) + def test_one_i_candidate(self): return self.check_candidates(["i"]) + def test_two_i_candidates(self): return self.check_candidates(["i", "i"]) + def test_many_i_candidates(self): return self.check_candidates(["i"] * 22) + def test_one_tw_candidate(self): return self.check_candidates(["tw"]) + def test_two_tw_candidates(self): return self.check_candidates(["tw", "tw"]) + def test_many_tw_candidates(self): return self.check_candidates(["tw"] * 22) + def test_one_tr_candidate(self): return self.check_candidates(["tr"]) + def test_two_tr_candidates(self): return self.check_candidates(["tr", "tr"])
+ def test_many_tr_candidates(self): return self.check_candidates(["tr"] * 22) + def test_one_te_candidate(self): return self.check_candidates(["te"]) + def test_two_te_candidates(self): return self.check_candidates(["te", "te"]) + def test_many_te_candidates(self): return self.check_candidates(["te"] * 22) + def test_one_ts_candidate(self): return self.check_candidates(["ts"]) + def test_two_ts_candidates(self): return self.check_candidates(["ts", "ts"]) + def test_many_ts_candidates(self): return self.check_candidates(["ts"] * 22) + def test_one_ti_candidate(self): return self.check_candidates(["ti"]) + def test_two_ti_candidates(self): return self.check_candidates(["ti", "ti"]) + def test_many_ti_candidates(self): return self.check_candidates(["ti"] * 22) + def test_one_wr_candidate(self): return self.check_candidates(["wr"]) + def test_two_wr_candidates(self): return self.check_candidates(["wr", "wr"]) + def test_many_wr_candidates(self): return self.check_candidates(["wr"] * 22) + def test_one_we_candidate(self): return self.check_candidates(["we"]) + def test_two_we_candidates(self): return self.check_candidates(["we", "we"]) + def test_many_we_candidates(self): return self.check_candidates(["we"] * 22) + def test_one_ws_candidate(self): return self.check_candidates(["ws"]) + def test_two_ws_candidates(self): return self.check_candidates(["ws", "ws"]) + def test_many_ws_candidates(self): return self.check_candidates(["ws"] * 22) + def test_one_wi_candidate(self): return self.check_candidates(["wi"]) + def test_two_wi_candidates(self): return self.check_candidates(["wi", "wi"]) + def test_many_wi_candidates(self): return self.check_candidates(["wi"] * 22) + def test_one_re_candidate(self): return self.check_candidates(["re"]) + def test_two_re_candidates(self): return self.check_candidates(["re", "re"]) + def test_many_re_candidates(self): return self.check_candidates(["re"] * 22) + def test_one_rs_candidate(self): return self.check_candidates(["rs"]) + def test_two_rs_candidates(self): return self.check_candidates(["rs", "rs"]) + def test_many_rs_candidates(self): return self.check_candidates(["rs"] * 22) + def test_one_ri_candidate(self): return self.check_candidates(["ri"]) + def test_two_ri_candidates(self): return self.check_candidates(["ri", "ri"]) + def test_many_ri_candidates(self): return self.check_candidates(["ri"] * 22) + def test_one_es_candidate(self): return self.check_candidates(["es"]) + def test_two_es_candidates(self): return self.check_candidates(["es", "es"]) + def test_many_es_candidates(self): return self.check_candidates(["es"] * 22) + def test_one_ei_candidate(self): return self.check_candidates(["ei"]) + def test_two_ei_candidates(self): return self.check_candidates(["ei", "ei"]) + def test_many_ei_candidates(self): return self.check_candidates(["ei"] * 22) + def test_one_si_candidate(self): return self.check_candidates(["si"]) + def test_two_si_candidates(self): return self.check_candidates(["si", "si"]) + def test_many_si_candidates(self): return self.check_candidates(["si"] * 22) + def test_one_twr_candidate(self): return self.check_candidates(["twr"]) + def test_two_twr_candidates(self): return self.check_candidates(["twr", "twr"]) + def test_many_twr_candidates(self): return self.check_candidates(["twr"] * 22) + def test_one_twe_candidate(self): return self.check_candidates(["twe"]) + def test_two_twe_candidates(self): return self.check_candidates(["twe", "twe"]) + def test_many_twe_candidates(self): return self.check_candidates(["twe"] * 22) + def 
test_one_tws_candidate(self): return self.check_candidates(["tws"]) + def test_two_tws_candidates(self): return self.check_candidates(["tws", "tws"]) + def test_many_tws_candidates(self): return self.check_candidates(["tws"] * 22) + def test_one_twi_candidate(self): return self.check_candidates(["twi"]) + def test_two_twi_candidates(self): return self.check_candidates(["twi", "twi"]) + def test_many_twi_candidates(self): return self.check_candidates(["twi"] * 22) + def test_one_tre_candidate(self): return self.check_candidates(["tre"]) + def test_two_tre_candidates(self): return self.check_candidates(["tre", "tre"]) + def test_many_tre_candidates(self): return self.check_candidates(["tre"] * 22) + def test_one_trs_candidate(self): return self.check_candidates(["trs"]) + def test_two_trs_candidates(self): return self.check_candidates(["trs", "trs"]) + def test_many_trs_candidates(self): return self.check_candidates(["trs"] * 22) + def test_one_tri_candidate(self): return self.check_candidates(["tri"]) + def test_two_tri_candidates(self): return self.check_candidates(["tri", "tri"]) + def test_many_tri_candidates(self): return self.check_candidates(["tri"] * 22) + def test_one_tes_candidate(self): return self.check_candidates(["tes"]) + def test_two_tes_candidates(self): return self.check_candidates(["tes", "tes"]) + def test_many_tes_candidates(self): return self.check_candidates(["tes"] * 22) + def test_one_tei_candidate(self): return self.check_candidates(["tei"]) + def test_two_tei_candidates(self): return self.check_candidates(["tei", "tei"]) + def test_many_tei_candidates(self): return self.check_candidates(["tei"] * 22) + def test_one_tsi_candidate(self): return self.check_candidates(["tsi"]) + def test_two_tsi_candidates(self): return self.check_candidates(["tsi", "tsi"]) + def test_many_tsi_candidates(self): return self.check_candidates(["tsi"] * 22) + def test_one_wre_candidate(self): return self.check_candidates(["wre"]) + def test_two_wre_candidates(self): return self.check_candidates(["wre", "wre"]) + def test_many_wre_candidates(self): return self.check_candidates(["wre"] * 22) + def test_one_wrs_candidate(self): return self.check_candidates(["wrs"]) + def test_two_wrs_candidates(self): return self.check_candidates(["wrs", "wrs"]) + def test_many_wrs_candidates(self): return self.check_candidates(["wrs"] * 22) + def test_one_wri_candidate(self): return self.check_candidates(["wri"]) + def test_two_wri_candidates(self): return self.check_candidates(["wri", "wri"]) + def test_many_wri_candidates(self): return self.check_candidates(["wri"] * 22) + def test_one_wes_candidate(self): return self.check_candidates(["wes"]) + def test_two_wes_candidates(self): return self.check_candidates(["wes", "wes"]) + def test_many_wes_candidates(self): return self.check_candidates(["wes"] * 22) + def test_one_wei_candidate(self): return self.check_candidates(["wei"]) + def test_two_wei_candidates(self): return self.check_candidates(["wei", "wei"]) + def test_many_wei_candidates(self): return self.check_candidates(["wei"] * 22) + def test_one_wsi_candidate(self): return self.check_candidates(["wsi"]) + def test_two_wsi_candidates(self): return self.check_candidates(["wsi", "wsi"]) + def test_many_wsi_candidates(self): return self.check_candidates(["wsi"] * 22) + def test_one_res_candidate(self): return self.check_candidates(["res"]) + def test_two_res_candidates(self): return self.check_candidates(["res", "res"]) + def test_many_res_candidates(self): return self.check_candidates(["res"] * 22) + def 
test_one_rei_candidate(self): return self.check_candidates(["rei"]) + def test_two_rei_candidates(self): return self.check_candidates(["rei", "rei"]) + def test_many_rei_candidates(self): return self.check_candidates(["rei"] * 22) + def test_one_rsi_candidate(self): return self.check_candidates(["rsi"]) + def test_two_rsi_candidates(self): return self.check_candidates(["rsi", "rsi"]) + def test_many_rsi_candidates(self): return self.check_candidates(["rsi"] * 22) + def test_one_esi_candidate(self): return self.check_candidates(["esi"]) + def test_two_esi_candidates(self): return self.check_candidates(["esi", "esi"]) + def test_many_esi_candidates(self): return self.check_candidates(["esi"] * 22) + def test_one_twre_candidate(self): return self.check_candidates(["twre"]) + def test_two_twre_candidates(self): return self.check_candidates(["twre", "twre"]) + def test_many_twre_candidates(self): return self.check_candidates(["twre"] * 22) + def test_one_twrs_candidate(self): return self.check_candidates(["twrs"]) + def test_two_twrs_candidates(self): return self.check_candidates(["twrs", "twrs"]) + def test_many_twrs_candidates(self): return self.check_candidates(["twrs"] * 22) + def test_one_twri_candidate(self): return self.check_candidates(["twri"]) + def test_two_twri_candidates(self): return self.check_candidates(["twri", "twri"]) + def test_many_twri_candidates(self): return self.check_candidates(["twri"] * 22) + def test_one_twes_candidate(self): return self.check_candidates(["twes"]) + def test_two_twes_candidates(self): return self.check_candidates(["twes", "twes"]) + def test_many_twes_candidates(self): return self.check_candidates(["twes"] * 22) + def test_one_twei_candidate(self): return self.check_candidates(["twei"]) + def test_two_twei_candidates(self): return self.check_candidates(["twei", "twei"]) + def test_many_twei_candidates(self): return self.check_candidates(["twei"] * 22) + def test_one_twsi_candidate(self): return self.check_candidates(["twsi"]) + def test_two_twsi_candidates(self): return self.check_candidates(["twsi", "twsi"]) + def test_many_twsi_candidates(self): return self.check_candidates(["twsi"] * 22) + def test_one_tres_candidate(self): return self.check_candidates(["tres"]) + def test_two_tres_candidates(self): return self.check_candidates(["tres", "tres"]) + def test_many_tres_candidates(self): return self.check_candidates(["tres"] * 22) + def test_one_trei_candidate(self): return self.check_candidates(["trei"]) + def test_two_trei_candidates(self): return self.check_candidates(["trei", "trei"]) + def test_many_trei_candidates(self): return self.check_candidates(["trei"] * 22) + def test_one_trsi_candidate(self): return self.check_candidates(["trsi"]) + def test_two_trsi_candidates(self): return self.check_candidates(["trsi", "trsi"]) + def test_many_trsi_candidates(self): return self.check_candidates(["trsi"] * 22) + def test_one_tesi_candidate(self): return self.check_candidates(["tesi"]) + def test_two_tesi_candidates(self): return self.check_candidates(["tesi", "tesi"]) + def test_many_tesi_candidates(self): return self.check_candidates(["tesi"] * 22) + def test_one_wres_candidate(self): return self.check_candidates(["wres"]) + def test_two_wres_candidates(self): return self.check_candidates(["wres", "wres"]) + def test_many_wres_candidates(self): return self.check_candidates(["wres"] * 22) + def test_one_wrei_candidate(self): return self.check_candidates(["wrei"]) + def test_two_wrei_candidates(self): return self.check_candidates(["wrei", "wrei"]) + 
def test_many_wrei_candidates(self): return self.check_candidates(["wrei"] * 22) + def test_one_wrsi_candidate(self): return self.check_candidates(["wrsi"]) + def test_two_wrsi_candidates(self): return self.check_candidates(["wrsi", "wrsi"]) + def test_many_wrsi_candidates(self): return self.check_candidates(["wrsi"] * 22) + def test_one_wesi_candidate(self): return self.check_candidates(["wesi"]) + def test_two_wesi_candidates(self): return self.check_candidates(["wesi", "wesi"]) + def test_many_wesi_candidates(self): return self.check_candidates(["wesi"] * 22) + def test_one_resi_candidate(self): return self.check_candidates(["resi"]) + def test_two_resi_candidates(self): return self.check_candidates(["resi", "resi"]) + def test_many_resi_candidates(self): return self.check_candidates(["resi"] * 22) + def test_one_twres_candidate(self): return self.check_candidates(["twres"]) + def test_two_twres_candidates(self): return self.check_candidates(["twres", "twres"]) + def test_many_twres_candidates(self): return self.check_candidates(["twres"] * 22) + def test_one_twrei_candidate(self): return self.check_candidates(["twrei"]) + def test_two_twrei_candidates(self): return self.check_candidates(["twrei", "twrei"]) + def test_many_twrei_candidates(self): return self.check_candidates(["twrei"] * 22) + def test_one_twrsi_candidate(self): return self.check_candidates(["twrsi"]) + def test_two_twrsi_candidates(self): return self.check_candidates(["twrsi", "twrsi"]) + def test_many_twrsi_candidates(self): return self.check_candidates(["twrsi"] * 22) + def test_one_twesi_candidate(self): return self.check_candidates(["twesi"]) + def test_two_twesi_candidates(self): return self.check_candidates(["twesi", "twesi"]) + def test_many_twesi_candidates(self): return self.check_candidates(["twesi"] * 22) + def test_one_tresi_candidate(self): return self.check_candidates(["tresi"]) + def test_two_tresi_candidates(self): return self.check_candidates(["tresi", "tresi"]) + def test_many_tresi_candidates(self): return self.check_candidates(["tresi"] * 22) + def test_one_wresi_candidate(self): return self.check_candidates(["wresi"]) + def test_two_wresi_candidates(self): return self.check_candidates(["wresi", "wresi"]) + def test_many_wresi_candidates(self): return self.check_candidates(["wresi"] * 22) + def test_mixed_1_candidates(self): return self.check_candidates(['']) + def test_mixed_2_candidates(self): return self.check_candidates(['', 't']) + def test_mixed_3_candidates(self): return self.check_candidates(['', 't', 'w']) + def test_mixed_4_candidates(self): return self.check_candidates(['', 't', 'w', 'r']) + def test_mixed_5_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e']) + def test_mixed_6_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's']) + def test_mixed_7_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i']) + def test_mixed_8_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw']) + def test_mixed_9_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr']) + def test_mixed_10_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te']) + def test_mixed_11_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts']) + def test_mixed_12_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti']) + def 
test_mixed_13_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr']) + def test_mixed_14_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we']) + def test_mixed_15_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws']) + def test_mixed_16_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi']) + def test_mixed_17_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're']) + def test_mixed_18_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs']) + def test_mixed_19_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri']) + def test_mixed_20_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es']) + def test_mixed_21_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei']) + def test_mixed_22_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si']) + def test_mixed_23_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr']) + def test_mixed_24_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe']) + def test_mixed_25_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws']) + def test_mixed_26_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi']) + def test_mixed_27_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre']) + def test_mixed_28_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs']) + def test_mixed_29_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri']) + def test_mixed_30_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes']) + def test_mixed_31_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 
'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei']) + def test_mixed_32_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi']) + def test_mixed_33_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre']) + def test_mixed_34_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs']) + def test_mixed_35_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri']) + def test_mixed_36_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes']) + def test_mixed_37_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei']) + def test_mixed_38_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi']) + def test_mixed_39_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res']) + def test_mixed_40_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei']) + def test_mixed_41_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi']) + def test_mixed_42_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi']) + def test_mixed_43_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 
'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre']) + def test_mixed_44_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs']) + def test_mixed_45_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri']) + def test_mixed_46_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes']) + def test_mixed_47_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei']) + def test_mixed_48_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi']) + def test_mixed_49_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres']) + def test_mixed_50_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei']) + def test_mixed_51_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi']) + def test_mixed_52_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 
'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi']) + def test_mixed_53_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres']) + def test_mixed_54_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei']) + def test_mixed_55_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi']) + def test_mixed_56_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi']) + def test_mixed_57_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi']) + def test_mixed_58_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres']) + def test_mixed_59_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres', 'twrei']) + def test_mixed_60_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 
'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres', 'twrei', 'twrsi']) + def test_mixed_61_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres', 'twrei', 'twrsi', 'twesi']) + def test_mixed_62_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres', 'twrei', 'twrsi', 'twesi', 'tresi']) + def test_mixed_63_candidates(self): return self.check_candidates(['', 't', 'w', 'r', 'e', 's', 'i', 'tw', 'tr', 'te', 'ts', 'ti', 'wr', 'we', 'ws', 'wi', 're', 'rs', 'ri', 'es', 'ei', 'si', 'twr', 'twe', 'tws', 'twi', 'tre', 'trs', 'tri', 'tes', 'tei', 'tsi', 'wre', 'wrs', 'wri', 'wes', 'wei', 'wsi', 'res', 'rei', 'rsi', 'esi', 'twre', 'twrs', 'twri', 'twes', 'twei', 'twsi', 'tres', 'trei', 'trsi', 'tesi', 'wres', 'wrei', 'wrsi', 'wesi', 'resi', 'twres', 'twrei', 'twrsi', 'twesi', 'tresi', 'wresi']) + + def create_candidates(self, community, all_flags): + assert isinstance(all_flags, list) + assert all(isinstance(flags, str) for flags in all_flags) + def generator(): + for port, flags in enumerate(all_flags, 1): + address = ("127.0.0.1", port) + tunnel = "t" in flags + yield community.create_candidate(address, tunnel, address, address, u"unknown") + return list(generator()) + + def set_timestamps(self, community, candidates, all_flags): + assert isinstance(candidates, list) + assert isinstance(all_flags, list) + assert all(isinstance(flags, str) for flags in all_flags) + now = time() + for flags, candidate in zip(all_flags, candidates): + if "w" in flags: + # SELF has performed an outgoing walk to CANDIDATE + candidate.walk(now, 10.0) + if "r" in flags: + # SELF has received an incoming walk response from CANDIDATE + candidate.walk_response() + if "e" in flags: + # CANDIDATE_ELIGIBLE_DELAY seconds ago SELF performed a successful walk to CANDIDATE + candidate.walk(now - CANDIDATE_ELIGIBLE_DELAY, 10.0) + candidate.walk_response() + if "s" in flags: + # SELF has received an incoming walk request from CANDIDATE + candidate.stumble(now) + if "i" in flags: + # SELF has received an incoming walk response which introduced CANDIDATE + candidate.intro(now) + + return now + + def select_candidates(self, candidates, all_flags): + def filter_func(flags): + """ + Returns True when the flags correspond with a Candidate that should be returned by + dispersy_yield_candidates. + """ + return ("s" in flags or + "e" in flags or + "i" in flags or + ("w" in flags and "r" in flags)) + + return [candidate for flags, candidate in zip(all_flags, candidates) if filter_func(flags)] + + def select_verified_candidates(self, candidates, all_flags): + def filter_func(flags): + """ + Returns True when the flags correspond with a Candidate that should be returned by + dispersy_yield_verified_candidates. 
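+ A candidate counts as verified once SELF has heard from it directly: an incoming walk ('s'), an eligible + earlier walk ('e'), or an outgoing walk that received a response ('w' together with 'r').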
+ """ + return ("s" in flags or + "e" in flags or + ("w" in flags and "r" in flags)) + + return [candidate for flags, candidate in zip(all_flags, candidates) if filter_func(flags)] + + def select_walk_candidates(self, candidates, all_flags): + def filter_func(flags): + """ + Returns True when the flags correspond with a Candidate that should be returned by + dispersy_get_walk_candidate. + """ + if "e" in flags: + # the candidate has 'eligible' flag, i.e. it is known and we walked to it at least + # CANDIDATE_ELIGIBLE_DELAY seconds ago + return True + + if "s" in flags and not "w" in flags: + # the candidate has the 'stumble' but not the 'walk' flag, i.e. it is known but we have not recently + # walked towards it + return True + + if "i" in flags and not "w" in flags: + # the candidate has the 'introduce' but not the 'walk' flag, i.e. it is known but we have not recently + # walked towards it + return True + + return False + + return [candidate for flags, candidate in zip(all_flags, candidates) if filter_func(flags)] + + def select_introduce_candidates(self, candidates, all_flags, exclude_candidate=None): + def filter_func(flags, candidate): + """ + Returns True when the flags correspond with a Candidate that should be returned by + dispersy_get_introduce_candidate. + """ + if exclude_candidate and exclude_candidate == candidate: + return + + if exclude_candidate and not exclude_candidate.tunnel and candidate.tunnel: + return + + if "s" in flags: + return "s" + + if ("e" in flags or + ("w" in flags and "r" in flags)): + return "w" + + # introduce candidates are chosen from two pools, W and S. With both pools chosen equally often, regardless of + # the size of the pools. Hence, candidates in smaller pools will be represented more often in the result. 
+ + W = [candidate + for flags, candidate + in zip(all_flags, candidates) + if filter_func(flags, candidate) == "w"] + + S = [candidate + for flags, candidate + in zip(all_flags, candidates) + if filter_func(flags, candidate) == "s"] + + if W and S: + factor = gcd(len(S), len(W)) + pool = (W * (len(S) / factor)) + (S * (len(W) / factor)) + else: + pool = W + S + + return sorted(pool) + + @call_on_dispersy_thread + def check_candidates(self, all_flags): + assert isinstance(all_flags, list) + assert all(isinstance(flags, str) for flags in all_flags) + + def compare(selection, actual): + selection = ["%s:%d" % c.sock_addr if c else None for c in selection] + actual = ["%s:%d" % c.sock_addr if c else None for c in actual] + try: + self.assertEquals(set(selection), set(actual)) + except: + print "FLAGS ", all_flags + print "SELECT", selection + print "ACTUAL", actual + raise + + # IC determines the number of times that an interface method is called; it should be more than zero and more + # than the length of ALL_FLAGS to ensure the tests can succeed + ic = max(10, len(all_flags) * 2) + # IIC determines the number of iterations for which an iterator interface method is used; it can be very large + # since the iterators should end well before this number is reached + iic = 666 + + assert isinstance(ic, int) + assert isinstance(iic, int) + assert len(all_flags) < iic + community = NoBootstrapDebugCommunity.create_community(self._dispersy, self._my_member) + candidates = self.create_candidates(community, all_flags) + + # yield_candidates + self.set_timestamps(community, candidates, all_flags) + selection = self.select_candidates(candidates, all_flags) + actual_list = [islice(community.dispersy_yield_candidates(), iic) for _ in xrange(ic)] + logger.debug("A] candidates: %s", map(str, candidates)) + logger.debug("A] selection: %s", map(str, selection)) + logger.debug("A] actual_list: %s", map(str, actual_list)) + for actual in actual_list: + compare(selection, actual) + + # yield_verified_candidates + self.set_timestamps(community, candidates, all_flags) + selection = self.select_verified_candidates(candidates, all_flags) + actual_list = [islice(community.dispersy_yield_verified_candidates(), iic) for _ in xrange(ic)] + logger.debug("B] candidates: %s", map(str, candidates)) + logger.debug("B] selection: %s", map(str, selection)) + logger.debug("B] actual_list: %s", map(str, actual_list)) + for actual in actual_list: + compare(selection, actual) + + # get_introduce_candidate (no exclusion) + self.set_timestamps(community, candidates, all_flags) + selection = self.select_introduce_candidates(candidates, all_flags) or [None] + actual = [community.dispersy_get_introduce_candidate() for _ in xrange(ic)] + logger.debug("D] candidates: %s", map(str, candidates)) + logger.debug("D] selection: %s", map(str, selection)) + logger.debug("D] actual: %s", map(str, actual)) + compare(selection, actual) + + # get_introduce_candidate (with exclusion) + self.set_timestamps(community, candidates, all_flags) + for candidate in candidates: + selection = self.select_introduce_candidates(candidates, all_flags, candidate) or [None] + actual = [community.dispersy_get_introduce_candidate(candidate) for _ in xrange(ic)] + logger.debug("E] exclude: %s", str(candidate)) + logger.debug("E] candidates: %s", map(str, candidates)) + logger.debug("E] selection: %s", map(str, selection)) + logger.debug("E] actual: %s", map(str, actual)) + compare(selection, actual) + + # get_walk_candidate + # Note that we must perform the CANDIDATE.WALK to ensure
this candidate is not iterated again. Because of this, + # this test must be done last. + self.set_timestamps(community, candidates, all_flags) + selection = self.select_walk_candidates(candidates, all_flags) + logger.debug("C] candidates: %s", map(str, candidates)) + logger.debug("C] selection: %s", map(str, selection)) + for _ in xrange(len(selection)): + candidate = community.dispersy_get_walk_candidate() + self.assertNotEquals(candidate, None) + self.assertIn("%s:%d" % candidate.sock_addr, ["%s:%d" % c.sock_addr for c in selection]) + candidate.walk(time(), 10.5) + for _ in xrange(5): + candidate = community.dispersy_get_walk_candidate() + self.assertEquals(candidate, None) + + @call_on_dispersy_thread + def test_get_introduce_candidate(self, community_create_method=DebugCommunity.create_community): + community = community_create_method(self._dispersy, self._my_member) + candidates = self.create_candidates(community, [""] * 5) + expected = [None, ("127.0.0.1", 1), ("127.0.0.1", 2), ("127.0.0.1", 3), ("127.0.0.1", 4)] + now = time() + got = [] + for candidate in candidates: + candidate.stumble(now) + introduce = community.dispersy_get_introduce_candidate(candidate) + got.append(introduce.sock_addr if introduce else None) + self.assertEquals(expected, got) + + return community, candidates + + @call_on_dispersy_thread + def test_tracker_get_introduce_candidate(self, community_create_method=TrackerCommunity.create_community): + community, candidates = self.test_get_introduce_candidate(community_create_method) + + # trackers should not prefer either stumbled or walked candidates, i.e. it should not return + # candidate 1 more than once/in the wrong position + now = time() + candidates[0].walk(now, 10.5) + candidates[0].walk_response() + expected = [("127.0.0.1", 5), ("127.0.0.1", 1), ("127.0.0.1", 2), ("127.0.0.1", 3), ("127.0.0.1", 4)] + got = [] + for candidate in candidates: + candidate.stumble(now) + introduce = community.dispersy_get_introduce_candidate(candidate) + got.append(introduce.sock_addr if introduce else None) + self.assertEquals(expected, got) + + @call_on_dispersy_thread + def test_introduction_probabilities(self): + c = DebugCommunity.create_community(self._dispersy, self._my_member) + + candidates = [] + for i in range(2): + address = ("127.0.0.1", i + 1) + candidate = c.create_candidate(address, False, address, address, u"unknown") + candidates.append(candidate) + + # mark 1 candidate as walk, 1 as stumble + now = time() + candidates[0].walk(now, 10.5) + candidates[0].walk_response() + candidates[1].stumble(now) + + # fetch candidates + returned_walked_candidate = 0 + expected_walked_range = range(4500, 5500) + for i in xrange(10000): + candidate = c.dispersy_get_introduce_candidate() + returned_walked_candidate += 1 if candidate.sock_addr[1] == 1 else 0 + + assert returned_walked_candidate in expected_walked_range + + @call_on_dispersy_thread + def test_walk_probabilities(self): + c = DebugCommunity.create_community(self._dispersy, self._my_member) + + candidates = [] + for i in range(3): + address = ("127.0.0.1", i + 1) + candidate = c.create_candidate(address, False, address, address, u"unknown") + candidates.append(candidate) + + # mark 1 candidate as walk, 1 as stumble + now = time() + candidates[0].walk(now - CANDIDATE_ELIGIBLE_DELAY, 10.5) + candidates[0].walk_response() + candidates[1].stumble(now) + candidates[2].intro(now) + + # fetch candidates + returned_walked_candidate = 0 + expected_walked_range = range(4497, 5475) + returned_stumble_candidate = 0 + 
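# the expected ranges below encode the intended walker bias over the 10000 draws: the eligible walked + # candidate roughly half of the time, the stumbled and the introduced candidate roughly a quarter each, + # with slack for random variation. +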
expected_stumble_range = range(1975, 2975) + returned_intro_candidate = 0 + expected_intro_range = range(1975, 2975) + for i in xrange(10000): + candidate = c.dispersy_get_walk_candidate() + + returned_walked_candidate += 1 if candidate.sock_addr[1] == 1 else 0 + returned_stumble_candidate += 1 if candidate.sock_addr[1] == 2 else 0 + returned_intro_candidate += 1 if candidate.sock_addr[1] == 3 else 0 + + assert returned_walked_candidate in expected_walked_range, returned_walked_candidate + assert returned_stumble_candidate in expected_stumble_range, returned_stumble_candidate + assert returned_intro_candidate in expected_intro_range, returned_intro_candidate + + @call_on_dispersy_thread + def test_merge_candidates(self): + c = DebugCommunity.create_community(self._dispersy, self._my_member) + + # let's make a list of all possible combinations which should be merged into one candidate + candidates = [] + candidates.append(c.create_candidate(("1.1.1.1", 1), False, ("192.168.0.1", 1), ("1.1.1.1", 1), u"unknown")) + candidates.append(c.create_candidate(("1.1.1.1", 2), False, ("192.168.0.1", 1), ("1.1.1.1", 2), u"symmetric-NAT")) + candidates.append(c.create_candidate(("1.1.1.1", 3), False, ("192.168.0.1", 1), ("1.1.1.1", 3), u"symmetric-NAT")) + candidates.append(c.create_candidate(("1.1.1.1", 4), False, ("192.168.0.1", 1), ("1.1.1.1", 4), u"unknown")) + + c.filter_duplicate_candidate(candidates[0]) + + expected = [candidates[0].wan_address] + + got = [] + for candidate in c._candidates.itervalues(): + got.append(candidate.wan_address) + + self.assertEquals(expected, got) diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_classification.py tribler-6.2.0/Tribler/dispersy/tests/test_classification.py --- tribler-6.2.0/Tribler/dispersy/tests/test_classification.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_classification.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,384 @@ +import logging +logger = logging.getLogger(__name__) + +import gc +import inspect +import unittest + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestClassification(DispersyTestFunc): + + @call_on_dispersy_thread + def test_reclassify_unloaded_community(self): + """ + Load a community, reclassify it, load all communities of that classification to check. 
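+ In this variant the community is never loaded: it exists only as a row in the community table when + reclassify_community is called.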
+ """ + class ClassTestA(DebugCommunity): + pass + + class ClassTestB(DebugCommunity): + pass + + # no communities should exist + self.assertEqual([ClassTestA.load_community(self._dispersy, master) for master in ClassTestA.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + self.assertEqual([ClassTestB.load_community(self._dispersy, master) for master in ClassTestB.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + + # create master member + master = self._dispersy.get_new_member(u"high") + + # create community + self._dispersy.database.execute(u"INSERT INTO community (master, member, classification) VALUES (?, ?, ?)", + (master.database_id, self._my_member.database_id, ClassTestA.get_classification())) + + # reclassify + community = self._dispersy.reclassify_community(master, ClassTestB) + self.assertIsInstance(community, ClassTestB) + self.assertEqual(community.cid, master.mid) + try: + classification, = self._dispersy.database.execute(u"SELECT classification FROM community WHERE master = ?", (master.database_id,)).next() + except StopIteration: + self.fail() + self.assertEqual(classification, ClassTestB.get_classification()) + + # cleanup + community.unload_community() + + @call_on_dispersy_thread + def test_reclassify_loaded_community(self): + """ + Load a community, reclassify it, load all communities of that classification to check. + """ + class ClassTestC(DebugCommunity): + pass + + class ClassTestD(DebugCommunity): + pass + + # no communities should exist + self.assertEqual([ClassTestC.load_community(self._dispersy, master) for master in ClassTestC.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + self.assertEqual([ClassTestD.load_community(self._dispersy, master) for master in ClassTestD.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + + # create community + community_c = ClassTestC.create_community(self._dispersy, self._my_member) + self.assertEqual(len(list(self._dispersy.database.execute(u"SELECT * FROM community WHERE classification = ?", (ClassTestC.get_classification(),)))), 1) + + # reclassify + community_d = self._dispersy.reclassify_community(community_c, ClassTestD) + self.assertIsInstance(community_d, ClassTestD) + self.assertEqual(community_c.cid, community_d.cid) + try: + classification, = self._dispersy.database.execute(u"SELECT classification FROM community WHERE master = ?", (community_c.master_member.database_id,)).next() + except StopIteration: + self.fail() + self.assertEqual(classification, ClassTestD.get_classification()) + + # cleanup + community_d.unload_community() + + @call_on_dispersy_thread + def test_load_no_communities(self): + """ + Try to load communities of a certain classification while there are no such communities. + """ + class ClassificationLoadNoCommunities(DebugCommunity): + pass + self.assertEqual([ClassificationLoadNoCommunities.load_community(self._dispersy, master) for master in ClassificationLoadNoCommunities.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + + @call_on_dispersy_thread + def test_load_one_communities(self): + """ + Try to load communities of a certain classification while there is exactly one such + community available. 
+ """ + class ClassificationLoadOneCommunities(DebugCommunity): + pass + + # no communities should exist + self.assertEqual([ClassificationLoadOneCommunities.load_community(self._dispersy, master) for master in ClassificationLoadOneCommunities.get_master_members(self._dispersy)], [], "Did you remove the database before running this testcase?") + + # create master member + master = self._dispersy.get_new_member(u"high") + + # create one community + self._dispersy.database.execute(u"INSERT INTO community (master, member, classification) VALUES (?, ?, ?)", + (master.database_id, self._my_member.database_id, ClassificationLoadOneCommunities.get_classification())) + + # load one community + communities = [ClassificationLoadOneCommunities.load_community(self._dispersy, master) for master in ClassificationLoadOneCommunities.get_master_members(self._dispersy)] + self.assertEqual(len(communities), 1) + self.assertIsInstance(communities[0], ClassificationLoadOneCommunities) + + # cleanup + communities[0].unload_community() + + @call_on_dispersy_thread + def test_load_two_communities(self): + """ + Try to load communities of a certain classification while there is exactly two such + community available. + """ + class LoadTwoCommunities(DebugCommunity): + pass + + # no communities should exist + self.assertEqual([LoadTwoCommunities.load_community(self._dispersy, master) for master in LoadTwoCommunities.get_master_members(self._dispersy)], []) + + masters = [] + # create two communities + community = LoadTwoCommunities.create_community(self._dispersy, self._my_member) + masters.append(community.master_member.public_key) + community.unload_community() + + community = LoadTwoCommunities.create_community(self._dispersy, self._my_member) + masters.append(community.master_member.public_key) + community.unload_community() + + # load two communities + self.assertEqual(sorted(masters), sorted(master.public_key for master in LoadTwoCommunities.get_master_members(self._dispersy))) + communities = [LoadTwoCommunities.load_community(self._dispersy, master) for master in LoadTwoCommunities.get_master_members(self._dispersy)] + self.assertEqual(sorted(masters), sorted(community.master_member.public_key for community in communities)) + self.assertEqual(len(communities), 2) + self.assertIsInstance(communities[0], LoadTwoCommunities) + self.assertIsInstance(communities[1], LoadTwoCommunities) + + # cleanup + communities[0].unload_community() + communities[1].unload_community() + + @unittest.skip("nosetests uses BufferingHandler to capture output. This handler keeps references to the community, breaking this test. Run nosetests --nologcapture --no-skip") + @call_on_dispersy_thread + def test_unloading_community(self): + """ + Test that calling community.unload_community() eventually results in a call to + community.__del__(). 
+ """ + class ClassificationUnloadingCommunity(DebugCommunity): + pass + + def check(verbose=False): + # using a function to ensure all local variables are removed (scoping) + + i = 0 + j = 0 + for x in gc.get_objects(): + if isinstance(x, ClassificationUnloadingCommunity): + i += 1 + for obj in gc.get_referrers(x): + j += 1 + if verbose: + logger.debug("%s", str(type(obj))) + try: + lines, lineno = inspect.getsourcelines(obj) + logger.debug("Check %d %s", j, [line.rstrip() for line in lines]) + except TypeError: + logger.debug("TypeError") + + logger.debug("%d referrers", j) + return i + + community = ClassificationUnloadingCommunity.create_community(self._dispersy, self._my_member) + master = community.master_member + cid = community.cid + del community + self.assertIsInstance(self._dispersy.get_community(cid), ClassificationUnloadingCommunity) + self.assertEqual(check(), 1) + + # unload the community + self._dispersy.get_community(cid).unload_community() + try: + self._dispersy.get_community(cid, auto_load=False) + self.fail() + except KeyError: + pass + + # must be garbage collected + wait = 10 + for i in range(wait): + gc.collect() + logger.debug("waiting... %d", wait - i) + if check() == 0: + break + else: + yield 1.0 + self.assertEqual(check(True), 0) + + # load the community for cleanup + community = ClassificationUnloadingCommunity.load_community(self._dispersy, master) + self.assertEqual(check(), 1) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_enable_autoload(self): + """ + Test enable autoload. + + - Create community + - Enable auto-load (should be enabled by default) + - Define auto load + - Unload community + - Send community message + - Verify that the community got auto-loaded + - Undefine auto load + """ + # create community + community = DebugCommunity.create_community(self._dispersy, self._my_member) + cid = community.cid + message = community.get_meta_message(u"full-sync-text") + + # create node + node = DebugNode(community) + node.init_socket() + node.init_my_member(candidate=False) + yield 0.555 + + logger.debug("verify auto-load is enabled (default)") + self.assertTrue(community.dispersy_auto_load) + yield 0.555 + + logger.debug("define auto load") + self._dispersy.define_auto_load(DebugCommunity) + yield 0.555 + + logger.debug("create wake-up message") + global_time = 10 + wakeup = node.encode_message(node.create_full_sync_text("Should auto-load", global_time)) + + logger.debug("unload community") + community.unload_community() + community = None + node.set_community(None) + try: + self._dispersy.get_community(cid, auto_load=False) + self.fail() + except KeyError: + pass + yield 0.555 + + logger.debug("send community message") + node.give_packet(wakeup) + yield 0.555 + + logger.debug("verify that the community got auto-loaded") + try: + community = self._dispersy.get_community(cid) + except KeyError: + self.fail() + # verify that the message was received + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertIn(global_time, times) + yield 0.555 + + logger.debug("undefine auto load") + self._dispersy.undefine_auto_load(DebugCommunity) + yield 0.555 + + logger.debug("cleanup") + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_enable_disable_autoload(self): + """ + Test enable disable autoload. + + - Create community + - Enable auto-load (should be enabled by default) + - Define auto load + - Unload community + - Send community message + - Verify that the community got auto-loaded + - Disable auto-load + - Send community message + - Verify that the community did NOT get auto-loaded + - Undefine auto load + """ + # create community + community = DebugCommunity.create_community(self._dispersy, self._my_member) + cid = community.cid + community_database_id = community.database_id + master_member = community.master_member + message = community.get_meta_message(u"full-sync-text") + + # create node + node = DebugNode(community) + node.init_socket() + node.init_my_member(candidate=False) + + logger.debug("verify auto-load is enabled (default)") + self.assertTrue(community.dispersy_auto_load) + + logger.debug("define auto load") + self._dispersy.define_auto_load(DebugCommunity) + + logger.debug("create wake-up message") + global_time = 10 + wakeup = node.encode_message(node.create_full_sync_text("Should auto-load", global_time)) + + logger.debug("unload community") + community.unload_community() + community = None + node.set_community(None) + try: + self._dispersy.get_community(cid, auto_load=False) + self.fail() + except KeyError: + pass + + logger.debug("send community message") + node.give_packet(wakeup) + + logger.debug("verify that the community got auto-loaded") + try: + community = self._dispersy.get_community(cid) + except KeyError: + self.fail() + # verify that the message was received + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertIn(global_time, times) + + logger.debug("disable auto-load") + community.dispersy_auto_load = False + self.assertFalse(community.dispersy_auto_load) + + logger.debug("create wake-up message") + node.set_community(community) + global_time = 11 + wakeup = node.encode_message(node.create_full_sync_text("Should auto-load", global_time)) + + logger.debug("unload community") + community.unload_community() + community = None + node.set_community(None) + try: + self._dispersy.get_community(cid, auto_load=False) + self.fail() + except KeyError: + pass + + logger.debug("send community message") + node.give_packet(wakeup) + + logger.debug("verify that the community did not get auto-loaded") + try: + self._dispersy.get_community(cid, auto_load=False) + self.fail() + except KeyError: + pass + # verify that the message was NOT received + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community_database_id, node.my_member.database_id, message.database_id))] + self.assertNotIn(global_time, times) + + logger.debug("undefine auto load") + self._dispersy.undefine_auto_load(DebugCommunity) + + logger.debug("cleanup") + community = DebugCommunity.load_community(self._dispersy, master_member) + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_crypto.py tribler-6.2.0/Tribler/dispersy/tests/test_crypto.py --- tribler-6.2.0/Tribler/dispersy/tests/test_crypto.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_crypto.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,39 @@ +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestCrypto(DispersyTestFunc): + + @call_on_dispersy_thread + def test_invalid_public_key(self): + """ + SELF receives a dispersy-identity message containing an invalid public-key. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member(candidate=False, identity=False) + + # create dispersy-identity message + global_time = 10 + message = node.create_dispersy_identity(global_time) + + # replace the valid public-key with an invalid one + public_key = node.my_member.public_key + self.assertIn(public_key, message.packet) + invalid_packet = message.packet.replace(public_key, "I" * len(public_key)) + self.assertNotEqual(message.packet, invalid_packet) + + # give invalid message to SELF + node.give_packet(invalid_packet) + + # ensure that the message was not stored in the database + ids = list(self._dispersy.database.execute(u"SELECT id FROM sync WHERE community = ? AND packet = ?", + (community.database_id, buffer(invalid_packet)))) + self.assertEqual(ids, []) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_destroycommunity.py tribler-6.2.0/Tribler/dispersy/tests/test_destroycommunity.py --- tribler-6.2.0/Tribler/dispersy/tests/test_destroycommunity.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_destroycommunity.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,45 @@ +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestDestroyCommunity(DispersyTestFunc): + # TODO: test that after a hard-kill, all new incoming messages are dropped. + # TODO: test that after a hard-kill, nothing is added to the candidate table anymore + + @call_on_dispersy_thread + def test_hard_kill(self): + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + yield 0.555 + + # should be no messages from NODE yet + times = list(self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))) + self.assertEqual(times, []) + + # send a message + global_time = 10 + node.give_message(node.create_full_sync_text("should be accepted (1)", global_time)) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(len(times), 1) + self.assertIn(global_time, times) + + # destroy the community + community.create_dispersy_destroy_community(u"hard-kill") + yield 0.555 + + # node should receive the dispersy-destroy-community message + _, message = node.receive_message(message_names=[u"dispersy-destroy-community"]) + self.assertFalse(message.payload.is_soft_kill) + self.assertTrue(message.payload.is_hard_kill) + + # the malicious_proof table must be empty + self.assertEqual(list(self._dispersy.database.execute(u"SELECT * FROM malicious_proof WHERE community = ?", (community.database_id,))), []) + + # the database should have been cleaned + # todo diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_dynamicsettings.py tribler-6.2.0/Tribler/dispersy/tests/test_dynamicsettings.py --- tribler-6.2.0/Tribler/dispersy/tests/test_dynamicsettings.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_dynamicsettings.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,290 @@ +import logging +logger = logging.getLogger(__name__) + +from ..resolution import PublicResolution, LinearResolution +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestDynamicSettings(DispersyTestFunc): + + @call_on_dispersy_thread + def test_default_resolution(self): + """ + Ensure that the default resolution policy is used first. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"dynamic-resolution-text") + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # check default policy + policy, proof = community.timeline.get_resolution_policy(meta, community.global_time) + self.assertIsInstance(policy, PublicResolution) + self.assertEqual(proof, []) + + # NODE creates a message (should allow, because the default policy is PublicResolution) + global_time = 10 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, policy.implement())) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_change_resolution(self): + """ + Change the resolution policy from default to linear and to public again. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"dynamic-resolution-text") + public = meta.resolution.policies[0] + linear = meta.resolution.policies[1] + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # check default policy + public_policy, proof = community.timeline.get_resolution_policy(meta, community.global_time + 1) + self.assertIsInstance(public_policy, PublicResolution) + self.assertEqual(proof, []) + + # change and check policy + message = community.create_dispersy_dynamic_settings([(meta, linear)]) + linear_policy, proof = community.timeline.get_resolution_policy(meta, community.global_time + 1) + self.assertIsInstance(linear_policy, LinearResolution) + self.assertEqual(proof, [message]) + + # NODE creates a message (should allow) + global_time = message.distribution.global_time + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public_policy.implement())) + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # NODE creates a message (should drop) + global_time += 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, linear_policy.implement())) + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + pass + else: + self.fail("must not accept the message") + + # change and check policy + message = community.create_dispersy_dynamic_settings([(meta, public)]) + public_policy, proof = community.timeline.get_resolution_policy(meta, community.global_time + 1) + self.assertIsInstance(public_policy, PublicResolution) + self.assertEqual(proof, [message]) + + # NODE creates a message (should drop) + global_time = message.distribution.global_time + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public_policy.implement())) + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + pass + else: + self.fail("must not accept the message") + + # NODE creates a message (should allow) + global_time += 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public_policy.implement())) + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_change_resolution_undo(self): + """ + Change the resolution policy from default to linear, the messages already accepted should be + undone + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"dynamic-resolution-text") + public = meta.resolution.policies[0] + linear = meta.resolution.policies[1] + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # create policy change, but do not yet process + community.update_global_time(10) + self.assertEqual(community.global_time, 10) + policy_linear = community.create_dispersy_dynamic_settings([(meta, linear)], store=False, update=False, forward=False) + self.assertEqual(policy_linear.distribution.global_time, 11) # hence the policy starts at 12 + + community.update_global_time(20) + self.assertEqual(community.global_time, 20) + policy_public = community.create_dispersy_dynamic_settings([(meta, public)], store=False, update=False, forward=False) + self.assertEqual(policy_public.distribution.global_time, 21) # hence the policy starts at 22 + + # because above policy changes were not applied (i.e. update=False) everything is still + # PublicResolution without any proof + for global_time in range(1, 32): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, PublicResolution) + self.assertEqual(proof, []) + + # NODE creates a message (should allow) + global_time = 25 + text_message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public.implement())) + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, text_message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + logger.debug("-- apply linear") + + # process the policy change + node.give_message(policy_linear) + + for global_time in range(1, 12): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, PublicResolution) + self.assertEqual(proof, []) + for global_time in range(12, 32): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, LinearResolution) + self.assertEqual([message.packet.encode("HEX") for message in proof], [policy_linear.packet.encode("HEX")]) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, text_message.distribution.global_time)).next() + except StopIteration: + self.fail("the message must be in the database with undone > 0") + self.assertGreater(undone, 0) + + logger.debug("-- apply public") + + # process the policy change + node.give_message(policy_public) + + for global_time in range(1, 12): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, PublicResolution) + self.assertEqual(proof, []) + for global_time in range(12, 22): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, LinearResolution) + self.assertEqual([message.packet for message in proof], [policy_linear.packet]) + for global_time in range(22, 32): + policy, proof = community.timeline.get_resolution_policy(meta, global_time) + self.assertIsInstance(policy, PublicResolution) + self.assertEqual([message.packet for message in proof], [policy_public.packet]) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, text_message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_wrong_resolution(self): + """ + For consistency we should not accept messages that have the wrong policy. + + Hence, when a message is created by a member with linear permission, but the community is + set to public resolution, the message should NOT be accepted. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"dynamic-resolution-text") + public = meta.resolution.policies[0] + linear = meta.resolution.policies[1] + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # set linear policy + community.create_dispersy_dynamic_settings([(meta, linear)]) + + # give permission to node + community.create_dispersy_authorize([(self._dispersy.get_member(node.my_member.public_key), meta, u"permit")]) + + # NODE creates a message (should allow, linear resolution and we have permission) + global_time = community.global_time + 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, linear.implement())) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # NODE creates a message (should drop because we use public resolution while linear is + # currently configured) + global_time = community.global_time + 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public.implement())) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + pass + else: + self.fail("must NOT accept the message") + + # set public policy + community.create_dispersy_dynamic_settings([(meta, public)]) + + # NODE creates a message (should allow, we use public resolution and that is the active policy) + global_time = community.global_time + 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, public.implement())) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + self.fail("must accept the message") + self.assertEqual(undone, 0, "must accept the message") + + # NODE creates a message (should drop because we use linear resolution while public is + # currently configured) + global_time = community.global_time + 1 + message = node.give_message(node.create_dynamic_resolution_text("Dprint=True", global_time, linear.implement())) + + try: + undone, = self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time)).next() + except StopIteration: + pass + else: + self.fail("must NOT accept the message") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_identitalpayload.py tribler-6.2.0/Tribler/dispersy/tests/test_identitalpayload.py --- tribler-6.2.0/Tribler/dispersy/tests/test_identitalpayload.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_identitalpayload.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,116 @@ +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestIdenticalPayload(DispersyTestFunc): + + @call_on_dispersy_thread + def test_incoming__drop_first(self): + """ + NODE creates two messages with the same community/member/global-time triplets. + + - One of the two should be dropped + - Both binary signatures should end up in the bloom filter (temporarily) (NO LONGER THE CASE) + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + yield 0.555 + + # create messages + global_time = 10 + messages = [] + messages.append(node.create_full_sync_text("Identical payload message", global_time)) + messages.append(node.create_full_sync_text("Identical payload message", global_time)) + self.assertNotEqual(messages[0].packet, messages[1].packet, "the signature must make the messages unique") + + # sort. we now know that the first message must be dropped + messages.sort(key=lambda x: x.packet) + + # give messages in different batches + node.give_message(messages[0]) + yield 0.555 + node.give_message(messages[1]) + yield 0.555 + + # only one message may be in the database + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, global_time)).next() + except StopIteration: + self.fail("neither messages is stored") + + packet = str(packet) + self.assertEqual(packet, messages[1].packet) + + # 03/11/11 Boudewijn: we no longer store the ranges in memory, hence only the new packet + # will be in the bloom filter + # + # both packets must be in the bloom filter + # assert_(len(community._sync_ranges) == 1) + # for message in messages: + # for bloom_filter in community._sync_ranges[0].bloom_filters: + # assert_(message.packet in bloom_filter) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_incoming__drop_second(self): + """ + NODE creates two messages with the same community/member/global-time triplets. + + - One of the two should be dropped + - Both binary signatures should end up in the bloom filter (temporarily) (NO LONGER THE CASE) + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + yield 0.555 + + # create messages + global_time = 10 + messages = [] + messages.append(node.create_full_sync_text("Identical payload message", global_time)) + messages.append(node.create_full_sync_text("Identical payload message", global_time)) + self.assertNotEqual(messages[0].packet, messages[1].packet, "the signature must make the messages unique") + + # sort. we now know that the first message must be dropped + messages.sort(key=lambda x: x.packet) + + # give messages in different batches + node.give_message(messages[1]) + yield 0.555 + node.give_message(messages[0]) + yield 0.555 + + # only one message may be in the database + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, global_time)).next() + except StopIteration: + self.fail("neither messages is stored") + + packet = str(packet) + self.assertEqual(packet, messages[1].packet) + + # 03/11/11 Boudewijn: we no longer store the ranges in memory, hence only the new packet + # will be in the bloom filter + # + # both packets must be in the bloom filter + # assert_(len(community._sync_ranges) == 1) + # for message in messages: + # for bloom_filter in community._sync_ranges[0].bloom_filters: + # assert_(message.packet in bloom_filter) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_member.py tribler-6.2.0/Tribler/dispersy/tests/test_member.py --- tribler-6.2.0/Tribler/dispersy/tests/test_member.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_member.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,103 @@ +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestMemberTag(DispersyTestFunc): + + @call_on_dispersy_thread + def test_ignore_test(self): + """ + Test the must_ignore = True feature. + + When we ignore a specific member we will still accept messages from that member and store + them in our database. However, the GUI may choose not to display any messages from them. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + self.assertEqual(community.fetch_packets(meta.name), []) + + # send a message + global_time = 10 + messages = [] + messages.append(node.give_message(node.create_full_sync_text("should be accepted (1)", global_time))) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # we now tag the member as ignore + self._dispersy.get_member(node.my_member.public_key).must_ignore = True + + tags, = self._dispersy.database.execute(u"SELECT tags FROM member WHERE id = ?", (node.my_member.database_id,)).next() + self.assertIn(u"ignore", tags.split(",")) + + # send a message and ensure it is in the database (ignore still means it must be stored in + # the database) + global_time = 20 + messages.append(node.give_message(node.create_full_sync_text("should be accepted (2)", global_time))) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # we now tag the member not to ignore + self._dispersy.get_member(node.my_member.public_key).must_ignore = False + + # send a message + global_time = 30 + messages.append(node.give_message(node.create_full_sync_text("should be accepted (3)", global_time))) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_blacklist_test(self): + """ + Test the must_blacklist = True feature. + + When we 'blacklist' a specific member we will no longer accept or store messages from that + member. No callback will be given to the community code. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + self.assertEqual(community.fetch_packets(meta.name), []) + + # send a message + global_time = 10 + messages = [] + messages.append(node.give_message(node.create_full_sync_text("should be accepted (1)", global_time))) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # we now tag the member as blacklist + self._dispersy.get_member(node.my_member.public_key).must_blacklist = True + + tags, = self._dispersy.database.execute(u"SELECT tags FROM member WHERE id = ?", (node.my_member.database_id,)).next() + self.assertIn(u"blacklist", tags.split(",")) + + # send a message and ensure it is not in the database + global_time = 20 + node.give_message(node.create_full_sync_text("should NOT be accepted (2)", global_time)) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # we now tag the member not to blacklist + self._dispersy.get_member(node.my_member.public_key).must_blacklist = False + + # send a message + global_time = 30 + messages.append(node.give_message(node.create_full_sync_text("should be accepted (3)", global_time))) + self.assertEqual([message.packet for message in messages], community.fetch_packets(meta.name)) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_missingmessage.py tribler-6.2.0/Tribler/dispersy/tests/test_missingmessage.py --- tribler-6.2.0/Tribler/dispersy/tests/test_missingmessage.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_missingmessage.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,121 @@ +import logging +logger = logging.getLogger(__name__) + +from random import shuffle + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestMissingMessage(DispersyTestFunc): + + @call_on_dispersy_thread + def test_single_request(self): + """ + SELF generates a few messages and NODE requests one of them. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # create messages + messages = [] + for i in xrange(10): + messages.append(community.create_full_sync_text("Message #%d" % i)) + + # ensure we don't obtain the messages from the socket cache + node.drop_packets() + + for message in messages: + # request messages + node.give_message(node.create_dispersy_missing_message(community.my_member, [message.distribution.global_time], 25, community.my_candidate)) + yield 0.11 + + # receive response + _, response = node.receive_message(message_names=[message.name]) + self.assertEqual(response.distribution.global_time, message.distribution.global_time) + self.assertEqual(response.payload.text, message.payload.text) + logger.debug("ok @%d", response.distribution.global_time) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_single_request_out_of_order(self): + """ + SELF generates a few messages and NODE requests one of them. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # create messages + messages = [] + for i in xrange(10): + messages.append(community.create_full_sync_text("Message #%d" % i)) + + # ensure we don't obtain the messages from the socket cache + node.drop_packets() + + shuffle(messages) + for message in messages: + # request messages + node.give_message(node.create_dispersy_missing_message(community.my_member, [message.distribution.global_time], 25, community.my_candidate)) + yield 0.11 + + # receive response + _, response = node.receive_message(message_names=[message.name]) + self.assertEqual(response.distribution.global_time, message.distribution.global_time) + self.assertEqual(response.payload.text, message.payload.text) + logger.debug("ok @%d", response.distribution.global_time) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_triple_request(self): + """ + SELF generates a few messages and NODE requests three of them. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # create messages + messages = [] + for i in xrange(10): + messages.append(community.create_full_sync_text("Message #%d" % i)) + meta = messages[0].meta + + # ensure we don't obtain the messages from the socket cache + node.drop_packets() + + # request messages + global_times = [messages[index].distribution.global_time for index in [2, 4, 6]] + node.give_message(node.create_dispersy_missing_message(community.my_member, global_times, 25, community.my_candidate)) + yield 0.11 + + # receive response + responses = [] + _, response = node.receive_message(message_names=[meta.name]) + responses.append(response) + _, response = node.receive_message(message_names=[meta.name]) + responses.append(response) + _, response = node.receive_message(message_names=[meta.name]) + responses.append(response) + + self.assertEqual(sorted(response.distribution.global_time for response in responses), global_times) + logger.debug("ok @%s", global_times) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_nat_detection.py tribler-6.2.0/Tribler/dispersy/tests/test_nat_detection.py --- tribler-6.2.0/Tribler/dispersy/tests/test_nat_detection.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_nat_detection.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,27 @@ +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread +from .debugcommunity.community import DebugCommunity + +class TestNATDetection(DispersyTestFunc): + """ + Tests NAT detection. + + These unit tests should cover all methods which are related to detecting the NAT type of a peer. + """ + + @call_on_dispersy_thread + def test_symmetric_vote(self): + """ + After receiving two votes from different candidates for different port numbers, a peer + must change it's connection type to summetric-NAT. + """ + c = DebugCommunity.create_community(self._dispersy, self._my_member) + + for i in range(2): + address = ("127.0.0.2", i + 1) + candidate = c.create_candidate(address, False, address, address, u"unknown") + self._dispersy.wan_address_vote(("127.0.0.1", i + 1), candidate) + + assert self._dispersy._connection_type == u"symmetric-NAT" + +if __name__ == "__main__": + unittest.main() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py --- tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,61 @@ +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestNeighborhood(DispersyTestFunc): + + def test_forward_1(self): + return self.forward(1) + + def test_forward_10(self): + return self.forward(10) + + def test_forward_2(self): + return self.forward(2) + + def test_forward_3(self): + return self.forward(3) + + def test_forward_20(self): + return self.forward(20) + + @call_on_dispersy_thread + def forward(self, node_count): + """ + SELF should forward created messages to its neighbors. 
diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py
--- tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/tests/test_neighborhood.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,61 @@
+from .debugcommunity.community import DebugCommunity
+from .debugcommunity.node import DebugNode
+from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread
+
+
+class TestNeighborhood(DispersyTestFunc):
+
+    def test_forward_1(self):
+        return self.forward(1)
+
+    def test_forward_10(self):
+        return self.forward(10)
+
+    def test_forward_2(self):
+        return self.forward(2)
+
+    def test_forward_3(self):
+        return self.forward(3)
+
+    def test_forward_20(self):
+        return self.forward(20)
+
+    @call_on_dispersy_thread
+    def forward(self, node_count):
+        """
+        SELF should forward created messages to its neighbors.
+
+        - Multiple (NODE_COUNT) nodes connect to SELF
+        - SELF creates a new message
+        - At most 10 NODES should receive the message, each at most once
+        """
+        community = DebugCommunity.create_community(self._dispersy, self._my_member)
+        meta = community.get_meta_message(u"full-sync-text")
+
+        # check configuration
+        self.assertEqual(meta.destination.node_count, 10)
+
+        # provide SELF with a neighbourhood
+        nodes = [DebugNode(community) for _ in xrange(node_count)]
+        for node in nodes:
+            node.init_socket()
+            node.init_my_member()
+
+        # SELF creates a message
+        message = community.create_full_sync_text("Hello World!")
+        yield 0.1
+
+        # ensure sufficient NODES received the message
+        forwarded_node_count = 0
+        for node in nodes:
+            forwarded = [m for _, m in node.receive_messages(message_names=[u"full-sync-text"])]
+            self.assertIn(len(forwarded), (0, 1))
+            if len(forwarded) == 1:
+                self.assertEqual(forwarded[0].packet, message.packet)
+                forwarded_node_count += 1
+
+        self.assertEqual(forwarded_node_count, min(node_count, meta.destination.node_count))
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_overlay.py tribler-6.2.0/Tribler/dispersy/tests/test_overlay.py
--- tribler-6.2.0/Tribler/dispersy/tests/test_overlay.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/tests/test_overlay.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,182 @@
+import logging
+logger = logging.getLogger(__name__)
+summary = logging.getLogger("test-overlay-summary")
+
+from os import environ
+from pprint import pformat
+from time import time
+from unittest import skipUnless
+from collections import defaultdict
+
+from ..conversion import DefaultConversion
+from .debugcommunity.community import DebugCommunity
+from .debugcommunity.conversion import DebugCommunityConversion
+from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread
+
+
+class TestOverlay(DispersyTestFunc):
+
+    @skipUnless(environ.get("TEST_OVERLAY_ALL_CHANNEL") == "yes", "This 'unittest' tests the health of a live overlay, as such, this is not part of the code review process")
+    def test_all_channel_community(self):
+        return self.check_live_overlay(cid_hex="8164f55c2f828738fa779570e4605a81fec95c9d",
+                                       version="\x01",
+                                       enable_fast_walker=False)
+
+    @skipUnless(environ.get("TEST_OVERLAY_BARTER") == "yes", "This 'unittest' tests the health of a live overlay, as such, this is not part of the code review process")
+    def test_barter_community(self):
+        return self.check_live_overlay(cid_hex="4fe1172862c649485c25b3d446337a35f389a2a2",
+                                       version="\x01",
+                                       enable_fast_walker=False)
+
+    @skipUnless(environ.get("TEST_OVERLAY_SEARCH") == "yes", "This 'unittest' tests the health of a live overlay, as such, this is not part of the code review process")
+    def test_search_community(self):
+        return self.check_live_overlay(cid_hex="2782dc9253cef6cc9272ee8ed675c63743c4eb3a",
+                                       version="\x01",
+                                       enable_fast_walker=True)
+
+    @call_on_dispersy_thread
+    def check_live_overlay(self, cid_hex, version, enable_fast_walker):
+        class Conversion(DebugCommunityConversion):
+            # there are overlays that modify the introduction request, ensure that the returned offset 'consumed' all
+            # bytes in the packet
+            def _decode_introduction_request(self, placeholder, offset, data):
+                _, payload = super(Conversion, self)._decode_introduction_request(placeholder, offset, data)
+                return len(data), payload
+
+        class Community(DebugCommunity):
+            def __init__(self, dispersy, master):
+                super(Community, self).__init__(dispersy, master)
+                self.dispersy.callback.register(self.fast_walker)
+
+            def initiate_conversions(self):
+                return [DefaultConversion(self), Conversion(self, version)]
+
+            def dispersy_claim_sync_bloom_filter(self, request_cache):
+                # we only want to walk in the community, not exchange data
+                return None
+
+            def fast_walker(self):
+                for _ in xrange(10):
+                    now = time()
+
+                    # count -everyone- that is active (i.e. walk or stumble)
+                    active_candidates = list(self.dispersy_yield_verified_candidates())
+                    if len(active_candidates) > 20:
+                        logger.debug("there are %d active non-bootstrap candidates available, prematurely quitting fast walker", len(active_candidates))
+                        break
+
+                    # request bootstrap peers that are eligible
+                    eligible_candidates = [candidate
+                                           for candidate
+                                           in self._dispersy.bootstrap_candidates
+                                           if candidate.is_eligible_for_walk(now)]
+                    for count, candidate in enumerate(eligible_candidates[:len(eligible_candidates) / 2], 1):
+                        logger.debug("%d/%d extra walk to %s", count, len(eligible_candidates), candidate)
+                        self.create_introduction_request(candidate, allow_sync=False)
+
+                    # request peers that are eligible
+                    eligible_candidates = [candidate
+                                           for candidate
+                                           in self._candidates.itervalues()
+                                           if candidate.is_eligible_for_walk(now)]
+                    for count, candidate in enumerate(eligible_candidates[:len(eligible_candidates) / 2], 1):
+                        logger.debug("%d/%d extra walk to %s", count, len(eligible_candidates), candidate)
+                        self.create_introduction_request(candidate, allow_sync=False)
+
+                    # wait for NAT hole punching
+                    yield 1.0
+
+                summary.debug("finished")
+
+        class Info(object):
+            pass
+
+        assert isinstance(cid_hex, str)
+        assert len(cid_hex) == 40
+        assert isinstance(enable_fast_walker, bool)
+        cid = cid_hex.decode("HEX")
+
+        self._dispersy.statistics.enable_debug_statistics(True)
+        community = Community.join_community(self._dispersy, self._dispersy.get_temporary_member_from_id(cid), self._my_member)
+        summary.info(community.cid.encode("HEX"))
+
+        history = []
+        begin = time()
+        for _ in xrange(60 * 15):
+            yield 1.0
+            now = time()
+            info = Info()
+            info.diff = now - begin
+            info.candidates = [(candidate, candidate.get_category(now)) for candidate in community._candidates.itervalues()]
+            info.verified_candidates = [(candidate, candidate.get_category(now)) for candidate in community.dispersy_yield_verified_candidates()]
+            info.bootstrap_attempt = self._dispersy.statistics.walk_bootstrap_attempt
+            info.bootstrap_success = self._dispersy.statistics.walk_bootstrap_success
+            info.bootstrap_ratio = 100.0 * info.bootstrap_success / info.bootstrap_attempt if info.bootstrap_attempt else 0.0
+            info.candidate_attempt = self._dispersy.statistics.walk_attempt - self._dispersy.statistics.walk_bootstrap_attempt
+            info.candidate_success = self._dispersy.statistics.walk_success - self._dispersy.statistics.walk_bootstrap_success
+            info.candidate_ratio = 100.0 * info.candidate_success / info.candidate_attempt if info.candidate_attempt else 0.0
+            info.incoming_walks = self._dispersy.statistics.walk_advice_incoming_request
+            history.append(info)
+
+            summary.info("after %.1f seconds there are %d verified candidates [w%d:s%d:i%d:n%d]",
+                         info.diff,
+                         len([_ for _, category in info.candidates if category in (u"walk", u"stumble")]),
+                         len([_ for _, category in info.candidates if category == u"walk"]),
+                         len([_ for _, category in info.candidates if category == u"stumble"]),
+                         len([_ for _, category in info.candidates if
category == u"intro"]), + len([_ for _, category in info.candidates if category == u"none"])) + summary.debug("bootstrap walking: %d/%d ~%.1f%%", info.bootstrap_success, info.bootstrap_attempt, info.bootstrap_ratio) + summary.debug("candidate walking: %d/%d ~%.1f%%", info.candidate_success, info.candidate_attempt, info.candidate_ratio) + + helper_requests = defaultdict(lambda: defaultdict(int)) + helper_responses = defaultdict(lambda: defaultdict(int)) + + for destination, requests in self._dispersy.statistics.outgoing_introduction_request.iteritems(): + responses = self._dispersy.statistics.incoming_introduction_response[destination] + + # who introduced me to DESTINATION? + for helper, introductions in self._dispersy.statistics.received_introductions.iteritems(): + if destination in introductions: + helper_requests[helper][destination] = requests + helper_responses[helper][destination] = responses + + l = [(100.0 * sum(helper_responses[helper].itervalues()) / sum(helper_requests[helper].itervalues()), + sum(helper_requests[helper].itervalues()), + sum(helper_responses[helper].itervalues()), + helper_requests[helper], + helper_responses[helper], + helper) + for helper + in helper_requests] + + for ratio, req, res, req_dict, res_dict, helper, in sorted(l): + summary.debug("%.1f%% %3d %3d %15s:%-4d #%d %s", ratio, req, res, helper[0], helper[1], + len(req_dict), + "; ".join("%s:%d:%d/%d" % (addr[0], addr[1], res_dict[addr], req_dict[addr]) + for addr + in req_dict)) + + self._dispersy.statistics.update() + summary.debug("\n%s", pformat(self._dispersy.statistics.get_dict())) + + # write graph statistics + handle = open("%s_connections.txt" % cid_hex, "w+") + handle.write("TIME VERIFIED_CANDIDATES WALK_CANDIDATES STUMBLE_CANDIDATES INTRO_CANDIDATES NONE_CANDIDATES B_ATTEMPTS B_SUCCESSES C_ATTEMPTS C_SUCCESSES INCOMING_WALKS\n") + for info in history: + handle.write("%f %d %d %d %d %d %d %d %d %d %d\n" % ( + info.diff, + len(info.verified_candidates), + len([_ for _, category in info.candidates if category == u"walk"]), + len([_ for _, category in info.candidates if category == u"stumble"]), + len([_ for _, category in info.candidates if category == u"intro"]), + len([_ for _, category in info.candidates if category == u"none"]), + info.bootstrap_attempt, + info.bootstrap_success, + info.candidate_attempt, + info.candidate_success, + info.incoming_walks)) + + # determine test success or failure + average_verified_candidates = 1.0 * sum(len(info.verified_candidates) for info in history) / len(history) + summary.debug("Average verified candidates: %.1f", average_verified_candidates) + self.assertGreater(average_verified_candidates, 10.0) diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_pruning.py tribler-6.2.0/Tribler/dispersy/tests/test_pruning.py --- tribler-6.2.0/Tribler/dispersy/tests/test_pruning.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_pruning.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,269 @@ +import logging +logger = logging.getLogger(__name__) + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestPruning(DispersyTestFunc): + + @call_on_dispersy_thread + def test_local_creation_causes_pruning(self): + """ + SELF creates messages that should be properly pruned. + + - SELF creates 10 pruning messages [1:10]. These should be active. + - SELF creates 10 pruning messages [11:20]. 
These new messages should be active, while
+          [1:10] should become inactive.
+        - SELF creates 10 pruning messages [21:30]. These new messages should be active, while
+          [1:10] should be pruned and [11:20] should become inactive.
+        """
+        community = DebugCommunity.create_community(self._dispersy, self._my_member)
+        meta = community.get_meta_message(u"full-sync-global-time-pruning-text")
+
+        # check settings
+        self.assertEqual(meta.distribution.pruning.inactive_threshold, 10, "check message configuration")
+        self.assertEqual(meta.distribution.pruning.prune_threshold, 20, "check message configuration")
+
+        # create 10 pruning messages
+        messages = [community.create_full_sync_global_time_pruning_text("Hello World #%d" % i, forward=False) for i in xrange(0, 10)]
+        self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active")
+
+        # create 10 pruning messages
+        inactive = messages
+        messages = [community.create_full_sync_global_time_pruning_text("Hello World #%d" % i, forward=False) for i in xrange(10, 20)]
+        self.assertTrue(all(message.distribution.pruning.is_inactive() for message in inactive), "all messages should be inactive")
+        self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active")
+
+        # create 10 pruning messages
+        pruned = inactive
+        inactive = messages
+        messages = [community.create_full_sync_global_time_pruning_text("Hello World #%d" % i, forward=False) for i in xrange(20, 30)]
+        self.assertTrue(all(message.distribution.pruning.is_pruned() for message in pruned), "all messages should be pruned")
+        self.assertTrue(all(message.distribution.pruning.is_inactive() for message in inactive), "all messages should be inactive")
+        self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active")
+
+        # pruned messages should no longer exist in the database
+        for message in pruned:
+            try:
+                self._dispersy.database.execute(u"SELECT * FROM sync WHERE id = ?", (message.packet_id,)).next()
+            except StopIteration:
+                pass
+            else:
+                self.fail("Message should not be in the database")
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
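
The state machine behind these assertions, sketched with the thresholds that
"full-sync-global-time-pruning-text" is configured with in DebugCommunity
(the function name and exact boundary conditions are assumptions; the real
logic lives in the message's global-time pruning policy):

    def pruning_state(message_global_time, community_global_time,
                      inactive_threshold=10, prune_threshold=20):
        # a message "ages" as the community global time moves past it
        age = community_global_time - message_global_time
        if age < inactive_threshold:
            return "active"     # stored and offered during sync
        if age < prune_threshold:
            return "inactive"   # still stored, but no longer offered
        return "pruned"         # removed from the database entirely

Note that any new message raises the community global time, which is why the
next test can age the pruning messages by creating plain full-sync-text
messages instead.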
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-global-time-pruning-text") + + # check settings + self.assertEqual(meta.distribution.pruning.inactive_threshold, 10, "check message configuration") + self.assertEqual(meta.distribution.pruning.prune_threshold, 20, "check message configuration") + + # create 10 pruning messages + messages = [community.create_full_sync_global_time_pruning_text("Hello World #%d" % i, forward=False) for i in xrange(0, 10)] + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active") + + # create 10 normal messages + _ = [community.create_full_sync_text("Hello World #%d" % i, forward=False) for i in xrange(10, 20)] + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in messages), "all messages should be inactive") + + # create 10 normal messages + _ = [community.create_full_sync_text("Hello World #%d" % i, forward=False) for i in xrange(20, 30)] + self.assertTrue(all(message.distribution.pruning.is_pruned() for message in messages), "all messages should be pruned") + + # pruned messages should no longer exist in the database + for message in messages: + try: + self._dispersy.database.execute(u"SELECT * FROM sync WHERE id = ?", (message.packet_id,)).next() + except StopIteration: + pass + else: + self.fail("Message should not be in the database") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_remote_creation_causes_pruning(self): + """ + NODE creates messages that should cause proper pruning on SELF. + + - NODE creates 10 pruning messages [1:10] and gives them to SELF. These should be active. + - NODE creates 10 pruning messages [11:20] and gives them to SELF. These new messages should + be active, while [1:10] should become inactive. + - NODE creates 10 pruning messages [21:30] and gives them to SELF. These new messages should + be active, while [1:10] should become pruned and [11:20] should become inactive. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-global-time-pruning-text") + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # check settings + self.assertEqual(meta.distribution.pruning.inactive_threshold, 10, "check message configuration") + self.assertEqual(meta.distribution.pruning.prune_threshold, 20, "check message configuration") + + # create 10 pruning messages + messages = [node.create_full_sync_global_time_pruning_text("Hello World #%d" % i, i + 10) for i in xrange(0, 10)] + node.give_messages(messages) + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active") + + # create 10 pruning messages + inactive = messages + messages = [node.create_full_sync_global_time_pruning_text("Hello World #%d" % i, i + 10) for i in xrange(10, 20)] + node.give_messages(messages) + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in inactive), "all messages should be inactive") + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active") + + # create 10 pruning messages + pruned = inactive + inactive = messages + messages = [node.create_full_sync_global_time_pruning_text("Hello World #%d" % i, i + 10) for i in xrange(20, 30)] + node.give_messages(messages) + self.assertTrue(all(message.distribution.pruning.is_pruned() for message in pruned), "all messages should be pruned") + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in inactive), "all messages should be inactive") + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active") + + # pruned messages should no longer exist in the database + for message in pruned: + try: + self._dispersy.database.execute(u"SELECT * FROM sync WHERE id = ?", (message.packet_id,)).next() + except StopIteration: + pass + else: + self.fail("Message should not be in the database") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_remote_creation_of_other_messages_causes_pruning(self): + """ + NODE creates messages that should cause proper pruning on SELF. + + - NODE creates 10 pruning messages [1:10] and gives them to SELF. These should be active. + - NODE creates 10 normal messages [11:20] and gives them to SELF. The pruning messages [1:10] + should become inactive. + - NODE creates 10 normal messages [21:30] and give them to SELF. The pruning messages [1:10] + should become pruned. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-global-time-pruning-text") + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # check settings + self.assertEqual(meta.distribution.pruning.inactive_threshold, 10, "check message configuration") + self.assertEqual(meta.distribution.pruning.prune_threshold, 20, "check message configuration") + + # create 10 pruning messages + messages = [node.create_full_sync_global_time_pruning_text("Hello World #%d" % i, i + 10) for i in xrange(0, 10)] + node.give_messages(messages) + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages), "all messages should be active") + + # create 10 normal messages + _ = [node.create_full_sync_text("Hello World #%d" % i, i + 10) for i in xrange(10, 20)] + node.give_messages(_) + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in messages), "all messages should be inactive") + + # create 10 normal messages + _ = [node.create_full_sync_text("Hello World #%d" % i, i + 10) for i in xrange(20, 30)] + node.give_messages(_) + self.assertTrue(all(message.distribution.pruning.is_pruned() for message in messages), "all messages should be pruned") + + # pruned messages should no longer exist in the database + for message in messages: + try: + self._dispersy.database.execute(u"SELECT * FROM sync WHERE id = ?", (message.packet_id,)).next() + except StopIteration: + pass + else: + self.fail("Message should not be in the database") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_sync_response_response_filtering_inactive(self): + """ + Testing the bloom filter sync. + + - SELF creates 20 pruning messages [1:20]. Messages [1:10] will be inactive and [11:20] will + be active. + - NODE asks for a sync and receives the active messages [11:20]. + - SELF creates 5 normal messages [21:25]. Messages [1:5] will be pruned, [6:15] will become + inactive, and [16:20] will become active. + - NODE asks for a sync and received the active messages [16:20]. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"full-sync-global-time-pruning-text") + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # check settings + self.assertEqual(meta.distribution.pruning.inactive_threshold, 10, "check message configuration") + self.assertEqual(meta.distribution.pruning.prune_threshold, 20, "check message configuration") + + # SELF creates 20 messages + messages = [community.create_full_sync_global_time_pruning_text("Hello World #%d" % i, forward=False) for i in xrange(0, 20)] + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in messages[0:10]), "all messages should be inactive") + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages[10:20]), "all messages should be active") + + # NODE requests missing messages + sync = (1, 0, 1, 0, []) + global_time = 1 # ensure we do not increase the global time, causing further pruning + node.drop_packets() + node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", sync, 42, global_time)) + yield 0.1 + + # SELF should return the 10 active messages and nothing more + responses = [response for _, response in node.receive_messages(message_names=[u"full-sync-global-time-pruning-text"])] + self.assertEqual(node.receive_messages(), []) + self.assertEqual(len(responses), 10) + self.assertTrue(all(message.packet == response.packet for message, response in zip(messages[10:20], responses))) + + # SELF creates 5 normal messages + _ = [community.create_full_sync_text("Hello World #%d" % i, forward=False) for i in xrange(20, 25)] + self.assertTrue(all(message.distribution.pruning.is_pruned() for message in messages[0:5]), "all messages should be inactive") + self.assertTrue(all(message.distribution.pruning.is_inactive() for message in messages[5:15]), "all messages should be inactive") + self.assertTrue(all(message.distribution.pruning.is_active() for message in messages[15:20]), "all messages should be active") + + # NODE requests missing messages + sync = (1, 0, 1, 0, []) + global_time = 1 # ensure we do not increase the global time, causing further pruning + node.drop_packets() + node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", sync, 42, global_time)) + yield 0.1 + + # SELF should return the 5 active messages and nothing more + responses = [response for _, response in node.receive_messages(message_names=[u"full-sync-global-time-pruning-text"])] + self.assertEqual(node.receive_messages(), []) + self.assertEqual(len(responses), 5) + self.assertTrue(all(message.packet == response.packet for message, response in zip(messages[15:20], responses))) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_sequence.py tribler-6.2.0/Tribler/dispersy/tests/test_sequence.py --- tribler-6.2.0/Tribler/dispersy/tests/test_sequence.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_sequence.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,382 @@ +from collections import defaultdict + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + 
+class TestSequence(DispersyTestFunc): + + @call_on_dispersy_thread + def incoming_simple_conflict_different_global_time(self): + """ + A broken NODE creates conflicting messages with the same sequence number that SELF should + properly filter. + + We use the following messages: + - M@5#1 :: global time 5, sequence number 1 + - M@6#1 :: global time 6, sequence number 1 + - etc... + + TODO Same payload? Different signatures? + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + meta = community.get_meta_message(u"sequence-text") + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # MSGS[GLOBAL-TIME][SEQUENCE-NUMBER] + msgs = defaultdict(dict) + for i in xrange(1, 10): + for j in xrange(1, 10): + msgs[i][j] = node.create_sequence_text("M@%d#%d" % (i, j), i, j) + + community.delete_messages(meta.name) + # SELF must accept M@6#1 + node.give_message(msgs[6][1]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[6][1].packet]) + + # SELF must reject M@6#1 (already have this message) + node.give_message(msgs[6][1]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[6][1].packet]) + + # SELF must prefer M@5#1 (duplicate sequence number, prefer lower global time) + node.give_message(msgs[5][1]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet]) + + # SELF must reject M@6#1 (duplicate sequence number, prefer lower global time) + node.give_message(msgs[6][1]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet]) + + # SELF must reject M@4#2 (global time is lower than previous global time in sequence) + node.give_message(msgs[4][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet]) + + # SELF must reject M@5#2 (global time is lower than previous global time in sequence) + node.give_message(msgs[5][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet]) + + # SELF must accept M@7#2 + node.give_message(msgs[7][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[7][2].packet]) + + # SELF must reject M@7#2 (already have this message) + node.give_message(msgs[7][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[7][2].packet]) + + # SELF must prefer M@6#2 (duplicate sequence number, prefer lower global time) + node.give_message(msgs[6][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet]) + + # SELF must reject M@7#2 (duplicate sequence number, prefer lower global time) + node.give_message(msgs[7][2]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet]) + + # SELF must reject M@4#3 (global time is lower than previous global time in sequence) + node.give_message(msgs[4][3]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet]) + + # SELF must reject M@6#3 (global time is lower than previous global time in sequence) + node.give_message(msgs[6][3]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet]) + + # SELF must accept M@8#3 + node.give_message(msgs[8][3]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet, msgs[8][3].packet]) + + # SELF must accept M@9#4 + node.give_message(msgs[9][4]) + self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet, msgs[8][3].packet, msgs[9][4].packet]) + + # SELF must accept M@7#3 + # It would 
be possible to keep M@9#4, but the way that the code is structured makes this
+ # difficult (i.e. M@7#3 has not yet passed all the numerous checks at the point where we
+ # have to delete). In the future we can optimize by pushing the newer messages (such as
+ # M@7#3) into the waiting or incoming packet queue; this will allow them to be re-inserted
+ # after M@6#2 has been fully accepted.
+ node.give_message(msgs[7][3])
+ self.assertEqual(community.fetch_packets(meta.name), [msgs[5][1].packet, msgs[6][2].packet, msgs[7][3].packet])
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
+
+ def test_requests_1_1(self):
+ self.requests(1, [1], (1, 1))
+
+ def test_requests_1_2(self):
+ self.requests(1, [10], (10, 10))
+
+ def test_requests_1_3(self):
+ self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 10))
+
+ def test_requests_1_4(self):
+ self.requests(1, [3, 4, 5, 6, 7, 8, 9, 10], (3, 10))
+
+ def test_requests_1_5(self):
+ self.requests(1, [1, 2, 3, 4, 5, 6, 7], (1, 7))
+
+ def test_requests_1_6(self):
+ self.requests(1, [3, 4, 5, 6, 7], (3, 7))
+
+ def test_requests_2_1(self):
+ self.requests(2, [1], (1, 1))
+
+ def test_requests_2_2(self):
+ self.requests(2, [10], (10, 10))
+
+ def test_requests_2_3(self):
+ self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 10))
+
+ def test_requests_2_4(self):
+ self.requests(2, [3, 4, 5, 6, 7, 8, 9, 10], (3, 10))
+
+ def test_requests_2_5(self):
+ self.requests(2, [1, 2, 3, 4, 5, 6, 7], (1, 7))
+
+ def test_requests_2_6(self):
+ self.requests(2, [3, 4, 5, 6, 7], (3, 7))
+
+ def test_requests_3_1(self):
+ self.requests(3, [1], (1, 1))
+
+ def test_requests_3_2(self):
+ self.requests(3, [10], (10, 10))
+
+ def test_requests_3_3(self):
+ self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 10))
+
+ def test_requests_3_4(self):
+ self.requests(3, [3, 4, 5, 6, 7, 8, 9, 10], (3, 10))
+
+ def test_requests_3_5(self):
+ self.requests(3, [1, 2, 3, 4, 5, 6, 7], (1, 7))
+
+ def test_requests_3_6(self):
+ self.requests(3, [3, 4, 5, 6, 7], (3, 7))
+
+ # multi-range requests
+ def test_requests_1_7(self):
+ self.requests(1, [1], (1, 1), (1, 1), (1, 1))
+
+ def test_requests_1_8(self):
+ self.requests(1, [1, 2, 3, 4, 5], (1, 4), (2, 5))
+
+ def test_requests_1_9(self):
+ self.requests(1, [1, 2, 3, 4, 5], (1, 2), (2, 3), (3, 4), (4, 5))
+
+ def test_requests_1_10(self):
+ self.requests(1, [1, 2, 3, 4, 5], (1, 1), (5, 5))
+
+ def test_requests_1_11(self):
+ self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (4, 5), (7, 8))
+
+ def test_requests_1_12(self):
+ self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8, 9], (1, 2), (4, 5), (7, 8), (1, 5), (7, 9))
+
+ def test_requests_2_7(self):
+ self.requests(2, [1], (1, 1), (1, 1), (1, 1))
+
+ def test_requests_2_8(self):
+ self.requests(2, [1, 2, 3, 4, 5], (1, 4), (2, 5))
+
+ def test_requests_2_9(self):
+ self.requests(2, [1, 2, 3, 4, 5], (1, 2), (2, 3), (3, 4), (4, 5))
+
+ def test_requests_2_10(self):
+ self.requests(2, [1, 2, 3, 4, 5], (1, 1), (5, 5))
+
+ def test_requests_2_11(self):
+ self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (4, 5), (7, 8))
+
+ def test_requests_2_12(self):
+ self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8, 9], (1, 2), (4, 5), (7, 8), (1, 5), (7, 9))
+
+ def test_requests_3_7(self):
+ self.requests(3, [1], (1, 1), (1, 1), (1, 1))
+
+ def test_requests_3_8(self):
+ self.requests(3, [1, 2, 3, 4, 5], (1, 4), (2, 5))
+
+ def test_requests_3_9(self):
+ self.requests(3, [1, 2, 3, 4, 5], (1, 2), (2, 3), (3, 4), (4, 5))
+
+
def test_requests_3_10(self): + self.requests(3, [1, 2, 3, 4, 5], (1, 1), (5, 5)) + + def test_requests_3_11(self): + self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (4, 5), (7, 8)) + + def test_requests_3_12(self): + self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8, 9], (1, 2), (4, 5), (7, 8), (1, 5), (7, 9)) + + # multi-range requests, in different orders + def test_requests_1_13(self): + self.requests(1, [1], (1, 1), (1, 1), (1, 1)) + + def test_requests_1_14(self): + self.requests(1, [1, 2, 3, 4, 5], (2, 5), (1, 4)) + + def test_requests_1_15(self): + self.requests(1, [1, 2, 3, 4, 5], (4, 5), (3, 4), (1, 2), (2, 3)) + + def test_requests_1_16(self): + self.requests(1, [1, 2, 3, 4, 5], (5, 5), (1, 1)) + + def test_requests_1_17(self): + self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (7, 8), (4, 5)) + + def test_requests_1_18(self): + self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8, 9], (7, 9), (1, 5), (7, 8), (4, 5), (1, 2)) + + def test_requests_2_13(self): + self.requests(2, [1], (1, 1), (1, 1), (1, 1)) + + def test_requests_2_14(self): + self.requests(2, [1, 2, 3, 4, 5], (2, 5), (1, 4)) + + def test_requests_2_15(self): + self.requests(2, [1, 2, 3, 4, 5], (4, 5), (3, 4), (1, 2), (2, 3)) + + def test_requests_2_16(self): + self.requests(2, [1, 2, 3, 4, 5], (5, 5), (1, 1)) + + def test_requests_2_17(self): + self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (7, 8), (4, 5)) + + def test_requests_2_18(self): + self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8, 9], (7, 9), (1, 5), (7, 8), (4, 5), (1, 2)) + + def test_requests_3_13(self): + self.requests(3, [1], (1, 1), (1, 1), (1, 1)) + + def test_requests_3_14(self): + self.requests(3, [1, 2, 3, 4, 5], (2, 5), (1, 4)) + + def test_requests_3_15(self): + self.requests(3, [1, 2, 3, 4, 5], (4, 5), (3, 4), (1, 2), (2, 3)) + + def test_requests_3_16(self): + self.requests(3, [1, 2, 3, 4, 5], (5, 5), (1, 1)) + + def test_requests_3_17(self): + self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8], (1, 2), (7, 8), (4, 5)) + + def test_requests_3_18(self): + self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8, 9], (7, 9), (1, 5), (7, 8), (4, 5), (1, 2)) + + # single range requests, invalid requests + def test_requests_1_19(self): + self.requests(1, [10], (10, 11)) + + def test_requests_1_20(self): + self.requests(1, [], (11, 11)) + + def test_requests_1_21(self): + self.requests(1, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 11112)) + + def test_requests_1_22(self): + self.requests(1, [], (1111, 11112)) + + def test_requests_2_19(self): + self.requests(2, [10], (10, 11)) + + def test_requests_2_20(self): + self.requests(2, [], (11, 11)) + + def test_requests_2_21(self): + self.requests(2, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 11112)) + + def test_requests_2_22(self): + self.requests(2, [], (1111, 11112)) + + def test_requests_3_19(self): + self.requests(3, [10], (10, 11)) + + def test_requests_3_20(self): + self.requests(3, [], (11, 11)) + + def test_requests_3_21(self): + self.requests(3, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], (1, 11112)) + + def test_requests_3_22(self): + self.requests(3, [], (1111, 11112)) + + # multi-range requests, invalid requests + def test_requests_1_23(self): + self.requests(1, [10], (10, 11), (10, 100), (50, 75)) + + def test_requests_1_24(self): + self.requests(1, [], (11, 11), (11, 50), (100, 200)) + + def test_requests_2_23(self): + self.requests(2, [10], (10, 11), (10, 100), (50, 75)) + + def test_requests_2_24(self): + self.requests(2, [], (11, 11), (11, 50), (100, 200)) + + def test_requests_3_23(self): + self.requests(3, [10], (10, 11), (10, 100), (50, 75)) + + 
def test_requests_3_24(self):
+ self.requests(3, [], (11, 11), (11, 50), (100, 200))
+
+ def setUp(self):
+ """
+ SELF generates messages with sequence [1:MESSAGE_COUNT].
+ """
+ def on_dispersy_thread():
+ self._community = DebugCommunity.create_community(self._dispersy, self._my_member)
+ self._nodes = [DebugNode(self._community) for _ in xrange(3)]
+ for node in self._nodes:
+ node.init_socket()
+ node.init_my_member()
+
+ # create messages
+ self._messages = []
+ for i in xrange(1, 11):
+ message = self._community.create_sequence_text("Sequence message #%d" % i)
+ assert message.distribution.sequence_number == i
+ self._messages.append(message)
+
+ super(TestSequence, self).setUp()
+ self._dispersy.callback.call(on_dispersy_thread)
+
+ @call_on_dispersy_thread
+ def requests(self, node_count, responses, *pairs):
+ """
+ NODE1 and NODE2 request (non)overlapping sequences; SELF should send back the requested
+ messages only once.
+ """
+ community = self._community
+ nodes = self._nodes[:node_count]
+ meta = self._messages[0].meta
+
+ # flush incoming socket buffer
+ for node in nodes:
+ node.drop_packets()
+
+ # request missing
+ sequence_numbers = set()
+ for low, high in pairs:
+ sequence_numbers.update(xrange(low, high + 1))
+ for node in nodes:
+ node.give_message(node.create_dispersy_missing_sequence(community.my_member, meta, low, high, community.global_time, community.my_candidate), cache=True)
+ # one additional yield. Dispersy should batch these requests together
+ yield 0.001
+
+ for node in nodes:
+ self.assertEqual(node.receive_messages(message_names=[meta.name]), [], "should not yet have any responses")
+
+ yield 0.11
+
+ # receive response
+ for node in nodes:
+ for i in responses:
+ _, response = node.receive_message(message_names=[meta.name])
+ self.assertEqual(response.distribution.sequence_number, i)
+
+ # there should be no further responses
+ for node in nodes:
+ self.assertEqual(node.receive_messages(message_names=[meta.name]), [], "should not have any further responses")
diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_signature.py tribler-6.2.0/Tribler/dispersy/tests/test_signature.py
--- tribler-6.2.0/Tribler/dispersy/tests/test_signature.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/tests/test_signature.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,171 @@
+import logging
+logger = logging.getLogger(__name__)
+
+from .debugcommunity.community import DebugCommunity
+from .debugcommunity.node import DebugNode
+
+from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread
+
+
+class TestSignature(DispersyTestFunc):
+
+ @call_on_dispersy_thread
+ def test_no_response_from_node(self):
+ """
+ SELF will request a signature from NODE. NODE will ignore this request and SELF should get
+ a timeout on the signature request after a few seconds.
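+
+ The on_response callback is expected to fire exactly once, with response=None,
+ once the 3.0 second timeout expires.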
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + container = {"timeout": 0} + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + yield 0.555 + + logger.debug("SELF requests NODE to double sign") + + def on_response(request, response, modified): + self.assertIsNone(response) + container["timeout"] += 1 + return False, False, False + + community.create_double_signed_text("Accept=", node.candidate, self._dispersy.get_member(node.my_member.public_key), on_response, (), 3.0) + yield 0.11 + + logger.debug("NODE receives dispersy-signature-request message") + _, message = node.receive_message(message_names=[u"dispersy-signature-request"]) + # do not send a response + + # should timeout + wait = 4 + for counter in range(wait): + logger.debug("waiting... %d", wait - counter) + yield 1.0 + yield 0.11 + + logger.debug("SELF must have timed out by now") + self.assertEqual(container["timeout"], 1) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_response_from_node(self): + """ + SELF will request a signature from NODE. SELF will receive the signature and produce a + double signed message. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + container = {"response": 0} + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("SELF requests NODE to double sign") + + def on_response(request, response, modified): + self.assertEqual(container["response"], 0) + self.assertTrue(response.authentication.is_signed) + self.assertFalse(modified) + container["response"] += 1 + return False + community.create_double_signed_text("Accept=", node.candidate, self._dispersy.get_member(node.my_member.public_key), on_response, (), 3.0) + yield 0.11 + + logger.debug("NODE receives dispersy-signature-request message from SELF") + candidate, message = node.receive_message(message_names=[u"dispersy-signature-request"]) + submsg = message.payload.message + second_signature_offset = len(submsg.packet) - community.my_member.signature_length + first_signature_offset = second_signature_offset - node.my_member.signature_length + self.assertEqual(submsg.packet[second_signature_offset:], "\x00" * node.my_member.signature_length, "The first signature MUST BE \x00's. The creator must hold control over the community+member+global_time triplet") + signature = node.my_member.sign(submsg.packet, length=first_signature_offset) + submsg.authentication.set_signature(node.my_member, signature) + + logger.debug("NODE sends dispersy-signature-response message to SELF") + identifier = message.payload.identifier + global_time = community.global_time + node.give_message(node.create_dispersy_signature_response(identifier, submsg, global_time, candidate)) + yield 1.11 + self.assertEqual(container["response"], 1) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_response_from_self(self): + """ + NODE will request a signature from SELF. SELF will receive the request and respond with a signature response. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("NODE requests SELF to double sign") + identifier = 12345 + global_time = 10 + submsg = node.create_double_signed_text(community.my_member, "Allow=True", global_time, sign=False) + node.give_message(node.create_dispersy_signature_request(identifier, submsg, global_time)) + yield 0.11 + + logger.debug("Node waits for SELF to provide a signature response") + _, message = node.receive_message(message_names=[u"dispersy-signature-response"]) + self.assertEqual(message.payload.identifier, identifier) + + # the response message should: + # 1. everything up to the first signature must be the same + second_signature_offset = len(submsg.packet) - community.my_member.signature_length + first_signature_offset = second_signature_offset - node.my_member.signature_length + self.assertEqual(message.payload.message.packet[:first_signature_offset], submsg.packet[:first_signature_offset]) + + # 2. the first signature must be zero's (this is NODE's signature and hasn't been set yet) + self.assertEqual(message.payload.message.packet[first_signature_offset:second_signature_offset], "\x00" * node.my_member.signature_length) + + # 3. the second signature must be set + self.assertNotEqual(message.payload.message.packet[second_signature_offset:], "\x00" * community.my_member.signature_length) + + # 4. the second signature must be valid + self.assertTrue(community.my_member.verify(message.payload.message.packet[:first_signature_offset], + message.payload.message.packet[second_signature_offset:])) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_no_response_from_self(self): + """ + NODE will request a signature from SELF. SELF will ignore this request and NODE should not get any signature + response. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + logger.debug("NODE requests SELF to double sign") + identifier = 12345 + global_time = 10 + submsg = node.create_double_signed_text(community.my_member, "Allow=False", global_time, sign=False) + node.give_message(node.create_dispersy_signature_request(identifier, submsg, global_time)) + yield 0.11 + + logger.debug("Node waits for SELF to provide a signature response") + for _ in xrange(4): + yield 1.0 + messages = node.receive_messages(message_names=[u"dispersy-signature-response"]) + self.assertEqual(messages, []) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_sync.py tribler-6.2.0/Tribler/dispersy/tests/test_sync.py --- tribler-6.2.0/Tribler/dispersy/tests/test_sync.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_sync.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,459 @@ +import logging +logger = logging.getLogger(__name__) + +import socket + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestSync(DispersyTestFunc): + + @call_on_dispersy_thread + def test_modulo(self): + """ + SELF creates several messages, NODE asks for specific modulo to sync and only those modulo + may be sent back. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"full-sync-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # SELF creates messages + messages = [community.create_full_sync_text("foo-bar", forward=False) for _ in xrange(30)] + + for modulo in xrange(0, 10): + for offset in xrange(0, modulo): + # global times that we should receive + global_times = [message.distribution.global_time for message in messages if (message.distribution.global_time + offset) % modulo == 0] + + sync = (1, 0, modulo, offset, []) + node.drop_packets() + node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", sync, 42, 110)) + + received = [] + while True: + try: + _, message = node.receive_message(message_names=[u"full-sync-text"]) + received.append(message.distribution.global_time) + except socket.error: + break + + self.assertEqual(sorted(global_times), sorted(received)) + logger.debug("%%%d+%d: %s -> OK", modulo, offset, sorted(global_times)) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_in_order(self): + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"ASC-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + times = list(self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))) + self.assertEqual(times, []) + + # create some data + global_times = range(10, 15) + for global_time in global_times: + node.give_message(node.create_in_order_text("Message #%d" % global_time, global_time)) + + # send an empty sync message to obtain all messages ASC + node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", (min(global_times), 0, 1, 0, []), 42, max(global_times))) + yield 0.1 + + for global_time in global_times: + _, message = node.receive_message(message_names=[u"ASC-text"]) + self.assertEqual(message.distribution.global_time, global_time) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_out_order(self): + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"DESC-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + times = list(self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))) + self.assertEqual(times, []) + + # create some data + global_times = range(10, 15) + for global_time in global_times: + node.give_message(node.create_out_order_text("Message #%d" % global_time, global_time)) + + # send an empty sync message to obtain all messages DESC + node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", (min(global_times), 0, 1, 0, []), 42, max(global_times))) + yield 0.1 + + for global_time in reversed(global_times): + _, message = node.receive_message(message_names=[u"DESC-text"]) + self.assertEqual(message.distribution.global_time, global_time) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_mixed_order(self): + community = DebugCommunity.create_community(self._dispersy, self._my_member) + in_order_message = community.get_meta_message(u"ASC-text") + out_order_message = community.get_meta_message(u"DESC-text") + # random_order_message = community.get_meta_message(u"random-order-text") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + count, = self._dispersy.database.execute(u"SELECT COUNT(*) FROM sync WHERE sync.community = ? 
AND sync.meta_message IN (?, ?)", (community.database_id, in_order_message.database_id, out_order_message.database_id)).next()
+ self.assertEqual(count, 0)
+
+ # create some data
+ global_times = range(10, 25, 2)
+ in_order_times = []
+ out_order_times = []
+ # random_order_times = []
+ for global_time in global_times:
+ in_order_times.append(global_time)
+ node.give_message(node.create_in_order_text("Message #%d" % global_time, global_time))
+ global_time += 1
+ out_order_times.append(global_time)
+ node.give_message(node.create_out_order_text("Message #%d" % global_time, global_time))
+ # global_time += 1
+ # random_order_times.append(global_time)
+ # node.give_message(node.create_random_order_text_message("Message #%d" % global_time, global_time))
+ out_order_times.sort(reverse=True)
+ logger.debug("Total ASC:%d; DESC:%d", len(in_order_times), len(out_order_times))
+
+ def get_messages_back():
+ received_times = []
+ for _ in range(len(global_times) * 2):
+ _, message = node.receive_message(message_names=[u"ASC-text", u"DESC-text"])
+ #, u"random-order-text"])
+ received_times.append(message.distribution.global_time)
+
+ return received_times
+
+ # lists = []
+ for _ in range(5):
+ # send an empty sync message to obtain all messages in random-order
+ node.give_message(node.create_dispersy_introduction_request(community.my_candidate, node.lan_address, node.wan_address, False, u"unknown", (min(global_times), 0, 1, 0, []), 42, max(global_times)))
+ yield 0.1
+
+ received_times = get_messages_back()
+
+ # the first items must be DESC
+ received_out_times = received_times[0:len(out_order_times)]
+ self.assertEqual(out_order_times, received_out_times)
+
+ # followed by ASC
+ received_in_times = received_times[len(out_order_times):len(in_order_times) + len(out_order_times)]
+ self.assertEqual(in_order_times, received_in_times)
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
+
+ @call_on_dispersy_thread
+ def test_last_1(self):
+ community = DebugCommunity.create_community(self._dispersy, self._my_member)
+ message = community.get_meta_message(u"last-1-test")
+
+ # create node and ensure that SELF knows the node address
+ node = DebugNode(community)
+ node.init_socket()
+ node.init_my_member()
+
+ # should be no messages from NODE yet
+ times = list(self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id)))
+ self.assertEqual(times, [])
+
+ # send a message
+ global_time = 10
+ node.give_message(node.create_last_1_test("should be accepted (1)", global_time))
+ times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))]
+ self.assertEqual(times, [global_time])
+
+ # send a message
+ global_time = 11
+ node.give_message(node.create_last_1_test("should be accepted (2)", global_time))
+ times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ?
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # send a message (older: should be dropped) + node.give_message(node.create_last_1_test("should be dropped (1)", global_time - 1)) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # as proof for the drop, the newest message should be sent back + yield 0.1 + _, message = node.receive_message(message_names=[u"last-1-test"]) + self.assertEqual(message.distribution.global_time, global_time) + + # send a message (duplicate: should be dropped) + node.give_message(node.create_last_1_test("should be dropped (2)", global_time)) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # send a message + global_time = 12 + node.give_message(node.create_last_1_test("should be accepted (3)", global_time)) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(times, [global_time]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_last_9(self): + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"last-9-test") + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # should be no messages from NODE yet + times = list(self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))) + self.assertEqual(times, []) + + all_messages = [21, 20, 28, 27, 22, 23, 24, 26, 25] + messages_so_far = [] + for global_time in all_messages: + # send a message + message = node.create_last_9_test(str(global_time), global_time) + messages_so_far.append(global_time) + node.give_message(message) + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, global_time, message.database_id)).next() + except StopIteration: + self.fail() + self.assertEqual(str(packet), message.packet) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))] + self.assertEqual(sorted(times), sorted(messages_so_far)) + self.assertEqual(sorted(all_messages), sorted(messages_so_far)) + + logger.debug("Older: should be dropped") + for global_time in [11, 12, 13, 19, 18, 17]: + # send a message (older: should be dropped) + node.give_message(node.create_last_9_test(str(global_time), global_time)) + times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? 
AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))]
+ self.assertEqual(sorted(times), sorted(messages_so_far))
+
+ logger.debug("Duplicate: should be dropped")
+ for global_time in all_messages:
+ # send a message (duplicate: should be dropped)
+ message = node.create_last_9_test("wrong content!", global_time)
+ node.give_message(message)
+ try:
+ packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, global_time, message.database_id)).next()
+ except StopIteration:
+ self.fail()
+ self.assertNotEqual(str(packet), message.packet)
+ times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))]
+ self.assertEqual(sorted(times), sorted(messages_so_far))
+
+ logger.debug("Should be added and old one removed")
+ match_times = sorted(times[:])
+ for global_time in [30, 35, 37, 31, 32, 34, 33, 36, 38, 45, 44, 43, 42, 41, 40, 39]:
+ # send a message (should be added and old one removed)
+ message = node.create_last_9_test(str(global_time), global_time)
+ node.give_message(message)
+ match_times.pop(0)
+ match_times.append(global_time)
+ match_times.sort()
+ try:
+ packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, global_time, message.database_id)).next()
+ except StopIteration:
+ self.fail()
+ self.assertEqual(str(packet), message.packet)
+ times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, node.my_member.database_id, message.database_id))]
+ self.assertEqual(sorted(times), match_times)
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
+
+ @call_on_dispersy_thread
+ def test_last_1_doublemember(self):
+ """
+ Normally the LastSyncDistribution policy stores the last N messages for each member that
+ created the message. However, when the DoubleMemberAuthentication policy is used, there are
+ two members.
+
+ This can be handled in two ways:
+
+ 1. The first member who signed the message is still seen as the creator and hence the last
+ N messages of this member are stored.
+
+ 2. Each member combination is used and the last N messages for each member combination are
+ stored. For example: when member A and B sign a message it will not count toward the
+ last-N messages signed by A and C (which is another member combination).
+
+ Currently we only implement option #2; there is no parameter to switch between these
+ options.
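+
+ For example, with last-1 and option #2 the assertions below expect member A to
+ hold one stored message for the (A, B) combination and, at the same time, one
+ for the (A, C) combination: two database entries in total.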
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"last-1-doublemember-text") + + # create node and ensure that SELF knows the node address + nodeA = DebugNode(community) + nodeA.init_socket() + nodeA.init_my_member() + + # create node and ensure that SELF knows the node address + nodeB = DebugNode(community) + nodeB.init_socket() + nodeB.init_my_member() + + # create node and ensure that SELF knows the node address + nodeC = DebugNode(community) + nodeC.init_socket() + nodeC.init_my_member() + + # dump some junk data, TODO: should not use this btw in actual test... + # self._dispersy.database.execute(u"INSERT INTO sync (community, meta_message, member, global_time) VALUES (?, ?, 42, 9)", (community.database_id, message.database_id)) + # sync_id = self._dispersy.database.last_insert_rowid + # self._dispersy.database.execute(u"INSERT INTO reference_member_sync (member, sync) VALUES (42, ?)", (sync_id,)) + # self._dispersy.database.execute(u"INSERT INTO reference_member_sync (member, sync) VALUES (43, ?)", (sync_id,)) + # + # self._dispersy.database.execute(u"INSERT INTO sync (community, meta_message, member, global_time) VALUES (?, ?, 4, 9)", (community.database_id, message.database_id)) + # sync_id = self._dispersy.database.last_insert_rowid + # self._dispersy.database.execute(u"INSERT INTO reference_member_sync (member, sync) VALUES (4, ?)", (sync_id,)) + # self._dispersy.database.execute(u"INSERT INTO reference_member_sync (member, sync) VALUES (43, ?)", (sync_id,)) + + # send a message + global_time = 10 + other_global_time = global_time + 1 + messages = [] + messages.append(nodeA.create_last_1_doublemember_text(nodeB.my_member, "should be accepted (1)", global_time, sign=True)) + messages.append(nodeA.create_last_1_doublemember_text(nodeC.my_member, "should be accepted (1)", other_global_time, sign=True)) + nodeA.give_messages(messages) + entries = list(self._dispersy.database.execute(u"SELECT sync.global_time, sync.member, double_signed_sync.member1, double_signed_sync.member2 FROM sync JOIN double_signed_sync ON double_signed_sync.sync = sync.id WHERE sync.community = ? AND sync.member = ? AND sync.meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))) + self.assertEqual(len(entries), 2) + self.assertIn((global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeB.my_member.database_id), max(nodeA.my_member.database_id, nodeB.my_member.database_id)), entries) + self.assertIn((other_global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeC.my_member.database_id), max(nodeA.my_member.database_id, nodeC.my_member.database_id)), entries) + + # send a message + global_time = 20 + other_global_time = global_time + 1 + messages = [] + messages.append(nodeA.create_last_1_doublemember_text(nodeB.my_member, "should be accepted (2) @%d" % global_time, global_time, sign=True)) + messages.append(nodeA.create_last_1_doublemember_text(nodeC.my_member, "should be accepted (2) @%d" % other_global_time, other_global_time, sign=True)) + nodeA.give_messages(messages) + entries = list(self._dispersy.database.execute(u"SELECT sync.global_time, sync.member, double_signed_sync.member1, double_signed_sync.member2 FROM sync JOIN double_signed_sync ON double_signed_sync.sync = sync.id WHERE sync.community = ? AND sync.member = ? 
AND sync.meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))) + self.assertEqual(len(entries), 2) + self.assertIn((global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeB.my_member.database_id), max(nodeA.my_member.database_id, nodeB.my_member.database_id)), entries) + self.assertIn((other_global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeC.my_member.database_id), max(nodeA.my_member.database_id, nodeC.my_member.database_id)), entries) + + # send a message (older: should be dropped) + old_global_time = 8 + messages = [] + messages.append(nodeA.create_last_1_doublemember_text(nodeB.my_member, "should be dropped (1)", old_global_time, sign=True)) + messages.append(nodeA.create_last_1_doublemember_text(nodeC.my_member, "should be dropped (1)", old_global_time, sign=True)) + nodeA.give_messages(messages) + entries = list(self._dispersy.database.execute(u"SELECT sync.global_time, sync.member, double_signed_sync.member1, double_signed_sync.member2 FROM sync JOIN double_signed_sync ON double_signed_sync.sync = sync.id WHERE sync.community = ? AND sync.member = ? AND sync.meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))) + self.assertEqual(len(entries), 2) + self.assertIn((global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeB.my_member.database_id), max(nodeA.my_member.database_id, nodeB.my_member.database_id)), entries) + self.assertIn((other_global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeC.my_member.database_id), max(nodeA.my_member.database_id, nodeC.my_member.database_id)), entries) + + yield 0.1 + nodeA.drop_packets() + + # send a message (older: should be dropped) + old_global_time = 8 + messages = [] + messages.append(nodeB.create_last_1_doublemember_text(nodeA.my_member, "should be dropped (1)", old_global_time, sign=True)) + messages.append(nodeC.create_last_1_doublemember_text(nodeA.my_member, "should be dropped (1)", old_global_time, sign=True)) + nodeA.give_messages(messages) + entries = list(self._dispersy.database.execute(u"SELECT sync.global_time, sync.member, double_signed_sync.member1, double_signed_sync.member2 FROM sync JOIN double_signed_sync ON double_signed_sync.sync = sync.id WHERE sync.community = ? AND sync.member = ? 
AND sync.meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))) + self.assertEqual(len(entries), 2) + self.assertIn((global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeB.my_member.database_id), max(nodeA.my_member.database_id, nodeB.my_member.database_id)), entries) + self.assertIn((other_global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeC.my_member.database_id), max(nodeA.my_member.database_id, nodeC.my_member.database_id)), entries) + + # as proof for the drop, the newest message should be sent back + yield 0.1 + times = [] + _, message = nodeA.receive_message(message_names=[u"last-1-doublemember-text"]) + times.append(message.distribution.global_time) + _, message = nodeA.receive_message(message_names=[u"last-1-doublemember-text"]) + times.append(message.distribution.global_time) + self.assertEqual(sorted(times), [global_time, other_global_time]) + + # send a message (older + different member combination: should be dropped) + old_global_time = 9 + messages = [] + messages.append(nodeB.create_last_1_doublemember_text(nodeA.my_member, "should be dropped (2)", old_global_time, sign=True)) + messages.append(nodeC.create_last_1_doublemember_text(nodeA.my_member, "should be dropped (2)", old_global_time, sign=True)) + nodeA.give_messages(messages) + entries = list(self._dispersy.database.execute(u"SELECT sync.global_time, sync.member, double_signed_sync.member1, double_signed_sync.member2 FROM sync JOIN double_signed_sync ON double_signed_sync.sync = sync.id WHERE sync.community = ? AND sync.member = ? AND sync.meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))) + self.assertEqual(len(entries), 2) + self.assertIn((global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeB.my_member.database_id), max(nodeA.my_member.database_id, nodeB.my_member.database_id)), entries) + self.assertIn((other_global_time, nodeA.my_member.database_id, min(nodeA.my_member.database_id, nodeC.my_member.database_id), max(nodeA.my_member.database_id, nodeC.my_member.database_id)), entries) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_last_1_doublemember_unique_member_global_time(self): + """ + Even with double member messages, the first member is the creator and may only have one + message for each global time. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + message = community.get_meta_message(u"last-1-doublemember-text") + + # create node and ensure that SELF knows the node address + nodeA = DebugNode(community) + nodeA.init_socket() + nodeA.init_my_member() + + # create node and ensure that SELF knows the node address + nodeB = DebugNode(community) + nodeB.init_socket() + nodeB.init_my_member() + + # create node and ensure that SELF knows the node address + nodeC = DebugNode(community) + nodeC.init_socket() + nodeC.init_my_member() + + # send two messages + global_time = 10 + messages = [] + messages.append(nodeA.create_last_1_doublemember_text(nodeB.my_member, "should be accepted (1.1)", global_time, sign=True)) + messages.append(nodeA.create_last_1_doublemember_text(nodeC.my_member, "should be accepted (1.2)", global_time, sign=True)) + + # we NEED the messages to be handled in one batch. 
using the socket may change this
+ nodeA.give_messages(messages)
+
+ times = [x for x, in self._dispersy.database.execute(u"SELECT global_time FROM sync WHERE community = ? AND member = ? AND meta_message = ?", (community.database_id, nodeA.my_member.database_id, message.database_id))]
+ self.assertEqual(times, [global_time])
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_timeline.py tribler-6.2.0/Tribler/dispersy/tests/test_timeline.py
--- tribler-6.2.0/Tribler/dispersy/tests/test_timeline.py 1970-01-01 00:00:00.000000000 +0000
+++ tribler-6.2.0/Tribler/dispersy/tests/test_timeline.py 2013-07-31 12:17:59.000000000 +0000
@@ -0,0 +1,274 @@
+import logging
+logger = logging.getLogger(__name__)
+
+from ..message import DelayMessageByProof
+from .debugcommunity.community import DebugCommunity
+from .debugcommunity.node import DebugNode
+from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread
+
+
+class TestTimeline(DispersyTestFunc):
+
+ @call_on_dispersy_thread
+ def test_succeed_check(self):
+ """
+ Create a community and check that a hard-kill message is accepted.
+
+ Whenever a community is created the owner member is authorized to use the
+ dispersy-destroy-community message. Hence, this message should be accepted by the
+ timeline.check().
+ """
+ # create a community.
+ community = DebugCommunity.create_community(self._dispersy, self._my_member)
+ # the master member must have given my_member all permissions for dispersy-destroy-community
+ yield 0.555
+
+ logger.debug("master_member: %s, %s", community.master_member.database_id, community.master_member.mid.encode("HEX"))
+ logger.debug(" my_member: %s, %s", community.my_member.database_id, community.my_member.mid.encode("HEX"))
+
+ # check if we are still allowed to send the message
+ message = community.create_dispersy_destroy_community(u"hard-kill", store=False, update=False, forward=False)
+ self.assertEqual(message.authentication.member, self._my_member)
+ result = list(message.check_callback([message]))
+ self.assertEqual(result, [message], "check_... methods should return a generator with the accepted messages")
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
+
+ @call_on_dispersy_thread
+ def test_fail_check(self):
+ """
+ Create a community and check that a hard-kill message is NOT accepted.
+
+ Whenever a community is created the owner member is authorized to use the
+ dispersy-destroy-community message. We will first revoke the authorization (to use this
+ message) and ensure that the message is no longer accepted by the timeline.check().
+ """
+ # create a community.
+ community = DebugCommunity.create_community(self._dispersy, self._my_member)
+ # the master member must have given my_member all permissions for dispersy-destroy-community
+ yield 0.555
+
+ logger.debug("master_member: %d, %s", community.master_member.database_id, community.master_member.mid.encode("HEX"))
+ logger.debug(" my_member: %d, %s", community.my_member.database_id, community.my_member.mid.encode("HEX"))
+
+ # remove the right to hard-kill
+ community.create_dispersy_revoke([(community.my_member, community.get_meta_message(u"dispersy-destroy-community"), u"permit")], sign_with_master=True, store=False, forward=False)
+
+ # check if we are still allowed to send the message
+ message = community.create_dispersy_destroy_community(u"hard-kill", store=False, update=False, forward=False)
+ self.assertEqual(message.authentication.member, self._my_member)
+ result = list(message.check_callback([message]))
+ self.assertEqual(len(result), 1, "check_... methods should return a generator with one result")
+ self.assertIsInstance(result[0], DelayMessageByProof, "check_... methods should yield DelayMessageByProof for messages that lack proof")
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill", sign_with_master=True)
+ self._dispersy.get_community(community.cid).unload_community()
+
+ @call_on_dispersy_thread
+ def test_loading_community(self):
+ """
+ When a community is loaded it must load all available dispersy-authorize and dispersy-revoke
+ messages from the database.
+ """
+ class LoadingCommunityTestCommunity(DebugCommunity):
+ pass
+
+ # create a community. the master member must have given my_member all permissions for
+ # dispersy-destroy-community
+ community = LoadingCommunityTestCommunity.create_community(self._dispersy, self._my_member)
+ cid = community.cid
+
+ logger.debug("master_member: %d, %s", community.master_member.database_id, community.master_member.mid.encode("HEX"))
+ logger.debug(" my_member: %d, %s", community.my_member.database_id, community.my_member.mid.encode("HEX"))
+
+ logger.debug("unload community")
+ community.unload_community()
+ community = None
+ yield 0.555
+
+ # load the same community and see if the same permissions are loaded
+ communities = [LoadingCommunityTestCommunity.load_community(self._dispersy, master)
+ for master
+ in LoadingCommunityTestCommunity.get_master_members(self._dispersy)]
+ self.assertEqual(len(communities), 1)
+ self.assertEqual(communities[0].cid, cid)
+ community = communities[0]
+
+ # check if we are still allowed to send the message
+ message = community.create_dispersy_destroy_community(u"hard-kill", store=False, update=False, forward=False)
+ self.assertTrue(community.timeline.check(message))
+
+ # cleanup
+ community.create_dispersy_destroy_community(u"hard-kill")
+ self._dispersy.get_community(community.cid).unload_community()
+
+ @call_on_dispersy_thread
+ def test_delay_by_proof(self):
+ """
+ When SELF receives a message that it has no permission for, it will send a
+ dispersy-missing-proof message to try to obtain the dispersy-authorize.
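+
+ The incoming message is not dropped but held back (see DelayMessageByProof);
+ once the dispersy-authorize proof arrives, the delayed message is processed
+ after all and stored in the sync database.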
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node1 = DebugNode(community) + node1.init_socket() + node1.init_my_member() + yield 0.555 + + # create node and ensure that SELF knows the node address + node2 = DebugNode(community) + node2.init_socket() + node2.init_my_member() + yield 0.555 + + # permit NODE1 + logger.debug("SELF creates dispersy-authorize for NODE1") + community.create_dispersy_authorize([(node1.my_member, community.get_meta_message(u"protected-full-sync-text"), u"permit"), + (node1.my_member, community.get_meta_message(u"protected-full-sync-text"), u"authorize")]) + + # NODE2 created message @20 + logger.debug("NODE2 creates protected-full-sync-text, should be delayed for missing proof") + global_time = 20 + message = node2.create_protected_full_sync_text("Protected message", global_time) + node2.give_message(message) + yield 0.555 + + # may NOT have been stored in the database + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node2.my_member.database_id, global_time)).next() + except StopIteration: + pass + + else: + self.fail("should not have stored, did not have permission") + + # SELF sends dispersy-missing-proof to NODE2 + logger.debug("NODE2 receives dispersy-missing-proof") + _, message = node2.receive_message(message_names=[u"dispersy-missing-proof"]) + self.assertEqual(message.payload.member.public_key, node2.my_member.public_key) + self.assertEqual(message.payload.global_time, global_time) + + logger.debug("=====") + logger.debug("node1: %d", node1.my_member.database_id) + logger.debug("node2: %d", node2.my_member.database_id) + + # NODE1 provides proof + logger.debug("NODE1 creates and provides missing proof") + sequence_number = 1 + proof_global_time = 10 + node2.give_message(node1.create_dispersy_authorize([(node2.my_member, community.get_meta_message(u"protected-full-sync-text"), u"permit")], sequence_number, proof_global_time)) + yield 0.555 + + logger.debug("=====") + + # must have been stored in the database + logger.debug("SELF must have processed both the proof and the protected-full-sync-text message") + try: + packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node2.my_member.database_id, global_time)).next() + except StopIteration: + self.fail("should have been stored") + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_missing_proof(self): + """ + When SELF receives a dispersy-missing-proof message she needs to find and send the proof. 
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node = DebugNode(community) + node.init_socket() + node.init_my_member() + yield 0.555 + + # SELF creates a protected message + message = community.create_protected_full_sync_text("Protected message") + + # flush incoming socket buffer + node.drop_packets() + + # NODE pretends to receive the protected message and requests the proof + node.give_message(node.create_dispersy_missing_proof(message.authentication.member, message.distribution.global_time)) + yield 0.555 + + # SELF sends dispersy-authorize to NODE + _, authorize = node.receive_message(message_names=[u"dispersy-authorize"]) + + permission_triplet = (community.my_member, community.get_meta_message(u"protected-full-sync-text"), u"permit") + self.assertIn(permission_triplet, authorize.payload.permission_triplets) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_missing_authorize_proof(self): + """ + MASTER + \\ authorize(MASTER, OWNER) + \\ + OWNER + \\ authorize(OWNER, NODE1) + \\ + NODE1 + + When SELF receives a dispersy-missing-proof message from NODE2 for authorize(OWNER, NODE1) + the dispersy-authorize message for authorize(MASTER, OWNER) must be returned. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create node and ensure that SELF knows the node address + node1 = DebugNode(community) + node1.init_socket() + node1.init_my_member() + yield 0.555 + + # create node and ensure that SELF knows the node address + node2 = DebugNode(community) + node2.init_socket() + node2.init_my_member() + yield 0.555 + + # permit NODE1 + logger.debug("SELF creates dispersy-authorize for NODE1") + message = community.create_dispersy_authorize([(node1.my_member, community.get_meta_message(u"protected-full-sync-text"), u"permit"), + (node1.my_member, community.get_meta_message(u"protected-full-sync-text"), u"authorize")]) + + # flush incoming socket buffer + node2.drop_packets() + + logger.debug("===") + logger.debug("master: %d", community.master_member.database_id) + logger.debug("member: %d", community.my_member.database_id) + logger.debug("node1: %d", node1.my_member.database_id) + logger.debug("node2: %d", node2.my_member.database_id) + + # NODE2 wants the proof that OWNER is allowed to grant authorization to NODE1 + logger.debug("NODE2 asks for proof that NODE1 is allowed to authorize") + node2.give_message(node2.create_dispersy_missing_proof(message.authentication.member, message.distribution.global_time)) + yield 0.555 + + logger.debug("===") + + # SELF sends dispersy-authorize containing authorize(MASTER, OWNER) to NODE + logger.debug("NODE2 receives the proof from SELF") + _, authorize = node2.receive_message(message_names=[u"dispersy-authorize"]) + + permission_triplet = (message.authentication.member, community.get_meta_message(u"protected-full-sync-text"), u"permit") + logger.debug("%s", (permission_triplet[0].database_id, permission_triplet[1].name, permission_triplet[2])) + logger.debug("%s", [(x.database_id, y.name, z) for x, y, z in authorize.payload.permission_triplets]) + self.assertIn(permission_triplet, authorize.payload.permission_triplets) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() diff -Nru 
tribler-6.2.0/Tribler/dispersy/tests/test_undo.py tribler-6.2.0/Tribler/dispersy/tests/test_undo.py --- tribler-6.2.0/Tribler/dispersy/tests/test_undo.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_undo.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,454 @@ +import logging +logger = logging.getLogger(__name__) + +from ..message import Message +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + + +class TestUndo(DispersyTestFunc): + + @call_on_dispersy_thread + def test_self_undo_own(self): + """ + SELF generates a few messages and then undoes them. + + This is always allowed. In fact, no check is made since only externally received packets + will be checked. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create messages + messages = [community.create_full_sync_text("Should undo #%d" % i, forward=False) for i in xrange(10)] + + # check that they are in the database and are NOT undone + for message in messages: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, community.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # undo all messages + undoes = [community.create_dispersy_undo(message, forward=False) for message in messages] + + # check that they are in the database and ARE undone + for undo, message in zip(undoes, messages): + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, community.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(undo.packet_id,)]) + + # check that all the undo messages are in the database and are NOT undone + for message in undoes: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, community.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill", forward=False) + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_self_undo_other(self): + """ + NODE generates a few messages and then SELF undoes them. + + This is always allowed. In fact, no check is made since only externally received packets + will be checked. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # NODE creates messages + messages = [node.create_full_sync_text("Should undo #%d" % global_time, global_time) for global_time in xrange(10, 20)] + node.give_messages(messages) + + # check that they are in the database and are NOT undone + for message in messages: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # SELF undoes all messages + undoes = [community.create_dispersy_undo(message, forward=False) for message in messages] + + # check that they are in the database and ARE undone + for undo, message in zip(undoes, messages): + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(undo.packet_id,)]) + + # check that all the undo messages are in the database and are NOT undone + for message in undoes: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, community.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill", forward=False) + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_node_undo_own(self): + """ + SELF gives NODE permission to undo, NODE generates a few messages and then undoes them. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # SELF grants undo permission to NODE + community.create_dispersy_authorize([(node.my_member, community.get_meta_message(u"full-sync-text"), u"undo")]) + + # create messages + messages = [node.create_full_sync_text("Should undo @%d" % global_time, global_time) for global_time in xrange(10, 20)] + node.give_messages(messages) + + # check that they are in the database and are NOT undone + for message in messages: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # undo all messages + sequence_number = 1 + undoes = [node.create_dispersy_undo_own(message, message.distribution.global_time + 100, sequence_number + i) for i, message in enumerate(messages)] + node.give_messages(undoes) + + # check that they are in the database and ARE undone + for undo, message in zip(undoes, messages): + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time))) + self.assertEqual(len(undone), 1) + undone_packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE id = ?", (undone[0][0],)).next() + undone_packet = str(undone_packet) + self.assertEqual(undo.packet, undone_packet) + + # check that all the undo messages are in the database and are NOT undone + for message in undoes: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
AND global_time = ?", + (community.database_id, node.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_node_undo_other(self): + """ + SELF gives NODE1 permission to undo, NODE2 generates a few messages and then NODE1 undoes + them. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node1 = DebugNode(community) + node1.init_socket() + node1.init_my_member() + + node2 = DebugNode(community) + node2.init_socket() + node2.init_my_member() + + # SELF grants undo permission to NODE1 + community.create_dispersy_authorize([(node1.my_member, community.get_meta_message(u"full-sync-text"), u"undo")]) + + # NODE2 creates messages + messages = [node2.create_full_sync_text("Should undo @%d" % global_time, global_time) for global_time in xrange(10, 20)] + node2.give_messages(messages) + + # check that they are in the database and are NOT undone + for message in messages: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node2.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # NODE1 undoes all messages + sequence_number = 1 + undoes = [node1.create_dispersy_undo_other(message, message.distribution.global_time + 100, sequence_number + i) for i, message in enumerate(messages)] + node1.give_messages(undoes) + + # check that they are in the database and ARE undone + for undo, message in zip(undoes, messages): + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node2.my_member.database_id, message.distribution.global_time))) + self.assertEqual(len(undone), 1) + undone_packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE id = ?", (undone[0][0],)).next() + undone_packet = str(undone_packet) + self.assertEqual(undo.packet.encode("HEX"), undone_packet.encode("HEX")) + + # check that all the undo messages are in the database and are NOT undone + for message in undoes: + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, node1.my_member.database_id, message.distribution.global_time))) + self.assertEqual(undone, [(0,)]) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_self_malicious_undo(self): + """ + SELF generated a message and then undoes it twice. The dispersy core should ensure that + (given that the message was processed, hence update=True) that the second undo is refused + and the first undo should be returned instead. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + # create message + message = community.create_full_sync_text("Should undo") + + # undo once + undo1 = community.create_dispersy_undo(message) + self.assertIsInstance(undo1, Message.Implementation) + + # undo twice. 
+        # returned
+        undo2 = community.create_dispersy_undo(message)
+        self.assertEqual(undo1.packet, undo2.packet)
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
+
+    @call_on_dispersy_thread
+    def test_node_malicious_undo(self):
+        """
+        SELF gives NODE permission to undo, NODE generates a message and then undoes it twice.  The
+        second undo can cause nodes to keep syncing packets that other nodes will keep dropping
+        (because you can only drop a message once, but the two messages are binary unique).
+
+        Sending two undoes for the same message is considered malicious behavior, resulting in:
+        1. the offending node must be put on the blacklist
+        2. the proof of malicious behavior must be forwarded to other nodes
+        """
+        community = DebugCommunity.create_community(self._dispersy, self._my_member)
+
+        node = DebugNode(community)
+        node.init_socket()
+        node.init_my_member()
+
+        # SELF grants undo permission to NODE
+        community.create_dispersy_authorize([(node.my_member, community.get_meta_message(u"full-sync-text"), u"undo")])
+
+        # create message
+        global_time = 10
+        message = node.create_full_sync_text("Should undo @%d" % global_time, global_time)
+        node.give_message(message)
+
+        # undo once
+        global_time = 20
+        sequence_number = 1
+        undo1 = node.create_dispersy_undo_own(message, global_time, sequence_number)
+        node.give_message(undo1)
+
+        # undo twice
+        global_time = 30
+        sequence_number = 2
+        undo2 = node.create_dispersy_undo_own(message, global_time, sequence_number)
+        node.give_message(undo2)
+        yield 0.1
+
+        # check that the member is declared malicious
+        self.assertTrue(self._dispersy.get_member(node.my_member.public_key).must_blacklist)
+
+        # all messages for the malicious member must be removed
+        packets = list(self._dispersy.database.execute(u"SELECT packet FROM sync WHERE community = ? AND member = ?",
+                                                       (community.database_id, node.my_member.database_id)))
+        self.assertEqual(packets, [])
+
+        node2 = DebugNode(community)
+        node2.init_socket()
+        node2.init_my_member()
+
+        # ensure we don't obtain the messages from the socket cache
+        yield 0.1
+        node2.drop_packets()
+
+        # propagate a message from the malicious member
+        logger.debug("giving faulty message %s", message)
+        node2.give_message(message)
+        yield 0.1
+
+        # we should receive proof that NODE is malicious
+        malicious_packets = [packet for _, packet in node2.receive_packets()]
+        self.assertEqual(sorted(malicious_packets), sorted([undo1.packet, undo2.packet]))
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
+
+    @call_on_dispersy_thread
+    def test_node_non_malicious_undo(self):
+        """
+        SELF gives NODE permission to undo, NODE generates a message, SELF generates an undo, NODE
+        generates an undo.  The second undo should NOT cause NODE or SELF to be marked as malicious.
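+
+        Rough timeline (the global times below are the ones used in this test; SELF's own
+        undo is created at whatever global time the community has reached):
+
+            @10  NODE creates full-sync-text
+            @..  SELF undoes it (dispersy-undo)
+            @30  NODE undoes it (dispersy-undo-own)
+
+        The two undoes come from different members and target the same message, so this is
+        duplicated work rather than conflicting behavior; neither member may be blacklisted.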
+ """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # SELF grants undo permission to NODE + community.create_dispersy_authorize([(node.my_member, community.get_meta_message(u"full-sync-text"), u"undo")]) + + # create message + global_time = 10 + message = node.create_full_sync_text("Should undo @%d" % global_time, global_time) + node.give_message(message) + + # SELF undoes + community.create_dispersy_undo(message) + + # NODE undoes + global_time = 30 + sequence_number = 1 + undo = node.create_dispersy_undo_own(message, global_time, sequence_number) + node.give_message(undo) + + # check that they are in the database and ARE undone + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", + (community.database_id, message.authentication.member.database_id, message.distribution.global_time))) + self.assertEqual(len(undone), 1) + undone_packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE id = ?", (undone[0][0],)).next() + undone_packet = str(undone_packet) + self.assertEqual(undo.packet, undone_packet) + + # check that the member is not declared malicious + self.assertFalse(self._dispersy.get_member(node.my_member.public_key).must_blacklist) + + # cleanup + community.create_dispersy_destroy_community(u"hard-kill") + self._dispersy.get_community(community.cid).unload_community() + + @call_on_dispersy_thread + def test_missing_message(self): + """ + SELF gives NODE permission to undo, NODE generates a few messages without sending them to + SELF. Following, NODE undoes the messages and sends the undo messages to SELF. SELF must + now use a dispersy-missing-message to request the messages that are about to be undone. The + messages need to be processed and subsequently undone. + """ + community = DebugCommunity.create_community(self._dispersy, self._my_member) + + node = DebugNode(community) + node.init_socket() + node.init_my_member() + + # SELF grants undo permission to NODE + community.create_dispersy_authorize([(node.my_member, community.get_meta_message(u"full-sync-text"), u"undo")]) + + # create messages + messages = [node.create_full_sync_text("Should undo @%d" % global_time, global_time) for global_time in xrange(10, 20)] + + # undo all messages + sequence_number = 1 + undoes = [node.create_dispersy_undo_own(message, message.distribution.global_time + 100, i + sequence_number) for i, message in enumerate(messages)] + node.give_messages(undoes) + + # receive the dispersy-missing-message messages + global_times = [message.distribution.global_time for message in messages] + global_time_requests = [] + for _ in xrange(len(messages)): + _, message = node.receive_message(message_names=[u"dispersy-missing-message"]) + self.assertEqual(message.payload.member.public_key, node.my_member.public_key) + global_time_requests.extend(message.payload.global_times) + self.assertEqual(sorted(global_times), sorted(global_time_requests)) + + # give all 'delayed' messages + node.give_messages(messages) + + yield sum(community.get_meta_message(name).batch.max_window for name in [u"full-sync-text", u"dispersy-undo-own", u"dispersy-undo-other"]) + yield 2.0 + + # check that they are in the database and ARE undone + for undo, message in zip(undoes, messages): + undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? 
+                                                          (community.database_id, node.my_member.database_id, message.distribution.global_time)))
+            self.assertEqual(len(undone), 1)
+            undone_packet, = self._dispersy.database.execute(u"SELECT packet FROM sync WHERE id = ?", (undone[0][0],)).next()
+            undone_packet = str(undone_packet)
+            self.assertEqual(undo.packet, undone_packet)
+
+        # check that all the undo messages are in the database and are NOT undone
+        for message in undoes:
+            undone = list(self._dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?",
+                                                          (community.database_id, node.my_member.database_id, message.distribution.global_time)))
+            self.assertEqual(undone, [(0,)])
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
+
+    @call_on_dispersy_thread
+    def test_revoke_simple(self):
+        """
+        SELF gives NODE1 permission to undo, SELF revokes this permission.
+        """
+        community = DebugCommunity.create_community(self._dispersy, self._my_member)
+
+        node1 = DebugNode(community)
+        node1.init_socket()
+        node1.init_my_member()
+
+        # SELF grants undo permission to NODE1
+        community.create_dispersy_authorize([(node1.my_member, community.get_meta_message(u"full-sync-text"), u"undo")])
+
+        # SELF revokes undo permission from NODE1
+        community.create_dispersy_revoke([(node1.my_member, community.get_meta_message(u"full-sync-text"), u"undo")])
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
+
+    @call_on_dispersy_thread
+    def test_revoke_causing_undo(self):
+        """
+        SELF gives NODE1 permission to undo, SELF creates a message, NODE1 undoes the message, SELF
+        revokes the undo permission AFTER the message was undone -> the message is not re-done.
+        """
+        community = DebugCommunity.create_community(self._dispersy, self._my_member)
+
+        node1 = DebugNode(community)
+        node1.init_socket()
+        node1.init_my_member()
+
+        # SELF grants undo permission to NODE1
+        community.create_dispersy_authorize([(node1.my_member, community.get_meta_message(u"full-sync-text"), u"undo")])
+
+        # SELF creates a message
+        message = community.create_full_sync_text("will be undone")
+        self.assert_message_stored(community, community.my_member, message.distribution.global_time)
+
+        # NODE1 undoes the message
+        sequence_number = 1
+        node1.give_message(node1.create_dispersy_undo_other(message, message.distribution.global_time + 1, sequence_number))
+        self.assert_message_stored(community, community.my_member, message.distribution.global_time, undone="undone")
+
+        # SELF revokes undo permission from NODE1
+        community.create_dispersy_revoke([(node1.my_member, community.get_meta_message(u"full-sync-text"), u"undo")])
+        self.assert_message_stored(community, community.my_member, message.distribution.global_time, undone="undone")
+
+        # cleanup
+        community.create_dispersy_destroy_community(u"hard-kill")
+        self._dispersy.get_community(community.cid).unload_community()
+
+    def assert_message_stored(self, community, member, global_time, undone="done"):
+        self.assertIsInstance(undone, str)
+        self.assertIn(undone, ("done", "undone"))
+
+        try:
+            actual_undone, = community.dispersy.database.execute(u"SELECT undone FROM sync WHERE community = ? AND member = ? AND global_time = ?", (community.database_id, member.database_id, global_time)).next()
AND global_time = ?", (community.database_id, member.database_id, global_time)).next() + except StopIteration: + self.fail("Message must be stored in the database") + + self.assertIsInstance(actual_undone, int) + self.assertGreaterEqual(actual_undone, 0) + self.assertTrue((undone == "done" and actual_undone == 0) or undone == "undone" and 0 < actual_undone,) diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_unittest.py tribler-6.2.0/Tribler/dispersy/tests/test_unittest.py --- tribler-6.2.0/Tribler/dispersy/tests/test_unittest.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_unittest.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,117 @@ +import logging +logger = logging.getLogger(__name__) + +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + +def failure_to_success(exception_class, exception_message): + def helper1(func): + def helper2(*args, **kargs): + try: + func(*args, **kargs) + except Exception as exception: + if isinstance(exception, exception_class) and exception.message == exception_message: + return + + # not one of the pre-programmed exceptions, test should indicate failure + raise + + helper2.__name__ = func.__name__ + return helper2 + return helper1 + +class TestUnittest(DispersyTestFunc): + """ + Tests ensuring that an exception anywhere in _dispersy.callback is propagated to the unittest framework. + """ + + @failure_to_success(AssertionError, "This must fail") + @call_on_dispersy_thread + def test_assert(self): + " Trivial assert. " + self.assertTrue(False, "This must fail") + + @failure_to_success(KeyError, "This must fail") + @call_on_dispersy_thread + def test_KeyError(self): + " Trivial KeyError. " + raise KeyError("This must fail") + + @failure_to_success(AssertionError, "This must fail") + @call_on_dispersy_thread + def test_assert_callback(self): + " Assert within a registered task. " + def task(): + self.assertTrue(False, "This must fail") + self._dispersy.callback.register(task) + yield 10.0 + + @failure_to_success(KeyError, "This must fail") + @call_on_dispersy_thread + def test_KeyError_callback(self): + " KeyError within a registered task. " + def task(): + raise KeyError("This must fail") + self._dispersy.callback.register(task) + yield 10.0 + + @failure_to_success(AssertionError, "This must fail") + @call_on_dispersy_thread + def test_assert_callback_generator(self): + " Assert within a registered generator task. " + def task(): + yield 0.1 + yield 0.1 + self.assertTrue(False, "This must fail") + self._dispersy.callback.register(task) + yield 10.0 + + @failure_to_success(KeyError, "This must fail") + @call_on_dispersy_thread + def test_KeyError_callback_generator(self): + " KeyError within a registered generator task. " + def task(): + yield 0.1 + yield 0.1 + raise KeyError("This must fail") + self._dispersy.callback.register(task) + yield 10.0 + + @failure_to_success(AssertionError, "This must fail") + @call_on_dispersy_thread + def test_assert_callback_call(self): + " Assert within a 'call' task. " + def task(): + self.assertTrue(False, "This must fail") + self._dispersy.callback.call(task) + yield 10.0 + + @failure_to_success(KeyError, "This must fail") + @call_on_dispersy_thread + def test_KeyError_callback_call(self): + " KeyError within a 'call' task. 
" + def task(): + raise KeyError("This must fail") + self._dispersy.callback.call(task) + yield 10.0 + + @failure_to_success(AssertionError, "This must fail") + @call_on_dispersy_thread + def test_assert_callback_call_generator(self): + " Assert within a 'call' generator task. " + def task(): + yield 0.1 + yield 0.1 + self.assertTrue(False, "This must fail") + self._dispersy.callback.call(task) + yield 10.0 + + @failure_to_success(KeyError, "This must fail") + @call_on_dispersy_thread + def test_KeyError_callback_call_generator(self): + " KeyError within a 'call' generator task. " + def task(): + yield 0.1 + yield 0.1 + raise KeyError("This must fail") + self._dispersy.callback.call(task) + yield 10.0 diff -Nru tribler-6.2.0/Tribler/dispersy/tests/test_walker.py tribler-6.2.0/Tribler/dispersy/tests/test_walker.py --- tribler-6.2.0/Tribler/dispersy/tests/test_walker.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tests/test_walker.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,97 @@ +import logging +logger = logging.getLogger(__name__) + +from .debugcommunity.community import DebugCommunity +from .debugcommunity.node import DebugNode +from .dispersytestclass import DispersyTestFunc, call_on_dispersy_thread + +class TestWalker(DispersyTestFunc): + + def test_one_walker(self): return self.check_walker([""]) + def test_two_walker(self): return self.check_walker(["", ""]) + def test_many_walker(self): return self.check_walker([""] * 22) + def test_one_t_walker(self): return self.check_walker(["t"]) + def test_two_t_walker(self): return self.check_walker(["t", "t"]) + def test_many_t_walker(self): return self.check_walker(["t"] * 22) + def test_two_mixed_walker_a(self): return self.check_walker(["", "t"]) + def test_many_mixed_walker_a(self): return self.check_walker(["", "t"] * 11) + def test_two_mixed_walker_b(self): return self.check_walker(["t", ""]) + def test_many_mixed_walker_b(self): return self.check_walker(["t", ""] * 11) + + def create_nodes(self, community, all_flags): + assert isinstance(all_flags, list) + assert all(isinstance(flags, str) for flags in all_flags) + def generator(): + for flags in all_flags: + node = DebugNode(community) + node.init_socket("t" in flags) + node.init_my_member(candidate=False) + yield node + return list(generator()) + + @call_on_dispersy_thread + def check_walker(self, all_flags): + """ + All nodes will perform a introduction request to SELF in one batch. 
+ """ + logger.debug("") + assert isinstance(all_flags, list) + assert all(isinstance(flags, str) for flags in all_flags) + + community = DebugCommunity.create_community(self._dispersy, self._my_member) + nodes = self.create_nodes(community, all_flags) + + # create all requests + requests = [node.create_dispersy_introduction_request(community.my_candidate, + node.lan_address, + node.wan_address, + True, + u"unknown", + None, + identifier, + 42) + for identifier, node + in enumerate(nodes, 1)] + + # give all requests in one batch to dispersy + self._dispersy.on_incoming_packets([(node.candidate, request.packet) + for node, request + in zip(nodes, requests)]) + + is_tunnelled_map = dict([(node.lan_address, node.tunnel) for node in nodes]) + num_tunnelled_nodes = len([node for node in nodes if node.tunnel]) + num_non_tunnelled_nodes = len([node for node in nodes if not node.tunnel]) + + for node in nodes: + _, response = node.receive_message() + logger.debug("SELF responded to %s's request with LAN:%s WAN:%s", node.candidate, response.payload.lan_introduction_address, response.payload.wan_introduction_address) + + if node.tunnel: + # NODE is behind a tunnel, SELF can introduce tunnelled and non-tunnelled nodes to NODE. This is + # because both the tunnelled (SwiftEndpoint) and non-tunnelled (StandaloneEndpoint) nodes can handle + # incoming messages with the FFFFFFFF prefix) + if num_tunnelled_nodes + num_non_tunnelled_nodes == 1: + self.assertEquals(response.payload.lan_introduction_address, ("0.0.0.0", 0)) + self.assertEquals(response.payload.wan_introduction_address, ("0.0.0.0", 0)) + + if num_tunnelled_nodes + num_non_tunnelled_nodes > 1: + self.assertNotEquals(response.payload.lan_introduction_address, ("0.0.0.0", 0)) + self.assertNotEquals(response.payload.wan_introduction_address, ("0.0.0.0", 0)) + + # it must be any known node + self.assertIn(response.payload.lan_introduction_address, is_tunnelled_map) + + else: + # NODE is -not- behind a tunnel, SELF can only introduce non-tunnelled nodes to NODE. This is because + # only non-tunnelled (StandaloneEndpoint) nodes can handle incoming messages -without- the FFFFFFFF + # prefix. + if num_non_tunnelled_nodes == 1: + self.assertEquals(response.payload.lan_introduction_address, ("0.0.0.0", 0)) + self.assertEquals(response.payload.wan_introduction_address, ("0.0.0.0", 0)) + + if num_non_tunnelled_nodes > 1: + self.assertNotEquals(response.payload.lan_introduction_address, ("0.0.0.0", 0)) + self.assertNotEquals(response.payload.wan_introduction_address, ("0.0.0.0", 0)) + + # it may only be non-tunnelled + self.assertFalse(is_tunnelled_map[response.payload.lan_introduction_address]) diff -Nru tribler-6.2.0/Tribler/dispersy/timeline.py tribler-6.2.0/Tribler/dispersy/timeline.py --- tribler-6.2.0/Tribler/dispersy/timeline.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/timeline.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,431 @@ +""" +The Timeline is an important part of Dispersy. The Timeline can be +queried as to who had what actions at some point in time. 
+""" + +import logging +logger = logging.getLogger(__name__) + +from itertools import count, groupby + +from .authentication import MemberAuthentication, DoubleMemberAuthentication +from .resolution import PublicResolution, LinearResolution, DynamicResolution + + +class Timeline(object): + + def __init__(self, community): + if __debug__: + from .community import Community + assert isinstance(community, Community) + + # the community that this timeline is keeping track off + self._community = community + + # _members contains the permission grants and revokes per member + # Member / [(global_time, {u"permission^message-name":(True/False, [Message.Implementation])})] + self._members = {} + + # _policies contains the policies that the community is currently using (dynamic settings) + # [(global_time, {u"resolution^message-name":(resolution-policy, [Message.Implementation])})] + self._policies = [] + + if __debug__: + def printer(self): + for global_time, dic in self._policies: + logger.debug("policy @%d", global_time) + for key, (policy, proofs) in dic.iteritems(): + logger.debug("policy %50s %s based on %d proofs", key, policy, len(proofs)) + + for member, lst in self._members.iteritems(): + logger.debug("member %d %s", member.database_id, member.mid.encode("HEX")) + for global_time, dic in lst: + logger.debug("member %d @%d", member.database_id, global_time) + for key, (allowed, proofs) in sorted(dic.iteritems()): + if allowed: + assert all(proof.name == u"dispersy-authorize" for proof in proofs) + logger.debug("member %d %50s granted by %s", member.database_id, key, ", ".join("%d@%d" % (proof.authentication.member.database_id, proof.distribution.global_time) for proof in proofs)) + else: + assert all(proof.name == u"dispersy-revoke" for proof in proofs) + logger.debug("member %d %50s revoked by %s", member.database_id, key, ", ".join("%d@%d" % (proof.authentication.member.database_id, proof.distribution.global_time) for proof in proofs)) + + def check(self, message, permission=u"permit"): + """ + Check if message is allowed. + + Returns an (allowed, proofs) tuple where allowed is either True or False and proofs is a + list containing zero or more Message.Implementation instances that grant or revoke + permissions. + """ + if __debug__: + from .message import Message + assert isinstance(message, Message.Implementation), message + assert isinstance(message.authentication, (MemberAuthentication.Implementation, DoubleMemberAuthentication.Implementation)), message.authentication + assert isinstance(permission, unicode) + assert permission in (u"permit", u"authorize", u"revoke", u"undo") + if isinstance(message.authentication, MemberAuthentication.Implementation): + # MemberAuthentication + + if message.name == u"dispersy-authorize" or message.name == u"dispersy-revoke": + assert isinstance(message.resolution, PublicResolution.Implementation), message + if __debug__: + logger.debug("collecting proof for container message %s", message.name) + logger.debug("master-member: %d; my-member: %d", message.community.master_member.database_id, message.community.my_member.database_id) + self.printer() + + # if one or more of the contained permission_triplets are allowed, we will allow the + # entire message. when the message is processed only the permission_triplets that + # are still valid will be used + all_allowed = [] + all_proofs = set() + + # question: is message.authentication.member allowed to authorize or revoke one or + # more of the contained permission triplets? 
+
+                # proofs for the permission triplets in the payload
+                key = lambda member_sub_message__: member_sub_message__[1]
+                for sub_message, iterator in groupby(message.payload.permission_triplets, key=key):
+                    permission_pairs = [(sub_message, sub_permission) for _, _, sub_permission in iterator]
+                    allowed, proofs = self._check(message.authentication.member, message.distribution.global_time, sub_message.resolution, permission_pairs)
+                    all_allowed.append(allowed)
+                    all_proofs.update(proofs)
+
+                if __debug__:
+                    logger.debug("are one or more permission triplets allowed? %s. based on %d proofs", any(all_allowed), len(all_proofs))
+
+                return any(all_allowed), [proof for proof in all_proofs]
+
+            elif message.name == u"dispersy-undo-other":
+                assert isinstance(message.resolution, LinearResolution.Implementation), message
+                if __debug__:
+                    logger.debug("collecting proof for container message dispersy-undo-other")
+                    logger.debug("master-member: %d; my-member: %d", message.community.master_member.database_id, message.community.my_member.database_id)
+                    logger.debug("dispersy-undo-other created by %d@%d", message.authentication.member.database_id, message.distribution.global_time)
+                    logger.debug("   undoing message by %d@%d (%s, %s)", message.payload.member.database_id, message.payload.global_time, message.payload.packet.name, message.payload.packet.resolution)
+                    self.printer()
+
+                return self._check(message.authentication.member, message.distribution.global_time, message.resolution, [(message.payload.packet.meta, u"undo")])
+
+            else:
+                return self._check(message.authentication.member, message.distribution.global_time, message.resolution, [(message.meta, permission)])
+        else:
+            # DoubleMemberAuthentication
+            all_proofs = set()
+            for member in message.authentication.members:
+                allowed, proofs = self._check(member, message.distribution.global_time, message.resolution, [(message.meta, permission)])
+                all_proofs.update(proofs)
+                if not allowed:
+                    return (False, [proof for proof in all_proofs])
+            return (True, [proof for proof in all_proofs])
+
+    def allowed(self, meta, global_time=0, permission=u"permit"):
+        """
+        Check if we are allowed to create a message.
+        """
+        if __debug__:
+            from .message import Message
+            assert isinstance(meta, Message)
+            assert isinstance(global_time, (int, long))
+            assert global_time >= 0
+            assert isinstance(permission, unicode)
+            assert permission in (u"permit", u"authorize", u"revoke", u"undo")
+        return self._check(self._community.my_member, global_time if global_time else self._community.global_time, meta.resolution, [(meta, permission)])
+
+    def _check(self, member, global_time, resolution, permission_pairs):
+        """
+        Check if MEMBER has all of the permission pairs in PERMISSION_PAIRS at GLOBAL_TIME.
+
+        Returns an (allowed, proofs) tuple where allowed is either True or False and proofs is a list
+        containing the Message.Implementation instances that grant or revoke the permissions.
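+
+        For example (the names are illustrative), testing whether MEMBER may both create and
+        undo a message at global time 42:
+
+            allowed, proofs = self._check(member, 42, meta.resolution,
+                                          [(meta, u"permit"), (meta, u"undo")])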
+ """ + if __debug__: + from .member import Member + from .message import Message + assert isinstance(member, Member) + assert isinstance(global_time, (int, long)) + assert global_time > 0 + assert isinstance(permission_pairs, list) + assert len(permission_pairs) > 0 + for pair in permission_pairs: + assert isinstance(pair, tuple) + assert len(pair) == 2 + assert isinstance(pair[0], Message), "Requires meta message" + assert isinstance(pair[1], unicode) + assert pair[1] in (u"permit", u"authorize", u"revoke", u"undo") + assert isinstance(resolution, (PublicResolution.Implementation, LinearResolution.Implementation, DynamicResolution.Implementation, PublicResolution, LinearResolution, DynamicResolution)), resolution + + # TODO: we can make this more efficient by changing the loop a bit. make a shallow copy of + # the permission_pairs and remove one after another as they succeed. key is to loop though + # the self._members[member] once (currently looping over the timeline for every item in + # permission_pairs). + + all_proofs = [] + + for message, permission in permission_pairs: + # the master member can do anything + if member == self._community.master_member: + logger.debug("ACCEPT time:%d user:%d -> %s^%s (master member)", global_time, member.database_id, permission, message.name) + + else: + # dynamically set the resolution policy + if isinstance(resolution, DynamicResolution): + resolution, proofs = self.get_resolution_policy(message, global_time) + assert isinstance(resolution, (PublicResolution, LinearResolution)) + all_proofs.extend(proofs) + + elif isinstance(resolution, DynamicResolution.Implementation): + local_resolution, proofs = self.get_resolution_policy(message, global_time) + assert isinstance(local_resolution, (PublicResolution, LinearResolution)) + all_proofs.extend(proofs) + + if not resolution.policy.meta == local_resolution: + logger.debug("FAIL time:%d user:%d (conflicting resolution policy %s %s)", global_time, member.database_id, resolution.policy.meta, local_resolution) + return (False, all_proofs) + + resolution = resolution.policy + logger.debug("APPLY time:%d resolution^%s -> %s", global_time, message.name, resolution.__class__.__name__) + + # everyone is allowed PublicResolution + if isinstance(resolution, (PublicResolution, PublicResolution.Implementation)): + logger.debug("ACCEPT time:%d user:%d -> %s^%s (public resolution)", global_time, member.database_id, permission, message.name) + + # allowed LinearResolution is stored in Timeline + elif isinstance(resolution, (LinearResolution, LinearResolution.Implementation)): + key = permission + "^" + message.name + + if member in self._members: + iterator = reversed(self._members[member]) + try: + # go backwards while time > global_time + while True: + time, permissions = iterator.next() + if time <= global_time: + break + + # check permissions and continue backwards in time + while True: + if key in permissions: + assert isinstance(permissions[key], tuple) + assert len(permissions[key]) == 2 + assert isinstance(permissions[key][0], bool) + assert isinstance(permissions[key][1], list) + assert len(permissions[key][1]) > 0 + assert all(isinstance(x, Message.Implementation) for x in permissions[key][1]) + allowed, proofs = permissions[key] + + if allowed: + logger.debug("ACCEPT time:%d user:%d -> %s (authorized)", global_time, member.database_id, key) + all_proofs.extend(proofs) + break + else: + logger.warning("DENIED time:%d user:%d -> %s (revoked)", global_time, member.database_id, key) + return (False, [proofs]) 
+ + time, permissions = iterator.next() + + except StopIteration: + logger.warning("FAIL time:%d user:%d -> %s (not authorized)", global_time, member.database_id, key) + return (False, []) + else: + logger.warning("FAIL time:%d user:%d -> %s (no authorization)", global_time, member.database_id, key) + return (False, []) + + # accept with proof + assert len(all_proofs) > 0 + + else: + raise NotImplementedError("Unknown Resolution") + + return (True, all_proofs) + + def authorize(self, author, global_time, permission_triplets, proof): + if __debug__: + from .member import Member + from .message import Message + assert isinstance(author, Member) + assert isinstance(global_time, (int, long)) + assert global_time > 0 + assert isinstance(permission_triplets, list) + assert len(permission_triplets) > 0 + for triplet in permission_triplets: + assert isinstance(triplet, tuple) + assert len(triplet) == 3 + assert isinstance(triplet[0], Member) + assert isinstance(triplet[1], Message) + assert isinstance(triplet[1].resolution, (PublicResolution, LinearResolution, DynamicResolution)) + assert isinstance(triplet[1].authentication, (MemberAuthentication, DoubleMemberAuthentication)) + assert isinstance(triplet[2], unicode) + assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo") + assert isinstance(proof, Message.Implementation) + assert proof.name in (u"dispersy-authorize", u"dispersy-revoke", u"dispersy-undo-own", u"dispersy-undo-other") + + # TODO: we must remove duplicates in the below permission_pairs list + # check that AUTHOR is allowed to perform these authorizations + authorize_allowed, authorize_proofs = self._check(author, global_time, LinearResolution(), [(message, u"authorize") for _, message, __ in permission_triplets]) + if not authorize_allowed: + logger.debug("the author is NOT allowed to perform authorisations for one or more of the given permission triplets") + logger.debug("-- the author is... the master member? %s; my member? %s", author == self._community.master_member, author == self._community.my_member) + return (False, authorize_proofs) + + for member, message, permission in permission_triplets: + if isinstance(message.resolution, (PublicResolution, LinearResolution, DynamicResolution)): + if not member in self._members: + self._members[member] = [] + + key = permission + "^" + message.name + + for index, (time, permissions) in zip(count(0), self._members[member]): + # extend when time == global_time + if time == global_time: + if key in permissions: + allowed, proofs = permissions[key] + if allowed: + # multiple proofs for the same permissions at this exact time + logger.debug("AUTHORISE time:%d user:%d -> %s (extending duplicate)", global_time, member.database_id, key) + proofs.append(proof) + + else: + # TODO: when two authorise contradict each other on the same global + # time, the ordering of the packet will decide the outcome. we need + # those packets! [SELECT packet FROM sync WHERE ...] + raise NotImplementedError("Requires ordering by packet to resolve permission conflict") + + else: + # no earlier proof on this global time + logger.debug("AUTHORISE time:%d user:%d -> %s (extending)", global_time, member.database_id, key) + permissions[key] = (True, [proof]) + break + + # insert when time > global_time + elif time > global_time: + # TODO: ensure that INDEX is correct! 
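+                        # (INDEX is the position of the first entry whose time exceeds GLOBAL_TIME,
+                        #  so inserting at INDEX keeps self._members[member] ordered by time)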
+ logger.debug("AUTHORISE time:%d user:%d -> %s (inserting)", global_time, member.database_id, key) + self._members[member].insert(index, (global_time, {key: (True, [proof])})) + break + + # otherwise: go forward while time < global_time + + else: + # we have reached the end without a BREAK: append the permission + logger.debug("AUTHORISE time:%d user:%d -> %s (appending)", global_time, member.database_id, key) + self._members[member].append((global_time, {key: (True, [proof])})) + + else: + raise NotImplementedError(message.resolution) + + return (True, authorize_proofs) + + def revoke(self, author, global_time, permission_triplets, proof): + if __debug__: + from .member import Member + from .message import Message + assert isinstance(author, Member) + assert isinstance(global_time, (int, long)) + assert global_time > 0 + assert isinstance(permission_triplets, list) + assert len(permission_triplets) > 0 + for triplet in permission_triplets: + assert isinstance(triplet, tuple) + assert len(triplet) == 3 + assert isinstance(triplet[0], Member) + assert isinstance(triplet[1], Message) + assert isinstance(triplet[1].resolution, (PublicResolution, LinearResolution, DynamicResolution)) + assert isinstance(triplet[1].authentication, (MemberAuthentication, DoubleMemberAuthentication)) + assert isinstance(triplet[2], unicode) + assert triplet[2] in (u"permit", u"authorize", u"revoke", u"undo") + assert isinstance(proof, Message.Implementation) + assert proof.name in (u"dispersy-authorize", u"dispersy-revoke", u"dispersy-undo-own", u"dispersy-undo-other") + + # TODO: we must remove duplicates in the below permission_pairs list + # check that AUTHOR is allowed to perform these authorizations + revoke_allowed, revoke_proofs = self._check(author, global_time, LinearResolution(), [(message, u"revoke") for _, message, __ in permission_triplets]) + if not revoke_allowed: + logger.debug("the author is NOT allowed to perform authorizations for one or more of the given permission triplets") + logger.debug("-- the author is... the master member? %s; my member? %s", author == self._community.master_member, author == self._community.my_member) + return (False, revoke_proofs) + + for member, message, permission in permission_triplets: + if isinstance(message.resolution, (PublicResolution, LinearResolution, DynamicResolution)): + if not member in self._members: + self._members[member] = [] + + key = permission + "^" + message.name + + for index, (time, permissions) in zip(count(0), self._members[member]): + # extend when time == global_time + if time == global_time: + if key in permissions: + allowed, proofs = permissions[key] + if allowed: + # TODO: when two authorize contradict each other on the same global + # time, the ordering of the packet will decide the outcome. we need + # those packets! [SELECT packet FROM sync WHERE ...] + raise NotImplementedError("Requires ordering by packet to resolve permission conflict") + + else: + # multiple proofs for the same permissions at this exact time + logger.debug("REVOKE time:%d user:%d -> %s (extending duplicate)", global_time, member.database_id, key) + proofs.append(proof) + + else: + # no earlier proof on this global time + logger.debug("REVOKE time:%d user:%d -> %s (extending)", global_time, member.database_id, key) + permissions[key] = (False, [proof]) + break + + # insert when time > global_time + elif time > global_time: + # TODO: ensure that INDEX is correct! 
+ logger.debug("REVOKE time:%d user:%d -> %s (inserting)", global_time, member.database_id, key) + self._members[member].insert(index, (global_time, {key: (False, [proof])})) + break + + # otherwise: go forward while time < global_time + + else: + # we have reached the end without a BREAK: append the permission + logger.debug("REVOKE time:%d user:%d -> %s (appending)", global_time, member.database_id, key) + self._members[member].append((global_time, {key: (False, [proof])})) + + else: + raise NotImplementedError(message.resolution) + + return (True, revoke_proofs) + + def get_resolution_policy(self, message, global_time): + """ + Returns the resolution policy and associated proof that is used for MESSAGE at time + GLOBAL_TIME. + """ + if __debug__: + from .message import Message + assert isinstance(message, Message) + assert isinstance(global_time, (int, long)) + + key = u"resolution^" + message.name + for policy_time, policies in reversed(self._policies): + if policy_time < global_time and key in policies: + logger.debug("using %s for time %d (configured at %s)", policies[key][0].__class__.__name__, global_time, policy_time) + return policies[key] + + logger.debug("using %s for time %d (default)", message.resolution.default.__class__.__name__, global_time) + return message.resolution.default, [] + + def change_resolution_policy(self, message, global_time, policy, proof): + if __debug__: + from .message import Message + assert isinstance(message, Message) + assert isinstance(global_time, (int, long)) + assert isinstance(policy, (PublicResolution, LinearResolution)) + assert isinstance(proof, Message.Implementation) + + for policy_time, policies in reversed(self._policies): + if policy_time == global_time: + break + else: + policies = {} + self._policies.append((global_time, policies)) + self._policies.sort() + + # TODO it is possible that different members set different policies at the same time + policies[u"resolution^" + message.name] = (policy, [proof]) diff -Nru tribler-6.2.0/Tribler/dispersy/tool/callbackscript.py tribler-6.2.0/Tribler/dispersy/tool/callbackscript.py --- tribler-6.2.0/Tribler/dispersy/tool/callbackscript.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/callbackscript.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,107 @@ +from ..script import ScriptBase + +class DispersyCallbackScript(ScriptBase): + def run(self): + self.add_testcase(self.previous_performance_profile) + self.add_testcase(self.register) + self.add_testcase(self.register_delay) + self.add_testcase(self.generator) + + def previous_performance_profile(self): + """ +Run on MASAQ Dell laptop 23/04/12 +> python -O Tribler/Main/dispersy.py --enable-dispersy-script --script dispersy-callback --yappi + +YAPPI: 1x 2.953s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/callback.py._loop:506 +YAPPI: 210020x 0.964s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/callback.py.register:212 +YAPPI: 520985x 0.390s /usr/lib/python2.7/threading.py.isSet:380 +YAPPI: 4x 0.104s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_delay:81 +YAPPI: 3x 0.100s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register:68 +YAPPI: 110000x 0.092s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.generator_func:95 +YAPPI: 100000x 0.083s 
/home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_delay_func:82 +YAPPI: 100000x 0.082s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.register_func:69 +YAPPI: 867x 0.024s /usr/lib/python2.7/threading.py.wait:235 +YAPPI: 5x 0.012s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/tool/callbackscript.py.generator:94 +YAPPI: 867x 0.007s /usr/lib/python2.7/threading.py.wait:400 +YAPPI: 379x 0.005s /home/boudewijn/local/lib/python2.7/site-packages/yappi.py.__init__:50 +YAPPI: 867x 0.003s /usr/lib/python2.7/threading.py._acquire_restore:223 +YAPPI: 1x 0.003s Tribler/Main/dispersy.py.start:106 +YAPPI: 891x 0.002s /usr/lib/python2.7/threading.py._is_owned:226 +YAPPI: 867x 0.002s /usr/lib/python2.7/threading.py._release_save:220 +YAPPI: 353x 0.002s /home/boudewijn/local/lib/python2.7/site-packages/yappi.py.func_enumerator:72 +YAPPI: 48x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/conversion.py.define_meta_message:223 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/timeline.py.Timeline:14 +YAPPI: 8x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dprint.py.dprint:595 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/script.py.:2 +YAPPI: 2x 0.001s /usr/lib/python2.7/sre_parse.py._parse:379 +YAPPI: 8x 0.001s /usr/lib/python2.7/traceback.py.extract_stack:280 +YAPPI: 29x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/message.py.__init__:499 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/RawServer.py.listen_forever:129 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/community.py.:9 +YAPPI: 194x 0.001s /usr/lib/python2.7/sre_parse.py.__next:182 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/debugcommunity.py.:1 +YAPPI: 1x 0.001s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/lencoder.py.:3 +YAPPI: 3x 0.000s /usr/lib/python2.7/sre_compile.py._compile:32 +YAPPI: 191x 0.000s /usr/lib/python2.7/sre_parse.py.get:201 +YAPPI: 49x 0.000s /usr/lib/python2.7/linecache.py.checkcache:43 +YAPPI: 185x 0.000s /usr/lib/python2.7/sre_parse.py.append:138 +YAPPI: 4x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dispersy.py._store:1991 +YAPPI: 1x 0.000s /usr/lib/python2.7/encodings/hex_codec.py.:8 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/community.py.create_community:50 +YAPPI: 16x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/SocketHandler.py.handle_events:455 +YAPPI: 109x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/database.py.execute:149 +YAPPI: 49x 0.000s /usr/lib/python2.7/linecache.py.getline:13 +YAPPI: 5x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/dispersy.py._on_incoming_packets:1622 +YAPPI: 42x 0.000s /usr/lib/python2.7/threading.py.acquire:121 +YAPPI: 57x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/BitTornado/clock.py.get_time:16 +YAPPI: 1x 0.000s /usr/lib/python2.7/sre_compile.py._compile_info:361 +YAPPI: 23x 0.000s /usr/lib/python2.7/threading.py.set:385 +YAPPI: 1x 
0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/NATFirewall/guessip.py.get_my_wan_ip_linux:104 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/database.py.__init__:19 +YAPPI: 42x 0.000s /usr/lib/python2.7/threading.py.release:141 +YAPPI: 1x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/timeline.py.authorize:237 +YAPPI: 4x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/conversion.py._decode_message:1266 +YAPPI: 5x 0.000s /home/boudewijn/svn.tribler.org/abc/branches/mainbranch/Tribler/Core/dispersy/member.py.__init__:116 +""" + pass + + def register(self): + def register_func(): + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(100000): + register(register_func) + + while container[0] < 100000: + yield 1.0 + + def register_delay(self): + def register_delay_func(): + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(100000): + register(register_delay_func, delay=1.0) + + while container[0] < 100000: + yield 1.0 + + def generator(self): + def generator_func(): + for _ in xrange(10): + yield 0.1 + container[0] += 1 + + container = [0] + register = self._dispersy.callback.register + + for _ in xrange(10000): + register(generator_func) + + while container[0] < 10000: + yield 1.0 diff -Nru tribler-6.2.0/Tribler/dispersy/tool/ldecoder.py tribler-6.2.0/Tribler/dispersy/tool/ldecoder.py --- tribler-6.2.0/Tribler/dispersy/tool/ldecoder.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/ldecoder.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,374 @@ +from bz2 import BZ2File +from os import walk +from os.path import join +from traceback import print_exc +import sys + + +class NotInterested(Exception): + pass + + +def _counter(start): + assert isinstance(start, (int, long)) + count = start + while True: + yield count + count += 1 + + +def _ignore_seperator(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for start in _counter(offset): + if not stream[start] == " ": + return start + raise ValueError() + + +def _decode_str(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if stream[split] == ":": + length = int(stream[offset:split]) + return split + length + 1, stream[split + 1:split + length+1] + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_hex(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if stream[split] == ":": + length = int(stream[offset:split]) + return split + length + 1, stream[split + 1:split + length+1].decode("HEX") + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_unicode(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if stream[split] == ":": + length = int(stream[offset:split]) + return split + length + 1, stream[split + 1:split + length+1].decode("UTF8") + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_Hex(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for 
split in _counter(offset): + if stream[split] == ":": + length = int(stream[offset:split]) + return split + length + 1, stream[split + 1:split + length+1].decode("HEX").decode("UTF8") + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_int(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if not stream[split] in "1234567890-": + return split, int(stream[offset:split]) + + +def _decode_long(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if not stream[split] in "1234567890-": + return split, long(stream[offset:split]) + + +def _decode_float(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if not stream[split] in "1234567890+-.e": + return split, float(stream[offset:split]) + + +def _decode_boolean(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + if stream[offset:offset + 4] == "True": + return offset + 4, True + elif stream[offset:offset + 5] == "False": + return offset + 5, False + else: + raise ValueError() + + +def _decode_none(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + if stream[offset:offset + 4] == "None": + return offset + 4, None + else: + raise ValueError("Expected None") + + +def _decode_tuple(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if stream[split] in ":": + length = int(stream[offset:split]) + if not stream[split + 1] == "(": + raise ValueError("Expected '('", stream[split + 1]) + offset = split + 2 # compensate for ':(' + l = [] + if length: + for index in range(length): + offset, value = _decode(offset, stream) + l.append(value) + + if index < length and stream[offset] == "," and stream[offset + 1] == " ": + offset += 2 # compensate for ', ' + elif index == length - 1 and stream[offset] == ")": + offset += 1 # compensate for ')' + else: + raise ValueError() + else: + if not stream[offset] == ")": + raise ValueError("Expected ')'", stream[split + 1]) + offset += 1 # compensate for ')' + + return offset, tuple(l) + + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_list(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if stream[split] in ":": + length = int(stream[offset:split]) + if not stream[split + 1] == "[": + raise ValueError("Expected '['", stream[split + 1]) + offset = split + 2 # compensate for ':[' + l = [] + if length: + for index in range(length): + offset, value = _decode(offset, stream) + l.append(value) + + if index < length and stream[offset] == "," and stream[offset + 1] == " ": + offset += 2 # compensate for ', ' + elif index == length - 1 and stream[offset] == "]": + offset += 1 # compensate for ']' + else: + raise ValueError() + else: + if not stream[offset] == "]": + raise ValueError("Expected ']'", stream[split + 1]) + offset += 1 # compensate for ']' + + return offset, l + + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode_dict(offset, stream): + assert isinstance(offset, (int, long)) + assert isinstance(stream, str) + for split in _counter(offset): + if 
stream[split] in ":": + length = int(stream[offset:split]) + if not stream[split + 1] == "{": + raise ValueError("Expected '{'", stream[split + 1]) + offset = split + 2 # compensate for ':{' + d = {} + for index in range(length): + offset, key = _decode(offset, stream) + if key in d: + raise ValueError("Duplicate map entry", key) + if not stream[offset] == ":": + raise ValueError("Expected ':'", stream[offset]) + offset += 1 # compensate for ':' + offset, value = _decode(offset, stream) + d[key] = value + + if index < length and stream[offset] == "," and stream[offset + 1] == " ": + offset += 2 # compensate for ', ' + elif index == length - 1 and stream[offset] == "}": + offset += 1 # compensate for '}' + else: + raise ValueError() + + return offset, d + + elif not stream[split] in "1234567890": + raise ValueError("Can not decode string length", stream[split]) + + +def _decode(offset, stream): + if stream[offset] in _decode_mapping: + return _decode_mapping[stream[offset]](offset + 1, stream) + else: + raise ValueError("Can not decode {0}".format(stream[offset])) + + +def _parse(handle, interests, raise_exceptions=True): + assert isinstance(interests, set) + for lineno, stream in zip(_counter(1), handle): + if stream.startswith("#"): + continue + + try: + offset = _ignore_seperator(17, stream) + if not stream[offset] == "s": + raise ValueError("Expected a string encoded message") + offset, message = _decode_str(offset + 1, stream) + + if not interests or message in interests: + stamp = float(stream[:17]) + kargs = {} + while offset < len(stream) - 1: + offset = _ignore_seperator(offset, stream) + for split in _counter(offset): + if stream[split] == ":": + key = stream[offset:split].strip() + offset, value = _decode(split + 1, stream) + kargs[key] = value + break + + elif not stream[split] in _valid_key_chars: + raise ValueError("Can not decode character", stream[split], "on line", lineno, "offset", offset) + + yield lineno, stamp, message, kargs + except Exception as e: + if raise_exceptions: + raise ValueError("Cannot read line", str(e), "on line", lineno) + else: + print >> sys.stderr, "Cannot read line", str(e), "on line", lineno + print_exc() + + +def bz2parse(filename, interests=(), raise_exceptions = True): + """ + Parse the content of bz2 encoded FILENAME. + + Yields a (LINENO, TIMESTAMP, MESSAGE, KARGS) tuple for each line in the file. + """ + assert isinstance(filename, (str, unicode)) + assert isinstance(interests, (tuple, list, set)) + assert all(isinstance(interest, str) for interest in interests) + return _parse(BZ2File(filename, "r"), set(interests), raise_exceptions) + + +def parse(filename, interests=(), raise_exceptions = True): + """ + Parse the content of FILENAME. + + Yields a (LINENO, TIMESTAMP, MESSAGE, KARGS) tuple for each line in + the file. + """ + assert isinstance(filename, (str, unicode)) + assert isinstance(interests, (tuple, list, set)) + assert all(isinstance(interest, str) for interest in interests) + return _parse(open(filename, "r"), set(interests), raise_exceptions) + + +def parselast(filename, interests=(), raise_exceptions = True, chars = 2048): + """ + Parse the last X chars from the content of FILENAME. + + Yields a (LINENO, TIMESTAMP, MESSAGE, KARGS) tuple for each line in + the file. 
+ """ + assert isinstance(filename, (str, unicode)) + assert isinstance(interests, (tuple, list, set)) + assert all(isinstance(interest, str) for interest in interests) + + # From http://stackoverflow.com/a/260352 + f = open(filename, "r") + f.seek(0, 2) # Seek @ EOF + fsize = f.tell() # Get Size + f.seek(max(fsize - chars, 0), 0) # Set pos @ last n chars + + # skip broken line + f.readline() + + lines = f.readlines() + lines.reverse() + return _parse(lines, set(interests), raise_exceptions) + + +class NextFile(Exception): + pass + + +class Parser(object): + + def __init__(self, verbose=True): + self.verbose = verbose + self.filename = "" + self.progress = 0 + self.mapping = {} + + def mapto(self, func, *messages): + for message in messages: + if not message in self.mapping: + self.mapping[message] = [] + self.mapping[message].append(func) + + def unknown(self, _, name, **kargs): + if self.verbose: + print "# unknown log entry '%s'" % name, "[%s]" % ", ".join(kargs.iterkeys()) + self.mapping[name] = [self.ignore] + + def ignore(self, stamp, _, **kargs): + pass + + def start_parser(self, filename): + """Called once before starting to parse FILENAME""" + self.filename = filename + self.progress += 1 + + def stop_parser(self, lineno): + """Called once when finished parsing LINENO lines""" + if self.verbose: + print "#", self.progress, self.filename, "->", lineno, "lines" + + def parse_directory(self, directory, filename, bzip2=False, unknown=False): + parser = bz2parse if bzip2 else parse + interests = () if unknown else set(self.mapping.keys()) + unknown = [self.unknown] + + for directory, _, filenames in walk(directory): + if filename in filenames: + filepath = join(directory, filename) + + self.start_parser(filepath) + lineno = 0 + try: + for lineno, timestamp, name, kargs in parser(filepath, interests): + for func in self.mapping.get(name, unknown): + func(timestamp, name, **kargs) + except NextFile: + pass + self.stop_parser(lineno) + +_valid_key_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890_" +_decode_mapping = {"s": _decode_str, + "h": _decode_hex, + "u": _decode_unicode, + "H": _decode_Hex, + "i": _decode_int, + "j": _decode_long, + "f": _decode_float, + "b": _decode_boolean, + "n": _decode_none, + "t": _decode_tuple, + "l": _decode_list, + "m": _decode_dict} diff -Nru tribler-6.2.0/Tribler/dispersy/tool/lencoder.py tribler-6.2.0/Tribler/dispersy/tool/lencoder.py --- tribler-6.2.0/Tribler/dispersy/tool/lencoder.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/lencoder.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,165 @@ +from atexit import register +from bz2 import BZ2File +from time import time +import re + + +def _encode_str(l, value): + assert isinstance(l, list) + assert isinstance(value, str) + for char in value: + if not char in _printable: + value = value.encode("HEX") + l.extend(("h", str(len(value)), ":", value)) + break + else: + l.extend(("s", str(len(value)), ":", value)) + + +def _encode_unicode(l, value): + value = value.encode("UTF-8") + for char in value: + if not char in _printable: + value = value.encode("HEX") + l.extend(("H", str(len(value)), ":", value)) + break + else: + l.extend(("u", str(len(value)), ":", value)) + + +def _encode_int(l, value): + l.extend(("i", str(value))) + + +def _encode_long(l, value): + l.extend(("j", str(value))) + + +def _encode_float(l, value): + l.extend(("f", str(value))) + + +def _encode_boolean(l, value): + l.extend(("b", value and "True" or "False")) + + +def 
_encode_none(l, value): + l.append("nNone") + + +def _encode_tuple(l, values): + if values: + l.extend(("t", str(len(values)), ":", "(")) + for value in values: + _encode(l, value) + l.append(", ") + l[-1] = ")" + else: + l.append("t0:()") + + +def _encode_list(l, values): + if values: + l.extend(("l", str(len(values)), ":", "[")) + for value in values: + _encode(l, value) + l.append(", ") + l[-1] = "]" + else: + l.append("l0:[]") + + +def _encode_dict(l, values): + if values: + l.extend(("m", str(len(values)), ":", "{")) + for key, value in values.iteritems(): + _encode(l, key) + l.append(":") + _encode(l, value) + l.append(", ") + l[-1] = "}" + else: + l.append("m0:{}") + + +def _encode(l, value): + if type(value) in _encode_mapping: + _encode_mapping[type(value)](l, value) + else: + raise ValueError("Can not encode %s" % type(value)) + + +def log(filename, _message, **kargs): + assert isinstance(_message, str) + assert ";" not in _message + + global _encode_initiated + if _encode_initiated: + l = ["{0:.6f}".format(time()), _seperator] + else: + _encode_initiated = True + l = ["################################################################################", "\n", + "{0:.6f}".format(time()), _seperator, "s6:logger", _seperator, "event:s5:start", "\n", + "{0:.6f}".format(time()), _seperator] + + _encode_str(l, _message) + for key in sorted(kargs.keys()): + l.append(_seperator) + l.extend((key, ":")) + _encode(l, kargs[key]) + l.append("\n") + s = "".join(l) + + # save to file + open(filename, "a+").write(s) + + +def bz2log(filename, _message, **kargs): + assert isinstance(_message, str) + assert ";" not in _message + + global _cache, _encode_initiated + handle = _cache.get(filename) + if _encode_initiated: + l = ["{0:.6f}".format(time()), _seperator] + else: + _encode_initiated = True + l = ["################################################################################", "\n", + "{0:.6f}".format(time()), _seperator, "s6:logger", _seperator, "event:s5:start", "\n", + "{0:.6f}".format(time()), _seperator] + handle = BZ2File(filename, "w", 8 * 1024, 9) + register(handle.close) + _cache[filename] = handle + + _encode_str(l, _message) + for key in sorted(kargs.keys()): + l.append(_seperator) + l.extend((key, ":")) + _encode(l, kargs[key]) + l.append("\n") + s = "".join(l) + + # write to file + handle.write(s) + + return handle + + +def make_valid_key(key): + return re.sub('[^a-zA-Z0-9_]', '_', key) + +_printable = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!\"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ " +_seperator = " " +_valid_key_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890_" +_cache = {} +_encode_initiated = False +_encode_mapping = {str: _encode_str, + unicode: _encode_unicode, + int: _encode_int, + long: _encode_long, + float: _encode_float, + bool: _encode_boolean, + type(None): _encode_none, + tuple: _encode_tuple, + list: _encode_list, + dict: _encode_dict} diff -Nru tribler-6.2.0/Tribler/dispersy/tool/main.py tribler-6.2.0/Tribler/dispersy/tool/main.py --- tribler-6.2.0/Tribler/dispersy/tool/main.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/main.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,109 @@ +""" +Run Dispersy in standalone mode. +""" + +import logging.config +try: + logging.config.fileConfig("logger.conf") +except: + print "Unable to load logging config from 'logger.conf' file." 
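To illustrate the lencoder/ldecoder pair above: log() appends one line per event, a 17-character timestamp followed by a type-tagged message string and sorted key:value pairs, which parse() later replays as (lineno, stamp, message, kargs) tuples. A minimal round-trip sketch, assuming Python 2, a writable example.log, and that the patched tree is on the import path (the file name and event name are illustrative):

    from Tribler.dispersy.tool.lencoder import log
    from Tribler.dispersy.tool.ldecoder import parse

    # the first log() call also writes a one-time "logger ... start" header
    log("example.log", "my-event", peer=42, address=("1.2.3.4", 1234), done=False)
    # which appends a line such as:
    # 1375884000.123456 s8:my-event address:t2:(s7:1.2.3.4, i1234) done:bFalse peer:i42

    for lineno, stamp, message, kargs in parse("example.log", interests=("my-event",)):
        print lineno, stamp, message, kargs["peer"]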
+logging.basicConfig(format="%(asctime)-15s [%(levelname)s] %(message)s") +logger = logging.getLogger(__name__) + +# optparse is deprecated since python 2.7 +import optparse +import signal + +from ..dispersy import Dispersy +from ..endpoint import StandaloneEndpoint +from .mainthreadcallback import MainThreadCallback + + +def start_script(dispersy, opt): + try: + module, class_ = opt.script.strip().rsplit(".", 1) + cls = getattr(__import__(module, fromlist=[class_]), class_) + except Exception as exception: + logger.exception("%s", exception) + raise SystemExit(str(exception), "Invalid --script", opt.script) + + try: + kargs = {} + if opt.kargs: + for karg in opt.kargs.split(","): + if "=" in karg: + key, value = karg.split("=", 1) + kargs[key.strip()] = value.strip() + except: + raise SystemExit("Invalid --kargs", opt.kargs) + + script = cls(dispersy, **kargs) + script.next_testcase() + + +def main_real(setup=None): + assert setup is None or callable(setup) + + # define options + command_line_parser = optparse.OptionParser() + command_line_parser.add_option("--profiler", action="store_true", help="use cProfile on the Dispersy thread", default=False) + command_line_parser.add_option("--memory-dump", action="store_true", help="use meliae to dump the memory periodically", default=False) + command_line_parser.add_option("--databasefile", action="store", help="use an alternate databasefile", default=u"dispersy.db") + command_line_parser.add_option("--statedir", action="store", type="string", help="Use an alternate statedir", default=u".") + command_line_parser.add_option("--ip", action="store", type="string", default="0.0.0.0", help="Dispersy uses this ip") + command_line_parser.add_option("--port", action="store", type="int", help="Dispersy uses this UDP port", default=12345) + command_line_parser.add_option("--script", action="store", type="string", help="Script to execute, e.g. module.module.class", default="") + command_line_parser.add_option("--kargs", action="store", type="string", help="Executes --script with these arguments. Example 'startingtimestamp=1292333014,endingtimestamp=12923340000'") + command_line_parser.add_option("--debugstatistics", action="store_true", help="turn on debug statistics", default=False) + command_line_parser.add_option("--strict", action="store_true", help="Exit on any exception", default=False) + # swift + # command_line_parser.add_option("--swiftproc", action="store_true", help="Use swift to tunnel all traffic", default=False) + # command_line_parser.add_option("--swiftpath", action="store", type="string", default="./swift") + # command_line_parser.add_option("--swiftcmdlistenport", action="store", type="int", default=7760+481) + # command_line_parser.add_option("--swiftdlsperproc", action="store", type="int", default=1000) + if setup: + setup(command_line_parser) + + # parse command-line arguments + opt, args = command_line_parser.parse_args() + if not opt.script: + command_line_parser.print_help() + exit(1) + + # setup + dispersy = Dispersy(MainThreadCallback("Dispersy"), StandaloneEndpoint(opt.port, opt.ip), unicode(opt.statedir), unicode(opt.databasefile)) + dispersy.statistics.enable_debug_statistics(opt.debugstatistics) + + if opt.strict: + def exception_handler(exception, fatal): + print "An exception occurred. Quitting because we are running with --strict enabled."
+ # return fatal=True + return True + dispersy.callback.attach_exception_handler(exception_handler) + + # if opt.swiftproc: + # from Tribler.Core.Swift.SwiftProcessMgr import SwiftProcessMgr + # sesslock = threading.Lock() + # spm = SwiftProcessMgr(opt.swiftpath, opt.swiftcmdlistenport, opt.swiftdlsperproc, sesslock) + # swift_process = spm.get_or_create_sp(opt.statedir) + # dispersy.endpoint = TunnelEndpoint(swift_process, dispersy) + # swift_process.add_download(dispersy.endpoint) + # else: + + # register tasks + dispersy.callback.register(start_script, (dispersy, opt)) + + def signal_handler(sig, frame): + print "Received", sig, "signal in", frame + dispersy.stop() + signal.signal(signal.SIGINT, signal_handler) + + # start + dispersy.start() + dispersy.callback.loop() + return dispersy.callback + + +def main(setup=None): + callback = main_real(setup) + exit(1 if callback.exception else 0) diff -Nru tribler-6.2.0/Tribler/dispersy/tool/mainthreadcallback.py tribler-6.2.0/Tribler/dispersy/tool/mainthreadcallback.py --- tribler-6.2.0/Tribler/dispersy/tool/mainthreadcallback.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/mainthreadcallback.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,23 @@ +from thread import get_ident +from threading import currentThread + +from ..callback import Callback + + +class MainThreadCallback(Callback): + + """ + MainThreadCallback must be used when Dispersy must run on the main process thread. + """ + def __init__(self, name="Generic-Callback"): + assert isinstance(name, str), type(name) + super(MainThreadCallback, self).__init__(name) + + # we will be running on this thread + self._thread_ident = get_ident() + + # set the thread name + currentThread().setName(name) + + def start(self, *args, **kargs): + return True diff -Nru tribler-6.2.0/Tribler/dispersy/tool/scenarioscript.py tribler-6.2.0/Tribler/dispersy/tool/scenarioscript.py --- tribler-6.2.0/Tribler/dispersy/tool/scenarioscript.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/scenarioscript.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,543 @@ +import logging +logger = logging.getLogger(__name__) + +try: + from scipy.stats import poisson, expon +except ImportError: + poisson = expon = None + print "Unable to import from scipy. ScenarioPoisson and ScenarioExpon are disabled" + +try: + from psutil import Process, cpu_percent +except ImportError: + Process = cpu_percent = None + print "Unable to import from psutil. 
Process statistics are disabled" + +from hashlib import sha1 +from os import getpid, uname, path +from random import random, uniform +from re import compile as re_compile +from sys import maxsize +from time import time +from shutil import copyfile + +from ..crypto import ec_generate_key, ec_to_public_bin, ec_to_private_bin +from ..dispersydatabase import DispersyDatabase +from ..script import ScriptBase +from .ldecoder import Parser, NextFile + + +class ScenarioScript(ScriptBase): + + def __init__(self, *args, **kargs): + super(ScenarioScript, self).__init__(*args, **kargs) + self._my_member = None + self._master_member = None + self._cid = sha1(self.master_member_public_key).digest() + self._is_joined = False + + self.log("scenario-init", peernumber=int(self._kargs["peernumber"]), hostname=uname()[1]) + + if self.enable_statistics: + self._dispersy.callback.register(self._periodically_log_statistics) + + @property + def enable_wait_for_wan_address(self): + return False + + @property + def enable_statistics(self): + return 30.0 + + def run(self): + self.add_testcase(self._run_scenario) + + def _run_scenario(self): + for deadline, _, call, args in self.parse_scenario(): + yield max(0.0, deadline - time()) + logger.debug(call.__name__) + if call(*args) == "END": + return + + @property + def my_member_security(self): + return u"low" + + @property + def master_member_public_key(self): + raise NotImplementedError("must return an experiment specific master member public key") + # if False: + # when crypto.py is disabled a public key is slightly + # different... + # master_public_key = ";".join(("60", master_public_key[:60].encode("HEX"), "")) + # return "3081a7301006072a8648ce3d020106052b81040027038192000404668ed626c6d6bf4a280cf4824c8cd31fe4c7c46767afb127129abfccdf8be3c38d4b1cb8792f66ccb603bfed395e908786049cb64bacab198ef07d49358da490fbc41f43ade33e05c9991a1bb7ef122cda5359d908514b3c935fe17a3679b6626161ca8d8d934d372dec23cc30ff576bfcd9c292f188af4142594ccc5f6376e2986e1521dc874819f7bcb7ae3ce400".decode("HEX") + + @property + def community_class(self): + raise NotImplementedError("must return an experiment community class") + + @property + def community_args(self): + return () + + @property + def community_kargs(self): + return {} + + def log(self, _message, **kargs): + pass + + def _periodically_log_statistics(self): + statistics = self._dispersy.statistics + process = Process(getpid()) if Process else None + + while True: + statistics.update() + + # CPU + if cpu_percent: + self.log("scenario-cpu", percentage=cpu_percent(interval=0, percpu=True)) + + # memory + if process: + rss, vms = process.get_memory_info() + self.log("scenario-memory", rss=rss, vms=vms) + + # bandwidth + self.log("scenario-bandwidth", + up=self._dispersy.endpoint.total_up, + down=self._dispersy.endpoint.total_down, + drop_count=self._dispersy.statistics.drop_count, + delay_count=statistics.delay_count, + delay_send=statistics.delay_send, + delay_success=statistics.delay_success, + delay_timeout=statistics.delay_timeout, + success_count=statistics.success_count, + received_count=statistics.received_count) + + # dispersy statistics + self.log("scenario-connection", + connection_type=statistics.connection_type, + lan_address=statistics.lan_address, + wan_address=statistics.wan_address) + + # communities + for community in statistics.communities: + self.log("scenario-community", + hex_cid=community.hex_cid, + classification=community.classification, + global_time=community.global_time, + sync_bloom_new=community.sync_bloom_new, + 
sync_bloom_reuse=community.sync_bloom_reuse, + candidates=[dict(zip(["lan_address", "wan_address", "global_time"], tup)) for tup in community.candidates]) + + # wait + yield self.enable_statistics + + def parse_scenario(self): + """ + Returns a list with (TIMESTAMP, LINENO, FUNC, ARGS) tuples, where TIMESTAMP is the time when FUNC + must be called. + + [@+][H:]M:S[-[H:]M:S] METHOD [ARG1 [ARG2 ..]] [{PEERNR1 [, PEERNR2, ...] [, PEERNR3-PEERNR6, ...]}] + ^^^^ + use @ to schedule events based on experiment startstamp + use + to schedule events based on peer startstamp + ^^^^^^^^^^^^^^^^^ + schedule event hours:minutes:seconds after @ or + + or add another hours:minutes:seconds pair to schedule uniformly chosen between the two + ^^^^^^^^^^^^^^^^^^^^^^^ + calls script.scenario_METHOD(ARG1, ARG2) + the arguments are passed as strings + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + apply event only to peer 1 and 2, and peers in + range 3-6 (including both 3 and 6) + """ + scenario = [] + re_line = re_compile("".join(("^", + "(?P<origin>[@+])", + "\s*", + "(?:(?P<beginH>\d+):)?(?P<beginM>\d+):(?P<beginS>\d+)", + "(?:\s*-\s*", + "(?:(?P<endH>\d+):)?(?P<endM>\d+):(?P<endS>\d+)", + ")?", + "\s+", + "(?P<method>\w+)(?P<args>\s+(.+?))??", + "(?:\s*{(?P<peers>\s*!?\d+(?:-\d+)?(?:\s*,\s*!?\d+(?:-\d+)?)*\s*)})?", + "\s*(?:\n)?$"))) + peernumber = int(self._kargs["peernumber"]) + filename = self._kargs["scenario"] + origin = {"@": float(self._kargs["startstamp"]) if "startstamp" in self._kargs else time(), + "+": time()} + + for lineno, line in enumerate(open(filename, "r")): + match = re_line.match(line) + if match: + # remove all entries that are None (allows us to get default per key) + dic = dict((key, value) for key, value in match.groupdict().iteritems() if not value is None) + + # get the peers, if any, for which this line applies + yes_peers = set() + no_peers = set() + for peer in dic.get("peers", "").split(","): + peer = peer.strip() + if peer: + # if the peer number (or peer number pair) is preceded by '!'
it negates the result + if peer.startswith("!"): + peer = peer[1:] + peers = no_peers + else: + peers = yes_peers + # parse the peer number (or peer number pair) + if "-" in peer: + low, high = peer.split("-") + peers.update(xrange(int(low), int(high) + 1)) + else: + peers.add(int(peer)) + + if not (yes_peers or no_peers) or (yes_peers and peernumber in yes_peers) or (no_peers and not peernumber in no_peers): + begin = int(dic.get("beginH", 0)) * 3600.0 + int(dic.get("beginM", 0)) * 60.0 + int(dic.get("beginS", 0)) + end = int(dic.get("endH", 0)) * 3600.0 + int(dic.get("endM", 0)) * 60.0 + int(dic.get("endS", 0)) + assert end == 0.0 or begin <= end, "when end time is given it must be at or after the start time" + scenario.append((origin[dic.get("origin", "@")] + begin + (random() * (end - begin) if end else 0.0), + lineno, + getattr(self, "scenario_" + dic.get("method", "print")), + tuple(dic.get("args", "").split()))) + + assert scenario, "scenario is empty" + assert any(func.__name__ == "scenario_end" for _, _, func, _ in scenario), "scenario end is not defined" + assert any(func.__name__ == "scenario_start" for _, _, func, _ in scenario), "scenario start is not defined" + scenario.sort() + + for deadline, _, func, args in scenario: + logger.debug("scenario: @%.2fs %s", int(deadline - origin["@"]), func.__name__) + self.log("scenario-schedule", deadline=int(deadline - origin["@"]), func=func.__name__, args=args) + + return scenario + + def has_community(self, load=False, auto_load=False): + try: + return self._dispersy.get_community(self._cid, load=load, auto_load=auto_load) + except KeyError: + return None + + def scenario_start(self, filepath=""): + if self._my_member or self._master_member: + raise RuntimeError("scenario_start must be called only once") + if self._is_joined: + raise RuntimeError("scenario_start must be called BEFORE scenario_churn") + + if filepath: + # clone the database from filepath instead of using a new one + origional_database_filename = path.join(self._kargs["localcodedir"], filepath) + database_filename = self._dispersy.database.file_path + self.log("scenario-start-clone", source=origional_database_filename, destination=database_filename) + + # HACK: close the old database, copy the original database file, and open the new file + self._dispersy._database.close() + self._dispersy._database = None + copyfile(origional_database_filename, database_filename) + self._dispersy._database = DispersyDatabase(database_filename) + + ec = ec_generate_key(self.my_member_security) + self._my_member = self._dispersy.get_member(ec_to_public_bin(ec), ec_to_private_bin(ec)) + self._master_member = self._dispersy.get_member(self.master_member_public_key) + self._is_joined = True + + self._dispersy.database.execute(u"UPDATE community SET member = ? WHERE master = ? 
AND classification = ?", + (self._my_member.database_id, self._master_member.database_id, self.community_class.get_classification())) + assert self._dispersy.database.changes == 1 + community = self.community_class.load_community(self._dispersy, self._master_member, *self.community_args, **self.community_kargs) + community.auto_load = False + community.create_dispersy_identity() + community.unload_community() + self.log("scenario-start-clone-complete") + + else: + ec = ec_generate_key(self.my_member_security) + self._my_member = self._dispersy.get_member(ec_to_public_bin(ec), ec_to_private_bin(ec)) + self._master_member = self._dispersy.get_member(self.master_member_public_key) + + self.log("scenario-start", my_member=self._my_member.mid, master_member=self._master_member.mid, classification=self.community_class.get_classification()) + + def scenario_end(self): + logger.debug("END") + self.log("scenario-end") + return "END" + + def scenario_print(self, *args): + logger.info(" ".join(str(arg) for arg in args)) + + def scenario_churn(self, state, duration=None): + assert isinstance(state, str), type(state) + assert state in ("online", "offline"), state + assert duration is None or isinstance(duration, (str, float)), type(duration) + + duration = None if duration == None else float(duration) + community = self.has_community() + + if state == "online": + if community is None: + logger.debug("online for the next %.2f seconds", duration) + self.log("scenario-churn", state="online", duration=duration) + + if self._is_joined: + self.community_class.load_community(self._dispersy, self._master_member, *self.community_args, **self.community_kargs) + + else: + logger.debug("join community %s as %s", self._master_member.mid.encode("HEX"), self._my_member.mid.encode("HEX")) + community = self.community_class.join_community(self._dispersy, self._master_member, self._my_member, *self.community_args, **self.community_kargs) + community.auto_load = False + self._is_joined = True + + else: + logger.debug("online for the next %.2f seconds (we are already online)", duration) + self.log("scenario-churn", state="stay-online", duration=duration) + + elif state == "offline": + if community is None: + logger.debug("offline (we are already offline)") + self.log("scenario-churn", state="stay-offline") + + else: + logger.debug("offline") + self.log("scenario-churn", state="offline") + community.unload_community() + + else: + raise ValueError("state must be either 'online' or 'offline'") + +if poisson: + class ScenarioPoisson(object): + + def __init__(self, *args, **kargs): + self.__poisson_online_mu = 0.0 + self.__poisson_offline_mu = 0.0 + + def __poisson_churn(self): + while True: + delay = float(poisson.rvs(self.__poisson_online_mu)) + self.scenario_churn("online", delay) + yield delay + + delay = float(poisson.rvs(self.__poisson_offline_mu)) + self.scenario_churn("offline", delay) + yield delay + + def scenario_poisson_churn(self, online_mu, offline_mu): + self.__poisson_online_mu = float(online_mu) + self.__poisson_offline_mu = float(offline_mu) + self.log("scenario-poisson-churn", online_mu=self.__poisson_online_mu, offline_mu=self.__poisson_offline_mu) + self._dispersy.callback.persistent_register("scenario-poisson-identifier", self.__poisson_churn) + +if expon: + class ScenarioExpon(object): + + def __init__(self, *args, **kargs): + self.__expon_online_beta = 0.0 + self.__expon_offline_beta = 0.0 + self.__expon_online_threshold = 0.0 + self.__expon_min_online = 0.0 + self.__expon_max_online = 0.0 + 
self.__expon_offline_threshold = 0.0 + self.__expon_max_offline = 0.0 + self.__expon_min_offline = 0.0 + + def __expon_churn(self): + while True: + delay = expon.rvs(scale=self.__expon_online_beta) + if delay >= self.__expon_online_threshold: + delay = float(min(self.__expon_max_online, max(self.__expon_min_online, delay))) + self.scenario_churn("online", delay) + yield delay + + delay = expon.rvs(scale=self.__expon_offline_beta) + if delay >= self.__expon_offline_threshold: + delay = float(min(self.__expon_max_offline, max(self.__expon_min_offline, delay))) + self.scenario_churn("offline", delay) + yield delay + + def scenario_expon_churn(self, online_beta, offline_beta, online_threshold="DEF", min_online="DEF", max_online="DEF", offline_threshold="DEF", min_offline="DEF", max_offline="DEF"): + self.__expon_online_beta = float(online_beta) + self.__expon_offline_beta = float(offline_beta) + self.__expon_online_threshold = float("5.0" if online_threshold == "DEF" else online_threshold) + self.__expon_min_online = float("5.0" if min_online == "DEF" else min_online) + self.__expon_max_online = float(maxsize if max_online == "DEF" else max_online) + self.__expon_offline_threshold = float("5.0" if offline_threshold == "DEF" else offline_threshold) + self.__expon_min_offline = float("5.0" if min_offline == "DEF" else min_offline) + self.__expon_max_offline = float(maxsize if max_offline == "DEF" else max_offline) + self.log("scenario-expon-churn", online_beta=self.__expon_online_beta, offline_beta=self.__expon_offline_beta, online_threshold=self.__expon_online_threshold, min_online=self.__expon_min_online, max_online=self.__expon_max_online, offline_threshold=self.__expon_offline_threshold, min_offline=self.__expon_min_offline, max_offline=self.__expon_max_offline) + self._dispersy.callback.persistent_register("scenario-expon-identifier", self.__expon_churn) + + +class ScenarioUniform(object): + + def __init__(self, *args, **kargs): + self.__uniform_online_low = 0.0 + self.__uniform_online_high = 0.0 + self.__uniform_offline_low = 0.0 + self.__uniform_offline_high = 0.0 + + def __uniform_churn(self): + while True: + delay = float(uniform(self.__uniform_online_low, self.__uniform_online_high)) + self.scenario_churn("online", delay) + yield delay + + delay = float(uniform(self.__uniform_offline_low, self.__uniform_offline_high)) + self.scenario_churn("offline", delay) + yield float(delay) + + def scenario_uniform_churn(self, online_mean, online_mod="DEF", offline_mean="DEF", offline_mod="DEF"): + online_mean = float(online_mean) + online_mod = float("0.50" if online_mod == "DEF" else online_mod) + offline_mean = float("120.0" if offline_mean == "DEF" else offline_mean) + offline_mod = float("0.0" if offline_mod == "DEF" else offline_mod) + self.__uniform_online_low = online_mean * (1.0 - online_mod) + self.__uniform_online_high = online_mean * (1.0 + online_mod) + self.__uniform_offline_low = offline_mean * (1.0 - offline_mod) + self.__uniform_offline_high = offline_mean * (1.0 + offline_mod) + self.log("scenario-uniform-churn", online_low=self.__uniform_online_low, online_high=self.__uniform_online_high, offline_low=self.__uniform_offline_low, offline_high=self.__uniform_offline_high) + self._dispersy.callback.persistent_register("scenario-uniform-identifier", self.__uniform_churn) + + +class ScenarioParser1(Parser): + + def __init__(self, database): + super(ScenarioParser1, self).__init__() + + self.db = database + self.cur = database.cursor() + self.cur.execute(u"CREATE TABLE peer (id INTEGER 
PRIMARY KEY, hostname TEXT, mid BLOB)") + + self.peer_id = 0 + + self.mapto(self.scenario_init, "scenario-init") + self.mapto(self.scenario_start, "scenario-start") + + def scenario_init(self, timestamp, name, peernumber, hostname): + self.peer_id = peernumber + self.cur.execute(u"INSERT INTO peer (id, hostname) VALUES (?, ?)", (peernumber, hostname)) + + def scenario_start(self, timestamp, name, my_member, master_member, classification): + self.cur.execute(u"UPDATE peer SET mid = ? WHERE id = ?", (buffer(my_member), self.peer_id)) + raise NextFile() + + def parse_directory(self, *args, **kargs): + try: + super(ScenarioParser1, self).parse_directory(*args, **kargs) + finally: + self.db.commit() + + +class ScenarioParser2(Parser): + + def __init__(self, database): + super(ScenarioParser2, self).__init__() + + self.db = database + self.cur = database.cursor() + self.cur.execute(u"CREATE TABLE cpu (timestamp FLOAT, peer INTEGER, percentage FLOAT)") + self.cur.execute(u"CREATE TABLE memory (timestamp FLOAT, peer INTEGER, rss INTEGER, vms INTEGER)") + self.cur.execute(u"CREATE TABLE bandwidth (timestamp FLOAT, peer INTEGER, up INTEGER, down INTEGER, drop_count INTEGER, delay_count INTEGER, delay_send INTEGER, delay_success INTEGER, delay_timeout INTEGER, success_count INTEGER, received_count INTEGER)") + self.cur.execute(u"CREATE TABLE bandwidth_rate (timestamp FLOAT, peer INTEGER, up INTEGER, down INTEGER)") + self.cur.execute(u"CREATE TABLE churn (peer INTEGER, online FLOAT, offline FLOAT)") + self.cur.execute(u"CREATE TABLE community (timestamp FLOAT, peer INTEGER, hex_cid TEXT, classification TEXT, global_time INTEGER, sync_bloom_new INTEGER, sync_bloom_reuse INTEGER, candidate_count INTEGER)") + + self.mid_cache = {} + self.hostname = "" + self.mid = "" + self.peer_id = 0 + + self.online_timestamp = 0.0 + self.bandwidth_timestamp = 0 + self.bandwidth_up = 0 + self.bandwidth_down = 0 + + self.io_timestamp = 0.0 + self.io_read_bytes = 0 + self.io_read_count = 0 + self.io_write_bytes = 0 + self.io_write_count = 0 + + self.mapto(self.scenario_init, "scenario-init") + self.mapto(self.scenario_start, "scenario-start") + self.mapto(self.scenario_end, "scenario-end") + self.mapto(self.scenario_churn, "scenario-churn") + self.mapto(self.scenario_cpu, "scenario-cpu") + self.mapto(self.scenario_memory, "scenario-memory") + self.mapto(self.scenario_bandwidth, "scenario-bandwidth") + self.mapto(self.scenario_community, "scenario-community") + + def start_parser(self, filename): + """Called once before starting to parse FILENAME""" + super(ScenarioParser2, self).start_parser(filename) + + self.online_timestamp = 0.0 + self.bandwidth_timestamp = 0 + self.bandwidth_up = 0 + self.bandwidth_down = 0 + + def get_peer_id_from_mid(self, mid): + try: + return self.mid_cache[mid] + except KeyError: + try: + peer_id, = self.cur.execute(u"SELECT id FROM peer WHERE mid = ?", (buffer(mid),)).next() + except StopIteration: + self.cur.execute(u"INSERT INTO peer (mid) VALUES (?)", (buffer(mid),)) + return self.cur.lastrowid + else: + if peer_id is None: + raise ValueError(mid.encode("HEX")) + else: + self.mid_cache[mid] = peer_id + return peer_id + + def scenario_init(self, timestamp, _, peernumber, hostname): + self.hostname = hostname + self.peer_id = peernumber + self.bandwidth_timestamp = timestamp + + def scenario_start(self, timestamp, _, my_member, master_member, classification): + self.mid = my_member + + def scenario_end(self, timestamp, _): + if self.online_timestamp: + self.cur.execute(u"INSERT INTO churn 
(peer, online, offline) VALUES (?, ?, ?)", (self.peer_id, self.online_timestamp, timestamp)) + + def scenario_churn(self, timestamp, _, state, **kargs): + if state == "online": + self.online_timestamp = timestamp + + elif state == "offline": + assert self.online_timestamp + self.cur.execute(u"INSERT INTO churn (peer, online, offline) VALUES (?, ?, ?)", (self.peer_id, self.online_timestamp, timestamp)) + self.online_timestamp = 0.0 + + def scenario_cpu(self, timestamp, _, percentage): + self.cur.execute(u"INSERT INTO cpu (timestamp, peer, percentage) VALUES (?, ?, ?)", (timestamp, self.peer_id, sum(percentage) / len(percentage))) + + def scenario_memory(self, timestamp, _, vms, rss): + self.cur.execute(u"INSERT INTO memory (timestamp, peer, rss, vms) VALUES (?, ?, ?, ?)", (timestamp, self.peer_id, rss, vms)) + + def scenario_bandwidth(self, timestamp, _, up, down, drop_count, delay_count, delay_send, delay_success, delay_timeout, success_count, received_count): + self.cur.execute(u"INSERT INTO bandwidth (timestamp, peer, up, down, drop_count, delay_count, delay_send, delay_success, delay_timeout, success_count, received_count) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", + (timestamp, self.peer_id, up, down, drop_count, delay_count, delay_send, delay_success, delay_timeout, success_count, received_count)) + + delta = timestamp - self.bandwidth_timestamp + self.cur.execute(u"INSERT INTO bandwidth_rate (timestamp, peer, up, down) VALUES (?, ?, ?, ?)", + (timestamp, self.peer_id, (up - self.bandwidth_up) / delta, (down-self.bandwidth_down)/delta)) + self.bandwidth_timestamp = timestamp + self.bandwidth_up = up + self.bandwidth_down = down + + def scenario_community(self, timestamp, _, hex_cid, classification, global_time, sync_bloom_new, sync_bloom_reuse, candidates): + self.cur.execute(u"INSERT INTO community (timestamp, peer, hex_cid, classification, global_time, sync_bloom_new, sync_bloom_reuse, candidate_count) VALUES (?, ?, ?, ?, ?, ?, ?, ?)", + (timestamp, self.peer_id, hex_cid, classification, global_time, sync_bloom_new, sync_bloom_reuse, len(candidates))) + + def parse_directory(self, *args, **kargs): + try: + super(ScenarioParser2, self).parse_directory(*args, **kargs) + finally: + self.db.commit() diff -Nru tribler-6.2.0/Tribler/dispersy/tool/test tribler-6.2.0/Tribler/dispersy/tool/test --- tribler-6.2.0/Tribler/dispersy/tool/test 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/test 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,57 @@ +#!/bin/bash + +DISPERSY=$1 +if [ ! 
-f "$DISPERSY/tool/main.py" ]; then + echo "usage: $0 dispersy_trunk" + exit 1 +fi + +echo "================================================================================" +echo "Testcases in __debug__ mode" +echo "================================================================================" + +rm -f sqlite/dispersy.db* +rm -f dispersy.log + +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBatchScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBootstrapServers || exit 1 +# python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBootstrapServersStresstest || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyClassificationScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyCryptoScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyDestroyCommunityScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyDynamicSettings || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyIdenticalPayloadScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyMemberTagScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyMissingMessageScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersySignatureScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersySyncScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyTimelineScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyUndoScript || exit 1 +python -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.tool.callbackscript.DispersyCallbackScript || exit 1 + +echo "================================================================================" +echo "Testcases in optimized mode" +echo "================================================================================" + +rm -f sqlite/dispersy.db* +rm -f dispersy.log + +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBatchScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBootstrapServers || exit 1 +# python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyBootstrapServersStresstest || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyClassificationScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyCryptoScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyDestroyCommunityScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyDynamicSettings || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyIdenticalPayloadScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyMemberTagScript || exit 1 +python -O -c "from 
$DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyMissingMessageScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersySignatureScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersySyncScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyTimelineScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.script.DispersyUndoScript || exit 1 +python -O -c "from $DISPERSY.tool.main import main; main()" --script $DISPERSY.tool.callbackscript.DispersyCallbackScript || exit 1 + +echo "================================================================================" +echo "Finished testcases successfully" +echo "================================================================================" diff -Nru tribler-6.2.0/Tribler/dispersy/tool/tracecommunity.py tribler-6.2.0/Tribler/dispersy/tool/tracecommunity.py --- tribler-6.2.0/Tribler/dispersy/tool/tracecommunity.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/tracecommunity.py 2013-08-07 13:06:57.000000000 +0000 @@ -0,0 +1,92 @@ +from ..dispersy import Dispersy +from ..callback import Callback + +import sys +import os +from time import time +import cProfile + +from ...community.channel.payload import TorrentPayload, CommentPayload,\ + MarkTorrentPayload, ModerationPayload, ModificationPayload +from ...community.channel.community import ChannelCommunity +from collections import Counter + +def main(): + if len(sys.argv) < 3: + print >> sys.stderr, "Must specify the path of the dispersy database and the cid" + sys.exit(1) + + profile = False + if len(sys.argv) == 4: + profile = bool(sys.argv[3]) + + db_file = sys.argv[1] + cid = sys.argv[2] + cid = cid.decode("hex") + assert len(cid) == 20, len(cid) + + full_db_file = os.path.abspath(db_file) + state_dir, db_filename = os.path.split(full_db_file) + db_filename = "../"+db_filename + + print >> sys.stderr, "Using %s as statedir, and %s as db_filename"%(state_dir, db_filename) + + dispersy = Dispersy.get_instance(Callback(), unicode(state_dir), unicode(db_filename)) + dispersy._database.commit = lambda: True + dispersy.define_auto_load(ChannelCommunity, kargs = {'integrate_with_tribler':False}) + + community = dispersy.get_community(cid, True) + packets = [str(packet) for packet, in dispersy._database.execute(u'SELECT packet FROM sync WHERE community = %d'%community._database_id)] + + if profile: + cProfile.runctx('do_trace(dispersy, community, packets)', globals(), {'dispersy':dispersy, 'community':community, 'packets':packets}) + else: + do_trace(dispersy, community, packets) + +def do_trace(dispersy, community, packets): + message_trace = {} + message_types = set() + members = set() + + print >> sys.stderr, "Found %d packets, attempting to convert them..."%len(packets) + for i, packet in enumerate(packets): + if i > 0 and i % 10000 == 0: + print >> sys.stderr, i, + + message = dispersy.convert_packet_to_message(packet, community, load=False, auto_load=False, verify = False) + payload = message.payload + mid = message.authentication.member.mid + + if hasattr(payload, 'timestamp'): + timestamp = payload.timestamp + timestamp = int((timestamp / 60.0)) * 60 + + message_trace.setdefault(timestamp, set()).add(payload) + message_types.add(str(type(payload._meta))) + members.add(mid) + + print >> sys.stderr, "\nConverted all packets, creating 
trace now" + + keys = message_trace.keys() + keys.sort() + + print "#packets were created by %d users"%len(members) + print "#time messagetype diff+ messagetype cumul+" + print "#", " ".join(message_types), " ".join(message_types) + + total_messagetypes = Counter() + for key in keys: + sum_messagetypes = Counter() + for payload in message_trace[key]: + message_type = str(type(payload._meta)) + sum_messagetypes[message_type] += 1 + total_messagetypes[message_type] += 1 + + print key, + for message_type in message_types: + print sum_messagetypes[message_type], + + for message_type in message_types: + print total_messagetypes[message_type], + + print "" diff -Nru tribler-6.2.0/Tribler/dispersy/tool/tracker.py tribler-6.2.0/Tribler/dispersy/tool/tracker.py --- tribler-6.2.0/Tribler/dispersy/tool/tracker.py 1970-01-01 00:00:00.000000000 +0000 +++ tribler-6.2.0/Tribler/dispersy/tool/tracker.py 2013-07-31 12:17:59.000000000 +0000 @@ -0,0 +1,403 @@ +""" +Run Dispersy in standalone tracker mode. + +Outputs statistics every 300 seconds: +- BANDWIDTH BYTES-UP BYTES-DOWN +- COMMUNITY COUNT(OVERLAYS) COUNT(KILLED-OVERLAYS) +- CANDIDATE COUNT(ALL_CANDIDATES) 18/07/13 no longer used +- CANDIDATE2 COUNT(VERIFIED_CANDIDATES) 18/07/13 replaces CANDIDATE + +Outputs active peers whenever encountered: +- REQ_IN2 HEX(COMMUNITY) hex(MEMBER) DISPERSY-VERSION OVERLAY-VERSION ADDRESS PORT +- RES_IN2 HEX(COMMUNITY) hex(MEMBER) DISPERSY-VERSION OVERLAY-VERSION ADDRESS PORT + +Outputs destroyed communities whenever encountered: +- DESTROY_IN HEX(COMMUNITY) hex(MEMBER) DISPERSY-VERSION OVERLAY-VERSION ADDRESS PORT +- DESTROY_OUT HEX(COMMUNITY) hex(MEMBER) DISPERSY-VERSION OVERLAY-VERSION ADDRESS PORT + +Note that there is no output for REQ_IN2 for destroyed overlays. Instead a DESTROY_OUT is given +whenever a introduction request is received for a destroyed overlay. +""" + +import logging.config +try: + logging.config.fileConfig("logger.conf") +except: + print "Unable to load logging config from 'logger.conf' file." +logging.basicConfig(format="%(asctime)-15s [%(levelname)s] %(message)s") +logger = logging.getLogger(__name__) + +if __name__ == "__main__": + # Concerning the relative imports, from PEP 328: + # http://www.python.org/dev/peps/pep-0328/ + # + # Relative imports use a module's __name__ attribute to determine that module's position in + # the package hierarchy. If the module's name does not contain any package information + # (e.g. it is set to '__main__') then relative imports are resolved as if the module were a + # top level module, regardless of where the module is actually located on the file system. 
+ print "Usage: python -c \"from dispersy.tool.tracker import main; main()\" [--statedir DIR] [--ip ADDR] [--port PORT]" + exit(1) + +from time import time +import os +import errno +# optparse is deprecated since python 2.7 +import optparse +import signal +import sys + +from ..candidate import BootstrapCandidate, LoopbackCandidate +from ..community import Community, HardKilledCommunity +from ..conversion import BinaryConversion +from ..crypto import ec_generate_key, ec_to_public_bin, ec_to_private_bin +from ..dispersy import Dispersy +from ..endpoint import StandaloneEndpoint +from ..message import Message, DropMessage +from .mainthreadcallback import MainThreadCallback + +if sys.platform == 'win32': + SOCKET_BLOCK_ERRORCODE = 10035 # WSAEWOULDBLOCK +else: + SOCKET_BLOCK_ERRORCODE = errno.EWOULDBLOCK + + +class BinaryTrackerConversion(BinaryConversion): + + def decode_message(self, candidate, data, _=None): + # disable verify + return self._decode_message(candidate, data, False, False) + + +class TrackerHardKilledCommunity(HardKilledCommunity): + + def __init__(self, *args, **kargs): + super(TrackerHardKilledCommunity, self).__init__(*args, **kargs) + # communities are cleaned based on a 'strike' rule. periodically, we will check is there + # are active candidates, when there are 'strike' is set to zero, otherwise it is incremented + # by one. once 'strike' reaches a predefined value the community is cleaned + self._strikes = 0 + + def update_strikes(self, now): + # does the community have any active candidates + self._strikes += 1 + return self._strikes + + def dispersy_on_introduction_request(self, messages): + hex_cid = messages[0].community.cid.encode("HEX") + for message in messages: + host, port = message.candidate.sock_addr + print "DESTROY_OUT", hex_cid, message.authentication.member.mid.encode("HEX"), ord(message.conversion.dispersy_version), ord(message.conversion.community_version), host, port + return super(TrackerHardKilledCommunity, self).dispersy_on_introduction_request(messages) + + +class TrackerCommunity(Community): + + """ + This community will only use dispersy-candidate-request and dispersy-candidate-response messages. + """ + def __init__(self, *args, **kargs): + super(TrackerCommunity, self).__init__(*args, **kargs) + # communities are cleaned based on a 'strike' rule. periodically, we will check is there + # are active candidates, when there are 'strike' is set to zero, otherwise it is incremented + # by one. 
once 'strike' reaches a predefined value the community is cleaned + self._strikes = 0 + + self._walked_stumbled_candidates = self._iter_categories([u'walk', u'stumble']) + + def _initialize_meta_messages(self): + super(TrackerCommunity, self)._initialize_meta_messages() + + # remove all messages that we should not be using + meta_messages = self._meta_messages + self._meta_messages = {} + for name in [u"dispersy-introduction-request", + u"dispersy-introduction-response", + u"dispersy-puncture-request", + u"dispersy-puncture", + u"dispersy-identity", + u"dispersy-missing-identity", + + u"dispersy-authorize", + u"dispersy-revoke", + u"dispersy-missing-proof", + u"dispersy-destroy-community"]: + self._meta_messages[name] = meta_messages[name] + + @property + def dispersy_auto_download_master_member(self): + return False + + @property + def dispersy_sync_bloom_filter_strategy(self): + # disable sync bloom filter + return lambda: None + + @property + def dispersy_acceptable_global_time_range(self): + # we will accept the full 64 bit global time range + return 2 ** 64 - self._global_time + + def update_strikes(self, now): + # does the community have any active candidates + if any(self.dispersy_yield_verified_candidates()): + self._strikes = 0 + else: + self._strikes += 1 + return self._strikes + + def initiate_meta_messages(self): + return [] + + def initiate_conversions(self): + return [BinaryTrackerConversion(self, "\x00")] + + def get_conversion_for_packet(self, packet): + try: + return super(TrackerCommunity, self).get_conversion_for_packet(packet) + + except KeyError: + # the dispersy version MUST BE available. Currently we only support \x00: BinaryConversion + if packet[0] == "\x00": + self.add_conversion(BinaryConversion(self, packet[1])) + + # try again + return super(TrackerCommunity, self).get_conversion_for_packet(packet) + + def dispersy_cleanup_community(self, message): + # since the trackers use in-memory databases, we need to store the destroy-community + # message, and all associated proof, separately. + host, port = message.candidate.sock_addr + print "DESTROY_IN", self._cid.encode("HEX"), message.authentication.member.mid.encode("HEX"), ord(message.conversion.dispersy_version), ord(message.conversion.community_version), host, port + + write = open(self._dispersy.persistent_storage_filename, "a+").write + write("# received dispersy-destroy-community from %s\n" % (str(message.candidate),)) + + identity_id = self._meta_messages[u"dispersy-identity"].database_id + execute = self._dispersy.database.execute + messages = [message] + stored = set() + while messages: + message = messages.pop() + + if not message.packet in stored: + stored.add(message.packet) + write(" ".join((message.name, message.packet.encode("HEX"), "\n"))) + + if not message.authentication.member.public_key in stored: + try: + packet, = execute(u"SELECT packet FROM sync WHERE meta_message = ? AND member = ?", (identity_id, message.authentication.member.database_id)).next() + except StopIteration: + pass + else: + write(" ".join(("dispersy-identity", str(packet).encode("HEX"), "\n"))) + + _, proofs = self._timeline.check(message) + messages.extend(proofs) + + return TrackerHardKilledCommunity + + def dispersy_get_introduce_candidate(self, exclude_candidate=None): + """ + Get an active candidate that is part of this community, selected round-robin (no longer at random).
+ """ + assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates" + first_candidate = None + while True: + result = self._walked_stumbled_candidates.next() + if result == first_candidate: + result = None + + if not first_candidate: + first_candidate = result + + if result and exclude_candidate: + # same candidate as requesting the introduction + if result == exclude_candidate: + continue + + # cannot introduce a non-tunnelled candidate to a tunneled candidate (it's swift instance will not + # get it) + if not exclude_candidate.tunnel and result.tunnel: + continue + + # cannot introduce two nodes that are behind a different symmetric NAT + if (exclude_candidate.connection_type == u"symmetric-NAT" and + result.connection_type == u"symmetric-NAT" and + not exclude_candidate.wan_address[0] == result.wan_address[0]): + continue + + return result + +class TrackerDispersy(Dispersy): + + def __init__(self, callback, endpoint, working_directory, silent=False): + super(TrackerDispersy, self).__init__(callback, endpoint, working_directory, u":memory:") + + # non-autoload nodes + self._non_autoload = set() + self._non_autoload.update(host for host, _ in self._bootstrap_candidates.iterkeys()) + # leaseweb machines, some are running boosters, they never unload a community + self._non_autoload.update(["95.211.105.65", "95.211.105.67", "95.211.105.69", "95.211.105.71", "95.211.105.73", "95.211.105.75", "95.211.105.77", "95.211.105.79", "95.211.105.81", "85.17.81.36"]) + + # location of persistent storage + self._persistent_storage_filename = os.path.join(working_directory, "persistent-storage.data") + self._silent = silent + self._my_member = None + + callback.register(self._create_my_member) + callback.register(self._load_persistent_storage) + callback.register(self._unload_communities) + + if not self._silent: + callback.register(self._report_statistics) + + def _create_my_member(self): + # generate a new my-member + ec = ec_generate_key(u"very-low") + self._my_member = self.get_member(ec_to_public_bin(ec), ec_to_private_bin(ec)) + + @property + def persistent_storage_filename(self): + return self._persistent_storage_filename + + def get_community(self, cid, load=False, auto_load=True): + try: + return super(TrackerDispersy, self).get_community(cid, True, True) + except KeyError: + self._communities[cid] = TrackerCommunity.join_community(self, self.get_temporary_member_from_id(cid), self._my_member) + return self._communities[cid] + + def _load_persistent_storage(self): + # load all destroyed communities + try: + packets = [packet.decode("HEX") for _, packet in (line.split() for line in open(self._persistent_storage_filename, "r") if not line.startswith("#"))] + except IOError: + pass + else: + candidate = LoopbackCandidate() + for packet in reversed(packets): + try: + self.on_incoming_packets([(candidate, packet)], cache=False, timestamp=time()) + except: + logger.exception("Error while loading from persistent-destroy-community.data") + + def _convert_packets_into_batch(self, packets): + """ + Ensure that communities are loaded when the packet is received from a non-bootstrap node, + otherwise, load and auto-load are disabled. 
+ """ + def filter_non_bootstrap_nodes(): + for candidate, packet in packets: + cid = packet[2:22] + + if not cid in self._communities and False: # candidate.sock_addr[0] in self._non_autoload: + if __debug__: + logger.warn("drop a %d byte packet (received from non-autoload node) from %s", len(packet), candidate) + self._statistics.dict_inc(self._statistics.drop, "_convert_packets_into_batch:from bootstrap node for unloaded community") + continue + + yield candidate, packet + + packets = list(filter_non_bootstrap_nodes()) + if packets: + return super(TrackerDispersy, self)._convert_packets_into_batch(packets) + + else: + return [] + + def _unload_communities(self): + def is_active(community, now): + # check 1: does the community have any active candidates + if community.update_strikes(now) < 3: + return True + + # check 2: does the community have any cached messages waiting to be processed + for meta in self._batch_cache.iterkeys(): + if meta.community == community: + return True + + # the community is inactive + return False + + while True: + yield 180.0 + now = time() + inactive = [community for community in self._communities.itervalues() if not is_active(community, now)] + logger.debug("cleaning %d/%d communities", len(inactive), len(self._communities)) + for community in inactive: + community.unload_community() + + def _report_statistics(self): + while True: + yield 300.0 + mapping = {TrackerCommunity: 0, TrackerHardKilledCommunity: 0} + for community in self._communities.itervalues(): + mapping[type(community)] += 1 + + print "BANDWIDTH", self._endpoint.total_up, self._endpoint.total_down + print "COMMUNITY", mapping[TrackerCommunity], mapping[TrackerHardKilledCommunity] + print "CANDIDATE2", sum(len(list(community.dispersy_yield_verified_candidates())) for community in self._communities.itervalues()) + + if self._statistics.outgoing: + for key, value in self._statistics.outgoing.iteritems(): + print "OUTGOING", key, value + + def create_introduction_request(self, community, destination, allow_sync, forward=True): + # prevent steps towards other trackers + if not isinstance(destination, BootstrapCandidate): + return super(TrackerDispersy, self).create_introduction_request(community, destination, allow_sync, forward) + + def check_introduction_request(self, messages): + for message in super(TrackerDispersy, self).check_introduction_request(messages): + if isinstance(message, Message.Implementation) and isinstance(message.candidate, BootstrapCandidate): + yield DropMessage(message, "drop dispersy-introduction-request from bootstrap peer") + continue + + yield message + + def on_introduction_request(self, messages): + if not self._silent: + hex_cid = messages[0].community.cid.encode("HEX") + for message in messages: + host, port = message.candidate.sock_addr + print "REQ_IN2", hex_cid, message.authentication.member.mid.encode("HEX"), ord(message.conversion.dispersy_version), ord(message.conversion.community_version), host, port + return super(TrackerDispersy, self).on_introduction_request(messages) + + def on_introduction_response(self, messages): + if not self._silent: + hex_cid = messages[0].community.cid.encode("HEX") + for message in messages: + host, port = message.candidate.sock_addr + print "RES_IN2", hex_cid, message.authentication.member.mid.encode("HEX"), ord(message.conversion.dispersy_version), ord(message.conversion.community_version), host, port + return super(TrackerDispersy, self).on_introduction_response(messages) + + +def setup_dispersy(dispersy): + 
+
+
+def setup_dispersy(dispersy):
+    dispersy.define_auto_load(TrackerCommunity)
+    dispersy.define_auto_load(TrackerHardKilledCommunity)
+
+
+def main():
+    command_line_parser = optparse.OptionParser()
+    command_line_parser.add_option("--profiler", action="store_true", help="use cProfile on the Dispersy thread", default=False)
+    command_line_parser.add_option("--memory-dump", action="store_true", help="use meliae to dump the memory periodically", default=False)
+    command_line_parser.add_option("--statedir", action="store", type="string", help="use an alternate statedir", default=".")
+    command_line_parser.add_option("--ip", action="store", type="string", default="0.0.0.0", help="Dispersy uses this ip")
+    command_line_parser.add_option("--port", action="store", type="int", help="Dispersy uses this UDP port", default=6421)
+    command_line_parser.add_option("--silent", action="store_true", help="prevent the tracker from printing to the console", default=False)
+
+    # parse command-line arguments
+    opt, _ = command_line_parser.parse_args()
+
+    # start Dispersy
+    dispersy = TrackerDispersy(MainThreadCallback("Dispersy"), StandaloneEndpoint(opt.port, opt.ip), unicode(opt.statedir), bool(opt.silent))
+    dispersy.callback.register(setup_dispersy, (dispersy,))
+    dispersy.start()
+
+    def signal_handler(sig, frame):
+        print "Received signal '", sig, "' in", frame, "(shutting down)"
+        dispersy.stop(timeout=0.0)
+    signal.signal(signal.SIGINT, signal_handler)
+
+    # wait forever
+    dispersy.callback.loop()
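main() above turns SIGINT into a non-blocking stop request instead of letting the default KeyboardInterrupt tear down the thread. The same wiring in isolation; install_sigint_shutdown and stop_callable are hypothetical names standing in for dispersy.stop:

    import signal

    def install_sigint_shutdown(stop_callable):
        # translate Ctrl-C into an orderly, non-blocking shutdown request
        def handler(sig, frame):
            stop_callable(timeout=0.0)
        signal.signal(signal.SIGINT, handler)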
diff -Nru tribler-6.2.0/debian/changelog tribler-6.2.0/debian/changelog
--- tribler-6.2.0/debian/changelog	2013-08-07 12:35:27.000000000 +0000
+++ tribler-6.2.0/debian/changelog	2013-08-07 14:10:39.000000000 +0000
@@ -1,6 +1,11 @@
-tribler (6.2.0-0~webupd8~saucy) saucy; urgency=medium
+tribler (6.2.0-0~webupd8~saucy1) saucy; urgency=medium
 
-  * New upstream release.
+  * New upstream release
+  * rules: remove override_dh_auto_clean
+  * control: depend on gconf2 (>= 2.28.1-2)
+  * control: don't depend on vlc (>= 1.1.0); instead, add it under Recommends
+  * to get tribler-swift: cd Tribler/SwiftEngine && svn co http://svn.tribler.org/libswift/branches/arno/swift-like-ftp .
+  * to get dispersy: cd Tribler/dispersy && svn co http://svn.tribler.org/dispersy/trunk .
 
 -- Alin Andrei Wed, 07 Aug 2013 13:35:29 +0200
diff -Nru tribler-6.2.0/debian/control tribler-6.2.0/debian/control
--- tribler-6.2.0/debian/control	2013-08-07 12:36:55.000000000 +0000
+++ tribler-6.2.0/debian/control	2013-08-07 13:14:30.000000000 +0000
@@ -17,6 +17,7 @@
          python-wxgtk2.8,
          tribler-swift,
          python-libtorrent,
+         gconf2 (>= 2.28.1-2),
          ${misc:Depends},
          ${python:Depends}
 Recommends: vlc (>= 1.1.0)
diff -Nru tribler-6.2.0/debian/rules tribler-6.2.0/debian/rules
--- tribler-6.2.0/debian/rules	2013-07-31 10:45:22.000000000 +0000
+++ tribler-6.2.0/debian/rules	2013-08-07 13:14:35.000000000 +0000
@@ -50,10 +50,6 @@
 	rm -f $(CURDIR)/debian/tribler/usr/share/tribler/Tribler/Main/webUI/static/mootools.js
 	dh_link -ptribler usr/share/javascript/mootools/mootools.js usr/share/tribler/Tribler/Main/webUI/static/mootools.js
 
-override_dh_auto_clean:
-	make -C $(CURDIR)/Tribler/SwiftEngine clean || echo "SwiftEngine cleaned"
-	dh_auto_clean
-
 #TODO: Fix this
 get-orig-source:
 	set -e; if echo $(DEB_VERSION) | grep -c "svn"; \
diff -Nru tribler-6.2.0/debian/source/format tribler-6.2.0/debian/source/format
--- tribler-6.2.0/debian/source/format	2013-08-07 14:13:21.036466359 +0000
+++ tribler-6.2.0/debian/source/format	2013-08-07 14:13:22.016465264 +0000
@@ -1 +1 @@
-3.0 (quilt)
+3.0 (native)
'
+for tostr in `grep -v '#' ../$SERVERS`; do
+	to=${tostr%:*}
+	echo '>'$to'
+'$from'>'
+	cat $from-$to.html
+	if [ -e "$from-$to.big.png" ]; then
+		echo ""
+		echo ""
+		echo ""
+	fi
+	echo '