diff -Nru foolscap-0.5.0+dfsg/ChangeLog foolscap-0.5.1+dfsg/ChangeLog --- foolscap-0.5.0+dfsg/ChangeLog 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/ChangeLog 2010-03-26 04:28:01.000000000 +0000 @@ -1,3 +1,45 @@ +2010-03-25 Brian Warner + + * foolscap/_version.py: release Foolscap-0.5.1 + * misc/{dapper|edgy|feisty|gutsy|hardy|sarge|etch|sid}/debian/changelog: + same + +2010-03-25 Brian Warner + + * foolscap/banana.py, broker.py, copyable.py, pb.py + * foolscap/appserver/cli.py, foolscap/logging/gatherer.py + * foolscap/test/test_call.py, test_logging.py, test_observer.py, + * test_promise.py, test_reference.py, test_registration.py, + * test_schema.py, test_tub.py: clean up lots of pyflakes warnings, + revealed by the new more-strict version of pyflakes + + * foolscap/banana.py (Banana.dataReceived): apply zooko's patch to + use stringchain on the inbound data path, to fix the O(n^2) + performance in large tokens (e.g. a 10MB string token). Thanks + Zooko! Closes #149. + * foolscap/test/bench_banana.py: also zooko's performance tests + + * foolscap/stringchain.py: Zooko's utility class to efficiently + handle large strings split into several pieces, such as the + inbound socket buffers that Banana.dataReceived() winds up with. + * foolscap/test/test_stringchain.py: unit tests for the same. + Zooko has given me permission to distribute both of these under + Foolscap's MIT license. + +2010-03-14 Brian Warner + + * foolscap/constraint.py: remove maxSize/maxDepth methods, and the + related UnboundedSchema exception. As described in ticket #127, + I'm giving up on resource-exhaustion defenses, which allows for a + lot of code simplification. + * foolscap/{copyable.py|removeinterface.py|schema.py}: same + * foolscap/slicers/*.py: same + * foolscap/test/test_schema.py: remove tests + * doc/jobs.txt: remove TODO items around maxSize + + * foolscap/_version.py: bump version while between releases + * misc/*/debian/changelog: same + 2010-01-18 Brian Warner * foolscap/_version.py: release Foolscap-0.5.0 diff -Nru foolscap-0.5.0+dfsg/debian/changelog foolscap-0.5.1+dfsg/debian/changelog --- foolscap-0.5.0+dfsg/debian/changelog 2010-03-30 20:36:19.000000000 +0100 +++ foolscap-0.5.1+dfsg/debian/changelog 2010-03-30 20:36:20.000000000 +0100 @@ -1,3 +1,10 @@ +foolscap (0.5.1+dfsg-0ubuntu1) lucid; urgency=low + + * New upstream release (LP: #548993) + - Performance improvement for large data transfers + + -- Mackenzie Morgan Tue, 30 Mar 2010 15:06:29 -0400 + foolscap (0.5.0+dfsg-1) unstable; urgency=low [ Elliot Murphy ] diff -Nru foolscap-0.5.0+dfsg/debian/control foolscap-0.5.1+dfsg/debian/control --- foolscap-0.5.0+dfsg/debian/control 2010-03-30 20:36:19.000000000 +0100 +++ foolscap-0.5.1+dfsg/debian/control 2010-03-30 20:36:20.000000000 +0100 @@ -1,7 +1,8 @@ Source: foolscap Section: python Priority: optional -Maintainer: Debian Python Modules Team +XSBC-Original-Maintainer: Debian Python Modules Team +Maintainer: Ubuntu Developers Uploaders: Stephan Peijnik Build-Depends: debhelper (>= 5) Build-Depends-Indep: python-all (>= 2.5.4-1~), python-support (>= 0.6.4), python-openssl, python-twisted-core, python-twisted-web, python-setuptools diff -Nru foolscap-0.5.0+dfsg/doc/examples/git-furl~ foolscap-0.5.1+dfsg/doc/examples/git-furl~ --- foolscap-0.5.0+dfsg/doc/examples/git-furl~ 1970-01-01 01:00:00.000000000 +0100 +++ foolscap-0.5.1+dfsg/doc/examples/git-furl~ 2009-07-01 03:53:34.000000000 +0100 @@ -0,0 +1 @@ +#!/usr/bin/env python diff -Nru foolscap-0.5.0+dfsg/doc/jobs.txt 
foolscap-0.5.1+dfsg/doc/jobs.txt --- foolscap-0.5.0+dfsg/doc/jobs.txt 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/doc/jobs.txt 2010-03-15 03:17:37.000000000 +0000 @@ -156,177 +156,6 @@ the __implements__ list consulted) before any of the object's tokens are accepted. -* security TODOs: - -** size constraints on the set-vocab sequence - -* implement schema.maxSize() - -In newpb, schemas serve two purposes: - - a) make programs safer by reducing the surprises that can appear in their - arguments (i.e. factoring out argument-checking in a useful way) - - b) remove memory-consumption DoS attacks by putting an upper bound on the - memory consumed by any particular message. - -Each schema has a pair of methods named maxSize() and maxDepth() which -provide this upper bound. While the schema is in effect (say, during the -receipt of a particular named argument to a remotely-invokable method), at -most X bytes and Y slicer frames will be in use before either the object is -accepted and processed or the schema notes the violation and the object is -rejected (whereupon the temporary storage is released and all further bytes -in the rejected object are simply discarded). Strictly speaking, the number -returned by maxSize() is the largest string on the wire which has not yet -been rejected as violating the constraint, but it is also a reasonable -metric to describe how much internal storage must be used while processing -it. (To achieve greater accuracy would involve knowing exactly how large -each Python type is; not a sensible thing to attempt). - -The idea is that someone who is worried about an attacker throwing a really -long string or an infinitely-nested list at them can ask the schema just what -exactly their current exposure is. The tradeoff between flexibility ("accept -any object whatsoever here") and exposure to DoS attack is then user-visible -and thus user-selectable. - -To implement maxSize() for a basic schema (like a string), you simply need -to look at banana.xhtml and see how basic tokens are encoded (you will also -need to look at banana.py and see how deserialization is actually -implemented). For a schema.StringConstraint(32) (which accepts strings <= 32 -characters in length), the largest serialized form that has not yet been -either accepted or rejected is: - - 64 bytes (header indicating 0x000000..0020 with lots of leading zeros) - + 1 byte (STRING token) - + 32 bytes (string contents) - = 97 - -If the header indicates a conforming length (<=32) then just after the 32nd -byte is received, the string object is created and handed to up the stack, so -the temporary storage tops out at 97. If someone is trying to spam us with a -million-character string, the serialized form would look like: - - 64 bytes (header indicating 1-million in hex, with leading zeros) -+ 1 byte (STRING token) -= 65 - -at which point the receive parser would check the constraint, decide that -1000000 > 32, and reject the remainder of the object. - -So (with the exception of pass/fail maxSize values, see below), the following -should hold true: - - schema.StringConstraint(32).maxSize() == 97 - -Now, schemas which represent containers have size limits that are the sum of -their contents, plus some overhead (and a stack level) for the container -itself. 
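To make that single-token arithmetic concrete before the container examples, here is a minimal sketch (an illustrative helper, not the maxSize() implementation this release removes):

    # Worst-case wire size for a length-limited string token: a token header
    # of up to 64 bytes, one STRING type byte, then the string body itself.
    HEADER_BYTES = 64
    TYPE_BYTE = 1

    def string_constraint_max_size(max_length):
        return HEADER_BYTES + TYPE_BYTE + max_length

    assert string_constraint_max_size(32) == 97   # the StringConstraint(32) example above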
For example, a list of two small integers is represented in newbanana -as: - - OPEN(list) - INT - INT - CLOSE() - -which really looks like: - - opencount-OPEN - len-STRING-"list" - value-INT - value-INT - opencount-CLOSE - -This sequence takes at most: - - opencount-OPEN: 64+1 - len-STRING-"list": 64+1+1000 (opentypes are confined to be <= 1k long) - value-INT: 64+1 - value-INT: 64+1 - opencount-CLOSE: 64+1 - -or 5*(64+1)+1000 = 1325, or rather: - - 3*(64+1)+1000 + N*(IntConstraint().maxSize()) - -So ListConstraint.maxSize is computed by doing some math involving the -.maxSize value of the objects that go into it (the ListConstraint.constraint -attribute). This suggests a recursive algorithm. If any constraint is -unbounded (say a ListConstraint with no limit on the length of the list), -then maxSize() raises UnboundedSchema to indicate that there is no limit on -the size of a conforming string. Clearly, if any constraint is found to -include itself, UnboundedSchema must also be raised. - -This is a loose upper bound. For example, one non-conforming input string -would be: - - opencount-OPEN: 64+1 - len-STRING-"x"*1000: 64+1+1000 - -The entire string would be accepted before checking to see which opentypes -were valid: the ListConstraint only accepts the "list" opentype and would -reject this string immediately after the 1000th "x" was received. So a -tighter upper bound would be 2*65+1000 = 1130. - -In general, the bound is computed by walking through the deserialization -process and identifying the largest string that could make it past the -validity checks. There may be later checks that will reject the string, but -if it has not yet been rejected, then it still represents exposure for a -memory consumption DoS. - -** pass/fail sizes - -I started to think that it was necessary to have each constraint provide two -maxSize numbers: one of the largest sequence that could possibly be accepted -as valid, and a second which was the largest sequence that could be still -undecided. This would provide a more accurate upper bound because most -containers will respond to an invalid object by abandoning the rest of the -container: i.e. if the current active constraint is: - - ListConstraint(StringConstraint(32), maxLength=30) - -then the first thing that doesn't match the string constraint (say an -instance, or a number, or a 33-character string) will cause the ListUnslicer -to go into discard-everything mode. This makes a significant difference when -the per-item constraint allows opentypes, because the OPEN type (a string) is -constrained to 1k bytes. The item constraint probably imposes a much smaller -limit on the set of actual strings that would be accepted, so no -kilobyte-long opentype will possibly make it past that constraint. That means -there can only be one outstanding invalid object. So the worst case (maximal -length) string that has not yet been rejected would be something like: - - OPEN(list) - validthing [0] - validthing [1] - ... - validthing [n-1] - long-invalid-thing - -because if the long-invalid thing had been received earlier, the entire list -would have been abandoned. - -This suggests that the calculation for ListConstraint.maxSize() really needs -to be like - overhead - +(len-1)*itemConstraint.maxSize(valid) - +(1)*itemConstraint.maxSize(invalid) - -I'm still not sure about this. I think it provides a significantly tighter -upper bound. 
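Both bounds are easy to state as code; a small sketch under the same 64-byte-header and 1k-opentype assumptions (illustrative helpers, not foolscap's removed implementation):

    HEADER_AND_TYPE = 64 + 1    # worst-case token header plus the type byte
    OPENTYPE_LIMIT = 1000       # opentype strings are confined to <= 1k

    def list_bound_loose(length, item_max):
        # OPEN header, opentype string, CLOSE header, plus every item at its
        # own maximum size.
        overhead = 3 * HEADER_AND_TYPE + OPENTYPE_LIMIT
        return overhead + length * item_max

    # a list of two small integers: 3*(64+1) + 1000 + 2*(64+1) = 1325
    assert list_bound_loose(2, HEADER_AND_TYPE) == 1325

    def list_bound_tight(length, item_accept, item_reject):
        # Only one about-to-be-rejected item can be outstanding, so charge
        # (len-1) items at their "accept" size and one at its "reject" size.
        overhead = 3 * HEADER_AND_TYPE + OPENTYPE_LIMIT
        return overhead + (length - 1) * item_accept + item_reject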
The deserialization process itself does not try to achieve the -absolute minimal exposure (i.e., the opentype checker could take the set of -all known-valid open types, compute the maximum length, and then impose a -StringConstraint with that length instead of 1000), because it is, in -general, a inefficient hassle. There is a tradeoff between computational -efficiency and removing the slack in the maxSize bound, both in the -deserialization process (where the memory is actually consumed) and in -maxSize (where we estimate how much memory could be consumed). - -Anyway, maxSize() and maxDepth() (which is easier: containers add 1 to the -maximum of the maxDepth values of their possible children) need to be -implemented for all the Constraint classes. There are some tests (disabled) -in test_schema.py for this code: those tests assert specific values for -maxSize. Those values are probably wrong, so they must be updated to match -however maxSize actually works. - * decide upon what the "Shared" constraint should mean The idea of this one was to avoid some vulnerabilities by rejecting arbitrary diff -Nru foolscap-0.5.0+dfsg/foolscap/appserver/cli.py foolscap-0.5.1+dfsg/foolscap/appserver/cli.py --- foolscap-0.5.0+dfsg/foolscap/appserver/cli.py 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/appserver/cli.py 2010-03-26 03:37:35.000000000 +0000 @@ -181,7 +181,6 @@ def run(self, options): basedir = options.basedir stdout = options.stdout - stderr = options.stderr service_type = options.service_type service_args = options.service_args @@ -192,6 +191,7 @@ try: # validate the service args by instantiating one s = build_service(service_basedir, None, service_type, service_args) + del s except: shutil.rmtree(service_basedir) raise @@ -230,7 +230,6 @@ def run(self, options): basedir = options.basedir stdout = options.stdout - stderr = options.stderr furl_prefix = open(os.path.join(basedir, "furl_prefix")).read().strip() diff -Nru foolscap-0.5.0+dfsg/foolscap/banana.py foolscap-0.5.1+dfsg/foolscap/banana.py --- foolscap-0.5.0+dfsg/foolscap/banana.py 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/banana.py 2010-03-26 03:45:39.000000000 +0000 @@ -10,6 +10,7 @@ from foolscap.slicers.allslicers import RootSlicer, RootUnslicer from foolscap.slicers.allslicers import ReplaceVocabSlicer, AddVocabSlicer +import stringchain import tokens from tokens import SIZE_LIMIT, STRING, LIST, INT, NEG, \ LONGINT, LONGNEG, VOCAB, FLOAT, OPEN, CLOSE, ABORT, ERROR, \ @@ -585,7 +586,7 @@ # self.buffer with the inbound negotiation block. self.negotiated = False self.connectionAbandoned = False - self.buffer = '' + self.buffer = stringchain.StringChain() self.incomingVocabulary = {} self.skipBytes = 0 # used to discard a single long token @@ -701,26 +702,22 @@ # buffer, assemble into tokens # call self.receiveToken(token) with each if self.skipBytes: - if len(chunk) < self.skipBytes: + if len(chunk) <= self.skipBytes: # skip the whole chunk self.skipBytes -= len(chunk) return # skip part of the chunk, and stop skipping chunk = chunk[self.skipBytes:] self.skipBytes = 0 - buffer = self.buffer + chunk + self.buffer.append(chunk) # Loop through the available input data, extracting one token per # pass. 
- while buffer: - assert self.buffer != buffer, \ - ("Banana.handleData: no progress made: %s %s" % - (repr(buffer),)) - self.buffer = buffer + while len(self.buffer): + first65 = self.buffer.popleft(65) pos = 0 - - for ch in buffer: + for ch in first65: if ch >= HIGH_BIT_SET: break pos = pos + 1 @@ -728,20 +725,22 @@ # drop the connection. We log more of the buffer, but not # all of it, to make it harder for someone to spam our # logs. + s = first65 + self.buffer.popleft(200) raise BananaError("token prefix is limited to 64 bytes: " - "but got %r" % (buffer[:200],)) + "but got %r" % s) else: # we've run out of buffer without seeing the high bit, which # means we're still waiting for header to finish + self.buffer.appendleft(first65) return assert pos <= 64 # At this point, the header and type byte have been received. # The body may or may not be complete. - typebyte = buffer[pos] + typebyte = first65[pos] if pos: - header = b1282int(buffer[:pos]) + header = b1282int(first65[:pos]) else: header = 0 @@ -796,7 +795,7 @@ top.openerCheckToken(typebyte, header, self.opentype) else: top.checkToken(typebyte, header) - except Violation, v: + except Violation: rejected = True f = BananaFailure() if wasInOpen: @@ -811,7 +810,7 @@ # them with extreme prejudice. raise BananaError("oversized ERROR token") - rest = buffer[pos+1:] + self.buffer.appendleft(first65[pos+1:]) # determine what kind of token it is. Each clause finishes in # one of four ways: @@ -830,7 +829,6 @@ # being passed up to the current Unslicer if typebyte == OPEN: - buffer = rest self.inboundOpenCount = header if rejected: if self.debugReceive: @@ -853,7 +851,6 @@ continue elif typebyte == CLOSE: - buffer = rest count = header if self.discardCount: self.discardCount -= 1 @@ -865,7 +862,6 @@ continue elif typebyte == ABORT: - buffer = rest count = header # TODO: this isn't really a Violation, but we need something # to describe it. 
It does behave identically to what happens @@ -892,28 +888,26 @@ elif typebyte == ERROR: strlen = header - if len(rest) >= strlen: + if len(self.buffer) >= strlen: # the whole string is available - buffer = rest[strlen:] - obj = rest[:strlen] + obj = self.buffer.popleft(strlen) # handleError must drop the connection self.handleError(obj) return else: + self.buffer.appendleft(first65[:pos+1]) return # there is more to come elif typebyte == LIST: raise BananaError("oldbanana peer detected, " + "compatibility code not yet written") #listStack.append((header, [])) - #buffer = rest elif typebyte == STRING: strlen = header - if len(rest) >= strlen: + if len(self.buffer) >= strlen: # the whole string is available - buffer = rest[strlen:] - obj = rest[:strlen] + obj = self.buffer.popleft(strlen) # although it might be rejected else: # there is more to come @@ -922,24 +916,23 @@ # dropped if self.debugReceive: print "DROPPED some string bits" - self.skipBytes = strlen - len(rest) - self.buffer = "" + self.skipBytes = strlen - len(self.buffer) + self.buffer.clear() + else: + self.buffer.appendleft(first65[:pos+1]) return elif typebyte == INT: - buffer = rest obj = int(header) elif typebyte == NEG: - buffer = rest # -2**31 is too large for a positive int, so go through # LongType first obj = int(-long(header)) elif typebyte == LONGINT or typebyte == LONGNEG: strlen = header - if len(rest) >= strlen: + if len(self.buffer) >= strlen: # the whole number is available - buffer = rest[strlen:] - obj = bytes_to_long(rest[:strlen]) + obj = bytes_to_long(self.buffer.popleft(strlen)) if typebyte == LONGNEG: obj = -obj # although it might be rejected @@ -948,33 +941,32 @@ if rejected: # drop all we have and note how much more should be # dropped - self.skipBytes = strlen - len(rest) - self.buffer = "" + self.skipBytes = strlen - len(self.buffer) + self.buffer.clear() + else: + self.buffer.appendleft(first65[:pos+1]) return elif typebyte == VOCAB: - buffer = rest obj = self.incomingVocabulary[header] # TODO: bail if expanded string is too big # this actually means doing self.checkToken(VOCAB, len(obj)) # but we have to make sure we handle the rejection properly elif typebyte == FLOAT: - if len(rest) >= 8: - buffer = rest[8:] - obj = struct.unpack("!d", rest[:8])[0] + if len(self.buffer) >= 8: + obj = struct.unpack("!d", self.buffer.popleft(8))[0] else: # this case is easier than STRING, because it is only 8 # bytes. We don't bother skipping anything. + self.buffer.appendleft(first65[:pos+1]) return elif typebyte == PING: - buffer = rest self.sendPONG(header) continue # otherwise ignored elif typebyte == PONG: - buffer = rest continue # otherwise ignored else: @@ -997,7 +989,9 @@ # while loop ends here - self.buffer = '' + # note: this is redundant, as there are no 'break' statements in that + # loop, and the loop exit condition is 'while len(self.buffer)' + self.buffer.clear() def handleOpen(self, openCount, objectCount, indexToken): @@ -1015,7 +1009,7 @@ return # they want more index tokens, leave .inOpen=True if self.debugReceive: print " opened[%d] with %s" % (openCount, child) - except Violation, v: + except Violation: # must discard the rest of the child object. 
There is no new # unslicer pushed yet, so we don't use abandonUnslicer self.inOpen = False @@ -1031,7 +1025,7 @@ self.receiveStack.append(child) try: child.start(objectCount) - except Violation, v: + except Violation: # the child is now on top, so use abandonUnslicer to discard the # rest of the child f = BananaFailure() @@ -1045,7 +1039,7 @@ assert isinstance(ready_deferred, defer.Deferred) try: top.receiveChild(token, ready_deferred) - except Violation, v: + except Violation: # this is how the child says "I've been contaminated". We don't # pop them automatically: if they want that, they should return # back the failure in their reportViolation method. @@ -1063,7 +1057,7 @@ try: obj, ready_deferred = child.receiveClose() - except Violation, v: + except Violation: # the child is contaminated. However, they're finished, so we # don't have to discard anything. Just give an Failure to the # parent instead of the object they would have returned. @@ -1074,7 +1068,7 @@ try: child.finish() - except Violation, v: + except Violation: # .finish could raise a Violation if an object that references # the child is just now deciding that they don't like it # (perhaps their TupleConstraint couldn't be asserted until the diff -Nru foolscap-0.5.0+dfsg/foolscap/broker.py foolscap-0.5.1+dfsg/foolscap/broker.py --- foolscap-0.5.0+dfsg/foolscap/broker.py 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/broker.py 2010-03-26 03:38:22.000000000 +0000 @@ -407,7 +407,6 @@ was registered with our Factory. """ - obj = None assert isinstance(clid, (int, long)) if clid == 0: return self diff -Nru foolscap-0.5.0+dfsg/foolscap/constraint.py foolscap-0.5.1+dfsg/foolscap/constraint.py --- foolscap-0.5.0+dfsg/foolscap/constraint.py 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/constraint.py 2010-03-15 03:17:37.000000000 +0000 @@ -29,9 +29,6 @@ } nothingTaster = {} -class UnboundedSchema(Exception): - pass - class IConstraint(Interface): pass class IRemoteMethodConstraint(IConstraint): @@ -177,40 +174,6 @@ # this default form passes everything return - def maxSize(self, seen=None): - """ - I help a caller determine how much memory could be consumed by the - input stream while my constraint is in effect. - - My constraint will be enforced against the bytes that arrive over - the wire. Eventually I will either accept the incoming bytes and my - Unslicer will provide an object to its parent (including any - subobjects), or I will raise a Violation exception which will kick - my Unslicer into 'discard' mode. - - I define maxSizeAccept as the maximum number of bytes that will be - received before the stream is accepted as valid. maxSizeReject is - the maximum that will be received before a Violation is raised. The - max of the two provides an upper bound on single objects. For - container objects, the upper bound is probably (n-1)*accept + - reject, because there can only be one outstanding - about-to-be-rejected object at any time. - - I return (maxSizeAccept, maxSizeReject). - - I raise an UnboundedSchema exception if there is no bound. - """ - raise UnboundedSchema - - def maxDepth(self): - """I return the greatest number Slicer objects that might exist on - the SlicerStack (or Unslicers on the UnslicerStack) while processing - an object which conforms to this constraint. This is effectively the - maximum depth of the object tree. I raise UnboundedSchema if there is - no bound. 
- """ - raise UnboundedSchema - COUNTERBYTES = 64 # max size of opencount def OPENBYTES(self, dummy): @@ -265,13 +228,6 @@ if not self.regexp.search(obj): raise Violation("regexp failed to match") - def maxSize(self, seen=None): - if self.maxLength == None: - raise UnboundedSchema - return 64+1+self.maxLength - def maxDepth(self, seen=None): - return 1 - class IntegerConstraint(Constraint): opentypes = [] # redundant # taster set in __init__ @@ -297,15 +253,6 @@ if abs(obj) >= 2**(8*self.maxBytes): raise Violation("number too large") - def maxSize(self, seen=None): - if self.maxBytes == None: - raise UnboundedSchema - if self.maxBytes == -1: - return 64+1 - return 64+1+self.maxBytes - def maxDepth(self, seen=None): - return 1 - class NumberConstraint(IntegerConstraint): """I accept floats, ints, and longs.""" name = "NumberConstraint" @@ -320,14 +267,6 @@ return IntegerConstraint.checkObject(self, obj, inbound) - def maxSize(self, seen=None): - # floats are packed into 8 bytes, so the shortest FLOAT token is - # 64+1+8 - intsize = IntegerConstraint.maxSize(self, seen) - return max(64+1+8, intsize) - def maxDepth(self, seen=None): - return 1 - #TODO @@ -337,18 +276,6 @@ def __init__(self, constraint, refLimit=None): self.constraint = IConstraint(constraint) self.refLimit = refLimit - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxDepth(seen) #TODO: might be better implemented with a .optional flag class Optional(Constraint): @@ -357,15 +284,3 @@ def __init__(self, constraint, default): self.constraint = IConstraint(constraint) self.default = default - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxDepth(seen) diff -Nru foolscap-0.5.0+dfsg/foolscap/copyable.py foolscap-0.5.1+dfsg/foolscap/copyable.py --- foolscap-0.5.0+dfsg/foolscap/copyable.py 2010-01-19 01:14:19.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/copyable.py 2010-03-26 03:36:44.000000000 +0000 @@ -9,8 +9,7 @@ import slicer, tokens from tokens import BananaError, Violation -from foolscap.constraint import OpenerConstraint, IConstraint, \ - ByteStringConstraint, UnboundedSchema, Optional +from foolscap.constraint import OpenerConstraint, IConstraint, Optional Interface = interface.Interface @@ -373,27 +372,6 @@ assert name not in self.keys.keys() self.keys[name] = IConstraint(constraint) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - total = self.OPENBYTES("attributedict") - for name, constraint in self.keys.iteritems(): - total += ByteStringConstraint(len(name)).maxSize(seen) - total += constraint.maxSize(seen[:]) - return total - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - # all the attribute names are 1-deep, so the min depth of the dict - # items is 1. 
The other "1" is for the AttributeDict container itself - return 1 + reduce(max, [c.maxDepth(seen[:]) - for c in self.itervalues()], 1) - def getAttrConstraint(self, attrname): c = self.keys.get(attrname) if c: diff -Nru foolscap-0.5.0+dfsg/foolscap/logging/gatherer.py foolscap-0.5.1+dfsg/foolscap/logging/gatherer.py --- foolscap-0.5.0+dfsg/foolscap/logging/gatherer.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/logging/gatherer.py 2010-03-26 03:38:53.000000000 +0000 @@ -476,7 +476,6 @@ return d # mostly for testing def new_incident(self, abs_fn, rel_fn, tubid_s, incident): - stdout = self.stdout or sys.stdout self.move_incident(rel_fn, tubid_s, incident) self.incidents_received += 1 diff -Nru foolscap-0.5.0+dfsg/foolscap/pb.py foolscap-0.5.1+dfsg/foolscap/pb.py --- foolscap-0.5.0+dfsg/foolscap/pb.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/pb.py 2010-03-26 03:39:13.000000000 +0000 @@ -397,6 +397,7 @@ # right (persistent) name even if the user never calls # tub.getLogPortFURL() directly. ignored = self.getLogPortFURL() + del ignored tubID = self.tubID if tubID is None: # RILogGatherer.logport requires a string for nodeid= @@ -425,6 +426,7 @@ return # getLogPortFURL() creates the logport-furlfile as a side-effect ignored = self.getLogPortFURL() + del ignored def getLogPortFURL(self): if not self.locationHints: diff -Nru foolscap-0.5.0+dfsg/foolscap/remoteinterface.py foolscap-0.5.1+dfsg/foolscap/remoteinterface.py --- foolscap-0.5.0+dfsg/foolscap/remoteinterface.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/remoteinterface.py 2010-03-15 03:17:37.000000000 +0000 @@ -3,7 +3,7 @@ import inspect from zope.interface import interface, providedBy, implements from foolscap.constraint import Constraint, OpenerConstraint, nothingTaster, \ - IConstraint, UnboundedSchema, IRemoteMethodConstraint, Optional, Any + IConstraint, IRemoteMethodConstraint, Optional, Any from foolscap.tokens import Violation, InvalidRemoteInterface from foolscap.schema import addToConstraintTypeMap from foolscap import ipb @@ -296,20 +296,6 @@ # location appropriately: they have more information than we do. self.responseConstraint.checkObject(results, inbound) - def maxSize(self, seen=None): - if self.acceptUnknown: - raise UnboundedSchema # there is no limit on that thing - if self.ignoreUnknown: - # for now, we ignore unknown arguments by accepting the object - # and then throwing it away. This makes us vulnerable to the - # memory consumed by that object. TODO: in the CallUnslicer, - # arrange to discard the ignored object instead of receiving it. - # When this is done, ignoreUnknown will not cause the schema to - # be unbounded and this clause should be removed. - raise UnboundedSchema - # TODO: implement the rest of maxSize, just like a dictionary - raise NotImplementedError - class UnconstrainedMethod(object): """I am a method constraint that accepts any arguments and any return value. 
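Stepping back to the banana.py hunks above: the old dataReceived() kept one flat string and did "buffer = self.buffer + chunk" followed by repeated re-slicing, while the new code appends chunks to a StringChain and pops at most 65 bytes (a 64-byte header plus the type byte) to find the next token. A rough sketch of why that matters for large tokens, using hypothetical helper names:

    from collections import deque

    def feed_concat(chunks):
        # Old style: every chunk re-copies the whole accumulated buffer, so a
        # 10MB string token arriving in small chunks moves O(n^2) bytes.
        buf = ''
        for c in chunks:
            buf = buf + c
        return buf

    def feed_chained(chunks):
        # New style (what StringChain does internally): keep the chunks in a
        # deque and only join them when a complete token is extracted, so
        # each append costs O(len(c)).
        buf = deque()
        for c in chunks:
            buf.append(c)
        return ''.join(buf)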
diff -Nru foolscap-0.5.0+dfsg/foolscap/schema.py foolscap-0.5.1+dfsg/foolscap/schema.py --- foolscap-0.5.0+dfsg/foolscap/schema.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/schema.py 2010-03-15 03:17:37.000000000 +0000 @@ -59,8 +59,7 @@ # make constraints available in a single location from foolscap.constraint import Constraint, Any, ByteStringConstraint, \ - IntegerConstraint, NumberConstraint, \ - UnboundedSchema, IConstraint, Optional, Shared + IntegerConstraint, NumberConstraint, IConstraint, Optional, Shared from foolscap.slicers.unicode import UnicodeConstraint from foolscap.slicers.bool import BooleanConstraint from foolscap.slicers.dict import DictConstraint @@ -124,24 +123,6 @@ raise Violation("object type %s does not satisfy any of %s" % (type(obj), self.alternatives)) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - # TODO: if the PolyConstraint contains itself directly, the effect - # is a nop. If a descendent contains the ancestor PolyConstraint, - # then I think it's unbounded.. must draw this out - raise UnboundedSchema # recursion - seen.append(self) - return reduce(max, [c.maxSize(seen[:]) - for c in self.alternatives]) - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return reduce(max, [c.maxDepth(seen[:]) for c in self.alternatives]) - ChoiceOf = PolyConstraint def AnyStringConstraint(*args, **kwargs): diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/bool.py foolscap-0.5.1+dfsg/foolscap/slicers/bool.py --- foolscap-0.5.0+dfsg/foolscap/slicers/bool.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/bool.py 2010-03-15 03:17:37.000000000 +0000 @@ -70,11 +70,3 @@ if self.value != None: if obj != self.value: raise Violation("not %s" % self.value) - - def maxSize(self, seen=None): - if not seen: seen = [] - return self.OPENBYTES("boolean") + self._myint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - return 1+self._myint.maxDepth(seen) - diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/dict.py foolscap-0.5.1+dfsg/foolscap/slicers/dict.py --- foolscap-0.5.0+dfsg/foolscap/slicers/dict.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/dict.py 2010-03-15 03:17:37.000000000 +0000 @@ -4,7 +4,7 @@ from twisted.internet.defer import Deferred from foolscap.tokens import Violation, BananaError from foolscap.slicer import BaseSlicer, BaseUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint +from foolscap.constraint import OpenerConstraint, Any, IConstraint from foolscap.util import AsyncAND class DictSlicer(BaseSlicer): @@ -145,23 +145,3 @@ for key, value in obj.iteritems(): self.keyConstraint.checkObject(key, inbound) self.valueConstraint.checkObject(value, inbound) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxKeys == None: - raise UnboundedSchema - keySize = self.keyConstraint.maxSize(seen[:]) - valueSize = self.valueConstraint.maxSize(seen[:]) - return self.OPENBYTES("dict") + self.maxKeys * (keySize + valueSize) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - keyDepth = self.keyConstraint.maxDepth(seen[:]) - valueDepth = self.valueConstraint.maxDepth(seen[:]) - return 1 + max(keyDepth, valueDepth) - - diff -Nru 
foolscap-0.5.0+dfsg/foolscap/slicers/list.py foolscap-0.5.1+dfsg/foolscap/slicers/list.py --- foolscap-0.5.0+dfsg/foolscap/slicers/list.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/list.py 2010-03-15 03:17:37.000000000 +0000 @@ -4,7 +4,7 @@ from twisted.internet.defer import Deferred from foolscap.tokens import Violation from foolscap.slicer import BaseSlicer, BaseUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint +from foolscap.constraint import OpenerConstraint, Any, IConstraint from foolscap.util import AsyncAND @@ -115,9 +115,8 @@ class ListConstraint(OpenerConstraint): """The object must be a list of objects, with a given maximum length. To - accept lists of any length, use maxLength=None (but you will get a - UnboundedSchema warning). All member objects must obey the given - constraint.""" + accept lists of any length, use maxLength=None. All member objects must + obey the given constraint.""" opentypes = [("list",)] name = "ListConstraint" @@ -136,19 +135,3 @@ raise Violation("list too short") for o in obj: self.constraint.checkObject(o, inbound) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxLength == None: - raise UnboundedSchema - return (self.OPENBYTES("list") + - self.maxLength * self.constraint.maxSize(seen)) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return 1 + self.constraint.maxDepth(seen) diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/none.py foolscap-0.5.1+dfsg/foolscap/slicers/none.py --- foolscap-0.5.0+dfsg/foolscap/slicers/none.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/none.py 2010-03-15 03:17:37.000000000 +0000 @@ -32,10 +32,3 @@ def checkObject(self, obj, inbound): if obj is not None: raise Violation("'%s' is not None" % (obj,)) - def maxSize(self, seen=None): - if not seen: seen = [] - return self.OPENBYTES("none") - def maxDepth(self, seen=None): - if not seen: seen = [] - return 1 - diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/set.py foolscap-0.5.1+dfsg/foolscap/slicers/set.py --- foolscap-0.5.0+dfsg/foolscap/slicers/set.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/set.py 2010-03-15 03:17:37.000000000 +0000 @@ -26,8 +26,7 @@ from foolscap.slicers.tuple import TupleUnslicer from foolscap.slicer import BaseUnslicer from foolscap.tokens import Violation -from foolscap.constraint import OpenerConstraint, UnboundedSchema, Any, \ - IConstraint +from foolscap.constraint import OpenerConstraint, Any, IConstraint from foolscap.util import AsyncAND class SetSlicer(ListSlicer): @@ -202,24 +201,3 @@ if self.constraint: for o in obj: self.constraint.checkObject(o, inbound) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxLength == None: - raise UnboundedSchema - if not self.constraint: - raise UnboundedSchema - return (self.OPENBYTES("immutable-set") + - self.maxLength * self.constraint.maxSize(seen)) - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - if not self.constraint: - raise UnboundedSchema - seen.append(self) - return 1 + self.constraint.maxDepth(seen) diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/tuple.py foolscap-0.5.1+dfsg/foolscap/slicers/tuple.py --- 
foolscap-0.5.0+dfsg/foolscap/slicers/tuple.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/tuple.py 2010-03-15 03:17:37.000000000 +0000 @@ -4,7 +4,7 @@ from foolscap.tokens import Violation from foolscap.slicer import BaseUnslicer from foolscap.slicers.list import ListSlicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint +from foolscap.constraint import OpenerConstraint, Any, IConstraint from foolscap.util import AsyncAND @@ -135,21 +135,3 @@ raise Violation("wrong size tuple") for i in range(len(self.constraints)): self.constraints[i].checkObject(obj[i], inbound) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - total = self.OPENBYTES("tuple") - for c in self.constraints: - total += c.maxSize(seen[:]) - return total - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return 1 + reduce(max, [c.maxDepth(seen[:]) - for c in self.constraints]) - diff -Nru foolscap-0.5.0+dfsg/foolscap/slicers/unicode.py foolscap-0.5.1+dfsg/foolscap/slicers/unicode.py --- foolscap-0.5.0+dfsg/foolscap/slicers/unicode.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/slicers/unicode.py 2010-03-15 03:17:37.000000000 +0000 @@ -4,7 +4,7 @@ from twisted.internet.defer import Deferred from foolscap.tokens import BananaError, STRING, VOCAB, Violation from foolscap.slicer import BaseSlicer, LeafUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema +from foolscap.constraint import OpenerConstraint, Any class UnicodeSlicer(BaseSlicer): opentype = ("unicode",) @@ -83,11 +83,3 @@ if self.regexp: if not self.regexp.search(obj): raise Violation("regexp failed to match") - - def maxSize(self, seen=None): - if self.maxLength == None: - raise UnboundedSchema - return self.OPENBYTES("unicode") + self.maxLength * 6 - - def maxDepth(self, seen=None): - return 1+1 diff -Nru foolscap-0.5.0+dfsg/foolscap/stringchain.py foolscap-0.5.1+dfsg/foolscap/stringchain.py --- foolscap-0.5.0+dfsg/foolscap/stringchain.py 1970-01-01 01:00:00.000000000 +0100 +++ foolscap-0.5.1+dfsg/foolscap/stringchain.py 2010-03-26 03:45:39.000000000 +0000 @@ -0,0 +1,209 @@ +import copy + +from collections import deque + +# Note: when changing this class, you should un-comment all the lines that say +# "assert self._assert_invariants()". + +class StringChain(object): + def __init__(self): + self.d = deque() + self.ignored = 0 + self.tailignored = 0 + self.len = 0 + + def append(self, s): + """ Add s to the end of the chain. """ + #assert self._assert_invariants() + if not s: + return + + # First trim off any ignored tail bytes. + if self.tailignored: + self.d[-1] = self.d[-1][:-self.tailignored] + self.tailignored = 0 + + self.d.append(s) + self.len += len(s) + #assert self._assert_invariants() + + def appendleft(self, s): + """ Add s to the beginning of the chain. """ + #assert self._assert_invariants() + if not s: + return + + # First trim off any ignored bytes. + if self.ignored: + self.d[0] = self.d[0][self.ignored:] + self.ignored = 0 + + self.d.appendleft(s) + self.len += len(s) + #assert self._assert_invariants() + + def __str__(self): + """ Return the entire contents of this chain as a single + string. (Obviously this requires copying all of the bytes, so don't do + this unless you need to.) 
This has a side-effect of collecting all the + bytes in this StringChain object into a single string which is stored + in the first element of its internal deque. """ + self._collapse() + if self.d: + return self.d[0] + else: + return '' + + def popleft_new_stringchain(self, bytes): + """ Remove some of the leading bytes of the chain and return them as a + new StringChain object. (Use str() on it if you want the bytes in a + string, or call popleft() instead of popleft_new_stringchain().) """ + #assert self._assert_invariants() + if not bytes or not self.d: + return self.__class__() + + assert bytes >= 0, bytes + + # We need to add at least this many bytes to the new StringChain. + bytesleft = bytes + self.ignored + n = self.__class__() + n.ignored = self.ignored + + while bytesleft > 0 and self.d: + s = self.d.popleft() + self.len -= (len(s) - self.ignored) + n.d.append(s) + n.len += (len(s)-self.ignored) + self.ignored = 0 + bytesleft -= len(s) + + overrun = - bytesleft + + if overrun > 0: + self.d.appendleft(s) + self.len += overrun + self.ignored = len(s) - overrun + n.len -= overrun + n.tailignored = overrun + else: + self.ignored = 0 + + # Either you got exactly how many you asked for, or you drained self entirely and you asked for more than you got. + #assert (n.len == bytes) or ((not self.d) and (bytes > self.len)), (n.len, bytes, len(self.d)) + + #assert self._assert_invariants() + #assert n._assert_invariants() + return n + + def popleft(self, bytes): + """ Remove some of the leading bytes of the chain and return them as a + string. """ + #assert self._assert_invariants() + if not bytes or not self.d: + return '' + + assert bytes >= 0, bytes + + # We need to add at least this many bytes to the result. + bytesleft = bytes + resstrs = [] + + s = self.d.popleft() + if self.ignored: + s = s[self.ignored:] + self.ignored = 0 + self.len -= len(s) + resstrs.append(s) + bytesleft -= len(s) + + while bytesleft > 0 and self.d: + s = self.d.popleft() + self.len -= len(s) + resstrs.append(s) + bytesleft -= len(s) + + overrun = - bytesleft + + if overrun > 0: + self.d.appendleft(s) + self.ignored = (len(s) - overrun) + self.len += overrun + resstrs[-1] = resstrs[-1][:-overrun] + + resstr = ''.join(resstrs) + + # Either you got exactly how many you asked for, or you drained self entirely and you asked for more than you got. + #assert (len(resstr) == bytes) or ((not self.d) and (bytes > self.len)), (len(resstr), bytes, len(self.d), overrun) + + #assert self._assert_invariants() + + return resstr + + def __len__(self): + #assert self._assert_invariants() + return self.len + + def trim(self, bytes): + """ Trim off some of the leading bytes. """ + #assert self._assert_invariants() + self.ignored += bytes + self.len -= bytes + while self.d and self.ignored >= len(self.d[0]): + s = self.d.popleft() + self.ignored -= len(s) + if self.len < 0: + self.len = 0 + if not self.d: + self.ignored = 0 + #assert self._assert_invariants() + + def clear(self): + """ Empty it out. 
""" + #assert self._assert_invariants() + self.d.clear() + self.ignored = 0 + self.tailignored = 0 + self.len = 0 + #assert self._assert_invariants() + + def copy(self): + n = self.__class__() + n.ignored = self.ignored + n.tailignored = self.tailignored + n.len = self.len + n.d = copy.copy(self.d) + #assert n._assert_invariants() + return n + + def _assert_invariants(self): + assert self.ignored >= 0, self.ignored + assert self.tailignored >= 0, self.tailignored + assert self.len >= 0, self.len + assert (not self.d) or (self.d[0]), \ + ("First element is required to be non-empty.", self.d and self.d[0]) + assert (not self.d) or (self.ignored < len(self.d[0])), \ + (self.ignored, self.d and len(self.d[0])) + assert (not self.d) or (self.tailignored < len(self.d[-1])), \ + (self.tailignored, self.d and len(self.d[-1])) + assert self.ignored+self.len+self.tailignored == sum([len(x) for x in self.d]), \ + (self.ignored, self.len, self.tailignored, sum([len(x) for x in self.d])) + return True + + def _collapse(self): + """ Concatenate all of the strings into one string and make that string + be the only element of the chain. (Obviously this requires copying all + of the bytes, so don't do this unless you need to.) """ + #assert self._assert_invariants() + # First trim off any leading ignored bytes. + if self.ignored: + self.d[0] = self.d[0][self.ignored:] + self.ignored = 0 + # Then any tail ignored bytes. + if self.tailignored: + self.d[-1] = self.d[-1][:-self.tailignored] + self.tailignored = 0 + if len(self.d) > 1: + newstr = ''.join(self.d) + self.d.clear() + self.d.append(newstr) + #assert self._assert_invariants() diff -Nru foolscap-0.5.0+dfsg/foolscap/test/bench_banana.py foolscap-0.5.1+dfsg/foolscap/test/bench_banana.py --- foolscap-0.5.0+dfsg/foolscap/test/bench_banana.py 1970-01-01 01:00:00.000000000 +0100 +++ foolscap-0.5.1+dfsg/foolscap/test/bench_banana.py 2010-03-26 03:45:39.000000000 +0000 @@ -0,0 +1,46 @@ +import StringIO +from foolscap import storage + +class TestTransport(StringIO.StringIO): + disconnectReason = None + def loseConnection(self): + pass + +class B(object): + def setup_huge_string(self, N): + """ This is actually a test for acceptable performance, and it needs to + be made more explicit, perhaps by being moved into a separate + benchmarking suite instead of living in this test suite. """ + self.banana = storage.StorageBanana() + self.banana.slicerClass = storage.UnsafeStorageRootSlicer + self.banana.unslicerClass = storage.UnsafeStorageRootUnslicer + self.banana.transport = TestTransport() + self.banana.connectionMade() + d = self.banana.send("a"*N) + d.addCallback(lambda res: self.banana.transport.getvalue()) + def f(o): + self._encoded_huge_string = o + d.addCallback(f) + reactor.runUntilCurrent() + + def bench_huge_string_decode(self, N): + """ This is actually a test for acceptable performance, and it needs to + be made more explicit, perhaps by being moved into a separate + benchmarking suite instead of living in this test suite. 
""" + o = self._encoded_huge_string + # results = [] + self.banana.prepare() + # d.addCallback(results.append) + CHOMP = 4096 + for i in range(0, len(o), CHOMP): + self.banana.dataReceived(o[i:i+CHOMP]) + # print results + +import sys +from twisted.internet import reactor +from pyutil import benchutil +b = B() +for N in 10**3, 10**4, 10**5, 10**6, 10**7: + print "%8d" % N, + sys.stdout.flush() + benchutil.rep_bench(b.bench_huge_string_decode, N, b.setup_huge_string) diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_call.py foolscap-0.5.1+dfsg/foolscap/test/test_call.py --- foolscap-0.5.0+dfsg/foolscap/test/test_call.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_call.py 2010-03-26 03:39:52.000000000 +0000 @@ -201,7 +201,6 @@ # propagating the maxLength= attribute of the StringConstraint to the # children (using the default of 1000 bytes instead). rr, target = self.setupTarget(HelperTarget()) - t = 4 d = rr.callRemote("choice1", 4) d.addCallback(lambda res: self.failUnlessEqual(res, None)) d.addCallback(lambda res: rr.callRemote("choice1", "a"*2000)) diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_logging.py foolscap-0.5.1+dfsg/foolscap/test/test_logging.py --- foolscap-0.5.0+dfsg/foolscap/test/test_logging.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_logging.py 2010-03-26 03:42:26.000000000 +0000 @@ -254,7 +254,7 @@ if runtime.platformType == "posix": events = self._read_logfile(os.path.join(got_logdir, files[0])) self.failUnlessEqual(len(events), 1+3) - header = events[0] + #header = events[0] self.failUnless("header" in events[0]) self.failUnlessEqual(events[0]["header"]["trigger"]["message"], "3-trigger") @@ -1284,7 +1284,6 @@ gatherer.d = defer.Deferred() gatherer.setServiceParent(self.parent) # that will start the gatherer - fn = gatherer._savefile_name gatherer_furl = gatherer.my_furl starting_timestamp = gatherer._starting_timestamp @@ -1319,7 +1318,6 @@ gatherer.d = defer.Deferred() gatherer.setServiceParent(self.parent) # that will start the gatherer - fn = gatherer._savefile_name gatherer_furl = gatherer.my_furl starting_timestamp = gatherer._starting_timestamp @@ -1350,7 +1348,6 @@ gatherer.d = defer.Deferred() gatherer.setServiceParent(self.parent) # that will start the gatherer - fn = gatherer._savefile_name gatherer_furlfile = os.path.join(basedir, gatherer.furlFile) starting_timestamp = gatherer._starting_timestamp @@ -1381,7 +1378,6 @@ gatherer.d = defer.Deferred() gatherer.setServiceParent(self.parent) # that will start the gatherer - fn = gatherer._savefile_name gatherer_furlfile = os.path.join(basedir, gatherer.furlFile) starting_timestamp = gatherer._starting_timestamp @@ -1416,7 +1412,6 @@ gatherer1.d = defer.Deferred() gatherer1.setServiceParent(self.parent) # that will start the gatherer - fn1 = gatherer1._savefile_name gatherer1_furl = gatherer1.my_furl starting_timestamp1 = gatherer1._starting_timestamp @@ -1427,7 +1422,6 @@ gatherer2.d = defer.Deferred() gatherer2.setServiceParent(self.parent) # that will start the gatherer - fn2 = gatherer2._savefile_name gatherer2_furl = gatherer2.my_furl starting_timestamp2 = gatherer2._starting_timestamp @@ -1438,7 +1432,6 @@ gatherer3.d = defer.Deferred() gatherer3.setServiceParent(self.parent) # that will start the gatherer - fn3 = gatherer3._savefile_name gatherer3_furl = gatherer3.my_furl starting_timestamp3 = gatherer3._starting_timestamp @@ -1480,15 +1473,13 @@ # leave the furlfile empty: use no gatherer t = GoodEnoughTub() - expected_tubid = t.tubID 
- if t.tubID is None: - expected_tubid = "" t.setServiceParent(self.parent) l = t.listenOn("tcp:0:interface=127.0.0.1") t.setLocation("127.0.0.1:%d" % l.getPortnum()) t.setOption("log-gatherer-furlfile", gatherer_fn) lp_furl = t.getLogPortFURL() + del lp_furl t.log("this message shouldn't make anything explode") test_log_gatherer_empty_furlfile.timeout = 20 @@ -1501,15 +1492,13 @@ # leave the furlfile missing: use no gatherer t = GoodEnoughTub() - expected_tubid = t.tubID - if t.tubID is None: - expected_tubid = "" t.setServiceParent(self.parent) l = t.listenOn("tcp:0:interface=127.0.0.1") t.setLocation("127.0.0.1:%d" % l.getPortnum()) t.setOption("log-gatherer-furlfile", gatherer_fn) lp_furl = t.getLogPortFURL() + del lp_furl t.log("this message shouldn't make anything explode") test_log_gatherer_missing_furlfile.timeout = 20 @@ -1592,6 +1581,7 @@ "message": "howdy", }) outmsg = out.getvalue() + del outmsg lp.saver.disconnected() # cause the file to be closed f = open(saveto_filename, "rb") data = pickle.load(f) # header @@ -1645,7 +1635,6 @@ config.parseOptions(argv) command = config.subCommand if command == "flogtool": - so = config.subOptions return cli.run_flogtool(argv[1:]) class CLI(unittest.TestCase): @@ -1671,7 +1660,7 @@ self.failUnless("to launch the daemon" in out, out) def test_create_gatherer_badly(self): - basedir = "logging/CLI/create_gatherer" + #basedir = "logging/CLI/create_gatherer" argv = ["flogtool", "create-gatherer", "--bogus-arg"] self.failUnlessRaises(usage.UsageError, cli.run_flogtool, argv[1:], run_by_human=False) diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_observer.py foolscap-0.5.1+dfsg/foolscap/test/test_observer.py --- foolscap-0.5.0+dfsg/foolscap/test/test_observer.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_observer.py 2010-03-26 03:42:38.000000000 +0000 @@ -18,6 +18,7 @@ d1.addCallback(_addmore) ol.fire("result") rep = repr(ol) + del rep d4 = ol.whenFired() dl = defer.DeferredList([d1,d2,d4]) return dl diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_promise.py foolscap-0.5.1+dfsg/foolscap/test/test_promise.py --- foolscap-0.5.0+dfsg/foolscap/test/test_promise.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_promise.py 2010-03-26 03:43:01.000000000 +0000 @@ -130,7 +130,6 @@ r(t) def testResolveFailure(self): - t = Target() p,r = makePromise() p = send(p).one(2) def _check(res): @@ -163,7 +162,6 @@ r(t) def testResolveFailure(self): - t = Target() p1,r = makePromise() p2 = p1.one(2) def _check(res): diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_reference.py foolscap-0.5.1+dfsg/foolscap/test/test_reference.py --- foolscap-0.5.0+dfsg/foolscap/test/test_reference.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_reference.py 2010-03-26 03:44:24.000000000 +0000 @@ -62,6 +62,7 @@ good_broker = broker.Broker(referenceable.TubRef(good_tubid)) good_tracker = referenceable.RemoteReferenceTracker(good_broker, 0, good_furl, ri) + del good_tracker self.failUnlessRaises(api.BananaError, referenceable.RemoteReferenceTracker, good_broker, 0, bad_furl, ri) diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_registration.py foolscap-0.5.1+dfsg/foolscap/test/test_registration.py --- foolscap-0.5.0+dfsg/foolscap/test/test_registration.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_registration.py 2010-03-26 03:43:29.000000000 +0000 @@ -13,6 +13,7 @@ tub = UnauthenticatedTub() tub.setLocation("bogus:1234567") u1 = 
tub.registerReference(t1) + del u1 results = [] w1 = weakref.ref(t1, results.append) del t1 @@ -32,6 +33,7 @@ tub.setLocation("bogus:1234567") name = tub._assignName(t1) url = tub.buildURL(name) + del url results = [] w1 = weakref.ref(t1, results.append) del t1 @@ -50,6 +52,7 @@ tub = UnauthenticatedTub() tub.setLocation("bogus:1234567") url = tub.registerReference(target) + del url def test_duplicate(self): basedir = "test_registration" diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_schema.py foolscap-0.5.1+dfsg/foolscap/test/test_schema.py --- foolscap-0.5.0+dfsg/foolscap/test/test_schema.py 2010-01-19 01:14:20.000000000 +0000 +++ foolscap-0.5.1+dfsg/foolscap/test/test_schema.py 2010-03-26 03:43:44.000000000 +0000 @@ -33,26 +33,10 @@ c.checkObject(obj, False) def violates(self, c, obj): self.assertRaises(schema.Violation, c.checkObject, obj, False) - def assertSize(self, c, maxsize): - return - self.assertEquals(c.maxSize(), maxsize) - def assertDepth(self, c, maxdepth): - self.assertEquals(c.maxDepth(), maxdepth) - def assertUnboundedSize(self, c): - self.assertRaises(schema.UnboundedSchema, c.maxSize) - def assertUnboundedDepth(self, c): - self.assertRaises(schema.UnboundedSchema, c.maxDepth) - - def testAny(self): - c = schema.Constraint() - self.assertUnboundedSize(c) - self.assertUnboundedDepth(c) def testInteger(self): # s_int32_t c = schema.IntegerConstraint() - self.assertSize(c, INTSIZE) - self.assertDepth(c, 1) self.conforms(c, 123) self.violates(c, 2**64) self.conforms(c, 0) @@ -66,8 +50,6 @@ def testLargeInteger(self): c = schema.IntegerConstraint(64) - self.assertSize(c, INTSIZE+64) - self.assertDepth(c, 1) self.conforms(c, 123) self.violates(c, "123") self.violates(c, None) @@ -78,9 +60,6 @@ def testByteString(self): c = schema.ByteStringConstraint(10) - self.assertSize(c, STR10) - self.assertSize(c, STR10) # twice to test seen=[] logic - self.assertDepth(c, 1) self.conforms(c, "I'm short") self.violates(c, "I am too long") self.conforms(c, "a" * 10) @@ -117,9 +96,6 @@ def testUnicode(self): c = schema.UnicodeConstraint(10) - #self.assertSize(c, USTR10) - #self.assertSize(c, USTR10) # twice to test seen=[] logic - self.assertDepth(c, 2) self.violates(c, "I'm a bytestring") self.conforms(c, u"I'm short") self.violates(c, u"I am too long") @@ -151,8 +127,6 @@ def testBool(self): c = schema.BooleanConstraint() - self.assertSize(c, 147) - self.assertDepth(c, 2) self.conforms(c, False) self.conforms(c, True) self.violates(c, 0) @@ -164,8 +138,11 @@ def testPoly(self): c = schema.PolyConstraint(schema.ByteStringConstraint(100), schema.IntegerConstraint()) - self.assertSize(c, 165) - self.assertDepth(c, 1) + self.conforms(c, "string") + self.conforms(c, 123) + self.violates(c, u"unicode") + self.violates(c, 123.4) + self.violates(c, ["not", "a", "list"]) def testTuple(self): c = schema.TupleConstraint(schema.ByteStringConstraint(10), @@ -176,18 +153,12 @@ self.violates(c, ("string", "string", "NaN")) self.violates(c, ("string that is too long", "string", 1)) self.violates(c, ["Are tuples", "and lists the same?", 0]) - self.assertSize(c, 72+75+165+73) - self.assertDepth(c, 2) def testNestedTuple(self): inner = schema.TupleConstraint(schema.ByteStringConstraint(10), schema.IntegerConstraint()) - self.assertSize(inner, 72+75+73) - self.assertDepth(inner, 2) outer = schema.TupleConstraint(schema.ByteStringConstraint(100), inner) - self.assertSize(outer, 72+165 + 72+75+73) - self.assertDepth(outer, 3) self.conforms(inner, ("hi", 2)) self.conforms(outer, ("long string here", ("short", 
3))) @@ -195,53 +166,25 @@ self.violates(outer, (("long string here", ("too long string", 3)))) outer2 = schema.TupleConstraint(inner, inner) - self.assertSize(outer2, 72+ 2*(72+75+73)) - self.assertDepth(outer2, 3) self.conforms(outer2, (("hi", 1), ("there", 2)) ) self.violates(outer2, ("hi", 1, "flat", 2) ) - def testUnbounded(self): - big = schema.ByteStringConstraint(None) - self.assertUnboundedSize(big) - self.assertDepth(big, 1) - self.conforms(big, "blah blah blah blah blah" * 1024) - self.violates(big, 123) - - bag = schema.TupleConstraint(schema.IntegerConstraint(), - big) - self.assertUnboundedSize(bag) - self.assertDepth(bag, 2) - - polybag = schema.PolyConstraint(schema.IntegerConstraint(), - bag) - self.assertUnboundedSize(polybag) - self.assertDepth(polybag, 2) - def testRecursion(self): # we have to fiddle with PolyConstraint's innards value = schema.ChoiceOf(schema.ByteStringConstraint(), schema.IntegerConstraint(), # will add 'value' here ) - self.assertSize(value, 1065) - self.assertDepth(value, 1) self.conforms(value, "key") self.conforms(value, 123) self.violates(value, []) mapping = schema.TupleConstraint(schema.ByteStringConstraint(10), value) - self.assertSize(mapping, 72+75+1065) - self.assertDepth(mapping, 2) self.conforms(mapping, ("name", "key")) self.conforms(mapping, ("name", 123)) value.alternatives = value.alternatives + (mapping,) - self.assertUnboundedSize(value) - self.assertUnboundedDepth(value) - self.assertUnboundedSize(mapping) - self.assertUnboundedDepth(mapping) - # but note that the constraint can still be applied self.conforms(mapping, ("name", 123)) self.conforms(mapping, ("name", "key")) @@ -254,8 +197,6 @@ def testList(self): l = schema.ListOf(schema.ByteStringConstraint(10)) - self.assertSize(l, 71 + 30*75) - self.assertDepth(l, 2) self.conforms(l, ["one", "two", "three"]) self.violates(l, ("can't", "fool", "me")) self.violates(l, ["but", "perspicacity", "is too long"]) @@ -263,14 +204,10 @@ self.conforms(l, ["short", "sweet"]) l2 = schema.ListOf(schema.ByteStringConstraint(10), 3) - self.assertSize(l2, 71 + 3*75) - self.assertDepth(l2, 2) self.conforms(l2, ["the number", "shall be", "three"]) self.violates(l2, ["five", "is", "...", "right", "out"]) l3 = schema.ListOf(schema.ByteStringConstraint(10), None) - self.assertUnboundedSize(l3) - self.assertDepth(l3, 2) self.conforms(l3, ["long"] * 35) self.violates(l3, ["number", 1, "rule", "is", 0, "numbers"]) @@ -281,7 +218,6 @@ def testSet(self): l = schema.SetOf(schema.IntegerConstraint(), 3) - self.assertDepth(l, 2) self.conforms(l, sets.Set([])) self.conforms(l, sets.Set([1])) self.conforms(l, sets.Set([1,2,3])) @@ -356,7 +292,6 @@ schema.IntegerConstraint(), maxKeys=4) - self.assertDepth(d, 2) self.conforms(d, {"a": 1, "b": 2}) self.conforms(d, {"foo": 123, "bar": 345, "blah": 456, "yar": 789}) self.violates(d, None) @@ -503,14 +438,14 @@ def violates_inbound(self, obj, constraint): try: constraint.checkObject(obj, True) - except Violation, f: + except Violation: return self.fail("constraint wasn't violated") def violates_outbound(self, obj, constraint): try: constraint.checkObject(obj, False) - except Violation, f: + except Violation: return self.fail("constraint wasn't violated") diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_stringchain.py foolscap-0.5.1+dfsg/foolscap/test/test_stringchain.py --- foolscap-0.5.0+dfsg/foolscap/test/test_stringchain.py 1970-01-01 01:00:00.000000000 +0100 +++ foolscap-0.5.1+dfsg/foolscap/test/test_stringchain.py 2010-03-26 03:45:39.000000000 +0000 @@ -0,0 
diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_stringchain.py foolscap-0.5.1+dfsg/foolscap/test/test_stringchain.py
--- foolscap-0.5.0+dfsg/foolscap/test/test_stringchain.py 1970-01-01 01:00:00.000000000 +0100
+++ foolscap-0.5.1+dfsg/foolscap/test/test_stringchain.py 2010-03-26 03:45:39.000000000 +0000
@@ -0,0 +1,188 @@
+
+from twisted.trial import unittest
+from foolscap.stringchain import StringChain as BufClass
+
+class T(unittest.TestCase):
+    def test_al(self):
+        c = BufClass()
+        c.append("ab")
+        self.failUnlessEqual(len(c), 2)
+        c.append("")
+        self.failUnlessEqual(len(c), 2)
+        c.append("c")
+        self.failUnlessEqual(len(c), 3)
+
+    def test_str(self):
+        c = BufClass()
+        c.append("ab")
+        c.append("c")
+        self.failUnlessEqual(str(c), "abc")
+
+    def test_trim(self):
+        c = BufClass()
+        c.append("ab")
+        c.append("c")
+        c.trim(1)
+        self.failUnlessEqual(str(c.copy()), "bc")
+        c.trim(1)
+        self.failUnlessEqual(str(c.copy()), "c")
+        c.trim(1)
+        self.failUnlessEqual(str(c.copy()), "")
+        c.append("ab")
+        c.append("c")
+        c.trim(2)
+        self.failUnlessEqual(str(c.copy()), "c")
+        c.trim(1)
+        self.failUnlessEqual(str(c.copy()), "")
+        c.append("a")
+        c.append("bc")
+        c.trim(2)
+        self.failUnlessEqual(str(c.copy()), "c")
+        c.trim(1)
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        c.trim(4) # We just silently trim all.
+        self.failUnlessEqual(str(c.copy()), "")
+
+    def test_popleft_new_stringchain(self):
+        c = BufClass()
+        c.append("ab")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "a")
+        self.failUnlessEqual(str(c.copy()), "b")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "b")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "a")
+        self.failUnlessEqual(str(c.copy()), "bc")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "b")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft_new_stringchain(2)
+        self.failUnlessEqual(str(s), "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("ab")
+        c.append("c")
+        s = c.popleft_new_stringchain(2)
+        self.failUnlessEqual(str(s), "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("a")
+        c.append("bc")
+        s = c.popleft_new_stringchain(2)
+        self.failUnlessEqual(str(s), "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft_new_stringchain(1)
+        self.failUnlessEqual(str(s), "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft_new_stringchain(4) # We just silently pop them all.
+        self.failUnlessEqual(str(s), "abc")
+        self.failUnlessEqual(str(c.copy()), "")
+
+    def test_popleft(self):
+        c = BufClass()
+        c.append("ab")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "a")
+        self.failUnlessEqual(str(c.copy()), "b")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "b")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "a")
+        self.failUnlessEqual(str(c.copy()), "bc")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "b")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft(2)
+        self.failUnlessEqual(s, "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("ab")
+        c.append("c")
+        s = c.popleft(2)
+        self.failUnlessEqual(s, "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("a")
+        c.append("bc")
+        s = c.popleft(2)
+        self.failUnlessEqual(s, "ab")
+        self.failUnlessEqual(str(c.copy()), "c")
+        s = c.popleft(1)
+        self.failUnlessEqual(s, "c")
+        self.failUnlessEqual(str(c.copy()), "")
+
+        c.append("abc")
+        s = c.popleft(4) # We just silently pop them all.
+        self.failUnlessEqual(s, "abc")
+        self.failUnlessEqual(str(c.copy()), "")
+
+    def test_tailignored(self):
+        c1 = BufClass()
+        c1.append("abcde")
+        c2 = c1.popleft_new_stringchain(2)
+        assert str(c2.copy()) == "ab", (str(c2.copy()),)
+        c2.append("f")
+        self.failUnlessEqual(str(c2.copy()), "abf")
+
+    def test_appendleft(self):
+        c1 = BufClass()
+        c1.append("abcd")
+        c1.appendleft("ef")
+        self.failUnlessEqual(str(c1.copy()), "efabcd")
+        s = c1.popleft(1)
+        self.failUnlessEqual(s, "e")
+        s = c1.popleft(2)
+        self.failUnlessEqual(s, "fa")
+        s = c1.popleft(3)
+        self.failUnlessEqual(s, "bcd")
+
+        c1 = BufClass()
+        c1.append("abcd")
+        c1.popleft(1)
+        c1.appendleft("ef")
+        self.failUnlessEqual(str(c1.copy()), "efbcd")
+        s = c1.popleft(1)
+        self.failUnlessEqual(s, "e")
+        s = c1.popleft(2)
+        self.failUnlessEqual(s, "fb")
+        s = c1.popleft(3)
+        self.failUnlessEqual(s, "cd")
+
+    def test_clear(self):
+        c1 = BufClass()
+        c1.append("abcd")
+        c1.clear()
+        self.failUnlessEqual(str(c1.copy()), '')
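Taken together, these tests pin down the small StringChain interface: append() and appendleft() add chunks, len() reports the total buffered bytes, popleft(n) returns exactly n bytes as a string, popleft_new_stringchain(n) returns them as a new chain, trim(n) discards them, and copy()/str() expose the buffered remainder. A minimal usage sketch based only on that tested behaviour (the chunk values here are illustrative):

    from foolscap.stringchain import StringChain

    buf = StringChain()
    buf.append("abcd")               # incoming chunks are stored as-is
    buf.append("efgh")
    assert len(buf) == 8
    token = buf.popleft(6)           # consume the first 6 bytes
    assert token == "abcdef"
    assert str(buf.copy()) == "gh"   # the rest stays buffered for later

The point of keeping the pieces in a chain rather than in one flat string is that the inbound path can append chunks and pop tokens off the front without re-copying everything still buffered, which is what made the old behaviour quadratic (#149).
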
diff -Nru foolscap-0.5.0+dfsg/foolscap/test/test_tub.py foolscap-0.5.1+dfsg/foolscap/test/test_tub.py
--- foolscap-0.5.0+dfsg/foolscap/test/test_tub.py 2010-01-19 01:14:20.000000000 +0000
+++ foolscap-0.5.1+dfsg/foolscap/test/test_tub.py 2010-03-26 03:44:00.000000000 +0000
@@ -90,7 +90,7 @@
 
     def test_set_location(self):
         t = GoodEnoughTub()
-        l = t.listenOn("tcp:0")
+        t.listenOn("tcp:0")
         t.setServiceParent(self.s)
         t.setLocation("127.0.0.1:12345")
         # setLocation may only be called once
@@ -402,7 +402,7 @@
         self.tubA.startService()
         self.tubB.startService()
 
-        l = self.tubB.listenOn("tcp:0")
+        self.tubB.listenOn("tcp:0")
         d = self.tubB.setLocationAutomatically()
         r = Receiver(self.tubB)
         d.addCallback(lambda res: self.tubB.registerReference(r))
diff -Nru foolscap-0.5.0+dfsg/foolscap/_version.py foolscap-0.5.1+dfsg/foolscap/_version.py
--- foolscap-0.5.0+dfsg/foolscap/_version.py 2010-01-19 01:14:19.000000000 +0000
+++ foolscap-0.5.1+dfsg/foolscap/_version.py 2010-03-26 04:18:38.000000000 +0000
@@ -1,2 +1,2 @@
-verstr = "0.5.0"
+verstr = "0.5.1"
diff -Nru foolscap-0.5.0+dfsg/foolscap.egg-info/PKG-INFO foolscap-0.5.1+dfsg/foolscap.egg-info/PKG-INFO
--- foolscap-0.5.0+dfsg/foolscap.egg-info/PKG-INFO 2010-01-19 01:14:46.000000000 +0000
+++ foolscap-0.5.1+dfsg/foolscap.egg-info/PKG-INFO 2010-03-26 04:29:02.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.0
 Name: foolscap
-Version: 0.5.0
+Version: 0.5.1
 Summary: Foolscap contains an RPC protocol for Twisted.
 Home-page: http://foolscap.lothar.com/trac
 Author: Brian Warner
diff -Nru foolscap-0.5.0+dfsg/foolscap.egg-info/SOURCES.txt foolscap-0.5.1+dfsg/foolscap.egg-info/SOURCES.txt
--- foolscap-0.5.0+dfsg/foolscap.egg-info/SOURCES.txt 2010-01-19 01:14:46.000000000 +0000
+++ foolscap-0.5.1+dfsg/foolscap.egg-info/SOURCES.txt 2010-03-26 04:29:02.000000000 +0000
@@ -22,6 +22,7 @@
 doc/use-cases.txt
 doc/using-foolscap.xhtml
 doc/examples/git-clone-furl
+doc/examples/git-furl~
 doc/examples/git-proxy-flappclient
 doc/examples/git-publish-with-furl
 doc/examples/git-remote-add-furl
@@ -61,6 +62,7 @@
 foolscap/slicer.py
 foolscap/sslverify.py
 foolscap/storage.py
+foolscap/stringchain.py
 foolscap/tokens.py
 foolscap/util.py
 foolscap/vocab.py
@@ -100,6 +102,7 @@
 foolscap/slicers/unicode.py
 foolscap/slicers/vocab.py
 foolscap/test/__init__.py
+foolscap/test/bench_banana.py
 foolscap/test/common.py
 foolscap/test/test__versions.py
 foolscap/test/test_appserver.py
@@ -122,6 +125,7 @@
 foolscap/test/test_registration.py
 foolscap/test/test_schema.py
 foolscap/test/test_serialize.py
+foolscap/test/test_stringchain.py
 foolscap/test/test_sturdyref.py
 foolscap/test/test_tub.py
 foolscap/test/test_util.py
@@ -187,4 +191,5 @@
 misc/testutils/figleaf2html
 misc/testutils/figleaf_htmlizer.py
 misc/testutils/trial_figleaf.py
+misc/testutils/twisted/plugins/dropin.cache
 misc/testutils/twisted/plugins/figleaf_trial_plugin.py
\ No newline at end of file
diff -Nru foolscap-0.5.0+dfsg/misc/dapper/debian/changelog foolscap-0.5.1+dfsg/misc/dapper/debian/changelog
--- foolscap-0.5.0+dfsg/misc/dapper/debian/changelog 2010-01-19 01:14:20.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/dapper/debian/changelog 2010-03-26 04:19:00.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/edgy/debian/changelog foolscap-0.5.1+dfsg/misc/edgy/debian/changelog
--- foolscap-0.5.0+dfsg/misc/edgy/debian/changelog 2010-01-19 01:14:20.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/edgy/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/etch/debian/changelog foolscap-0.5.1+dfsg/misc/etch/debian/changelog
--- foolscap-0.5.0+dfsg/misc/etch/debian/changelog 2010-01-19 01:14:20.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/etch/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/feisty/debian/changelog foolscap-0.5.1+dfsg/misc/feisty/debian/changelog
--- foolscap-0.5.0+dfsg/misc/feisty/debian/changelog 2010-01-19 01:14:21.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/feisty/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/gutsy/debian/changelog foolscap-0.5.1+dfsg/misc/gutsy/debian/changelog
--- foolscap-0.5.0+dfsg/misc/gutsy/debian/changelog 2010-01-19 01:14:21.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/gutsy/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/hardy/debian/changelog foolscap-0.5.1+dfsg/misc/hardy/debian/changelog
--- foolscap-0.5.0+dfsg/misc/hardy/debian/changelog 2010-01-19 01:14:21.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/hardy/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/sarge/debian/changelog foolscap-0.5.1+dfsg/misc/sarge/debian/changelog
--- foolscap-0.5.0+dfsg/misc/sarge/debian/changelog 2010-01-19 01:14:21.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/sarge/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/misc/sid/debian/changelog foolscap-0.5.1+dfsg/misc/sid/debian/changelog
--- foolscap-0.5.0+dfsg/misc/sid/debian/changelog 2010-01-19 01:14:21.000000000 +0000
+++ foolscap-0.5.1+dfsg/misc/sid/debian/changelog 2010-03-26 04:19:27.000000000 +0000
@@ -1,3 +1,9 @@
+foolscap (0.5.1) unstable; urgency=low
+
+  * new release
+
+ -- Brian Warner  Thu, 25 Mar 2010 21:18:54 -0700
+
 foolscap (0.5.0) unstable; urgency=low
 
   * new release
diff -Nru foolscap-0.5.0+dfsg/NEWS foolscap-0.5.1+dfsg/NEWS
--- foolscap-0.5.0+dfsg/NEWS 2010-01-19 01:14:19.000000000 +0000
+++ foolscap-0.5.1+dfsg/NEWS 2010-03-26 03:56:38.000000000 +0000
@@ -1,5 +1,20 @@
 User visible changes in Foolscap (aka newpb/pb2). -*- outline -*-
 
+* Release 0.5.1 (25 Mar 2010)
+
+** Bugfixes
+
+This release fixes a significant performance problem, causing receivers a
+very long time (over 10 seconds) to process large (>10MB) messages, for
+example when receiving a large string in method arguments. Receiver CPU time
+was quadratic in the size of the message. (#149)
+
+** Other Changes
+
+This release removes some unused code involved in the now-abandoned
+resource-exhaustion defenses. (#127)
+
+
 * Release 0.5.0 (18 Jan 2010)
 
 ** Compatibility
diff -Nru foolscap-0.5.0+dfsg/PKG-INFO foolscap-0.5.1+dfsg/PKG-INFO
--- foolscap-0.5.0+dfsg/PKG-INFO 2010-01-19 01:14:46.000000000 +0000
+++ foolscap-0.5.1+dfsg/PKG-INFO 2010-03-26 04:29:02.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.0
 Name: foolscap
-Version: 0.5.0
+Version: 0.5.1
 Summary: Foolscap contains an RPC protocol for Twisted.
 Home-page: http://foolscap.lothar.com/trac
 Author: Brian Warner
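The 0.5.1 NEWS entry above attributes the slowdown to receiver CPU time that was quadratic in the size of the message (#149). A self-contained illustration of that failure mode, and of the chunk-list alternative that stringchain.py is built around (this is illustrative Python only, not the code in foolscap/banana.py or foolscap/stringchain.py):

    # Trimming a flat string buffer by slicing copies the whole remainder on
    # every token, so receiving N bytes costs O(N^2) work in total.
    def naive_receiver(chunks, toksize=4):
        buf = ""
        tokens = []
        for c in chunks:
            buf += c
            while len(buf) >= toksize:
                tokens.append(buf[:toksize])
                buf = buf[toksize:]       # re-copies everything still buffered
        return tokens

    # Keeping the incoming pieces in a deque and joining only what is consumed
    # keeps the total work roughly linear; stringchain.py applies the same idea
    # (with more care) to the inbound buffer.
    from collections import deque
    def chained_receiver(chunks, toksize=4):
        parts, size, tokens = deque(), 0, []
        for c in chunks:
            parts.append(c)
            size += len(c)
            while size >= toksize:
                tok = ""
                while len(tok) < toksize:          # gather just enough pieces
                    piece = parts.popleft()
                    need = toksize - len(tok)
                    tok += piece[:need]
                    if len(piece) > need:
                        parts.appendleft(piece[need:])
                tokens.append(tok)
                size -= toksize
        return tokens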