4716dc6 made assert.equal() and related functions work better by
generating a better toString() from the expected, actual, and operator
values passed to fail(). Unfortunately, this was accomplished by
storing the generated message in the error's `name` property. When you
passed in a custom error message, the custom message ended up in both
`name` *and* `message`, resulting in helpful string representations like
"AssertionError: Oh no: Oh no".
This commit resolves that issue by storing the generated message in the
`message` property while leaving the error's name alone and adding
a regression test so that this doesn't pop back up later.
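For illustration, a sketch of the symptom and the fixed behavior:
    var assert = require('assert');
    try {
      assert.equal(1, 2, 'Oh no');
    } catch (err) {
      // Before: String(err) gave "AssertionError: Oh no: Oh no".
      // After: err.name stays "AssertionError", err.message is "Oh no".
      console.log(String(err));
    }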
Closes #5292.
I broke dgram.Socket#bind(port, cb) almost a year ago in 332fea5a but
it wasn't until today that someone complained and none of the tests
caught it because they all either specify the address or omit the
callback.
Anyway, now it works again and does what you expect: it binds the
socket to the "any" address ("0.0.0.0" for IPv4 and "::" for IPv6).
process.stdout isn't fully initialized yet by the time the test starts
when invoked with `python tools/test.py`. Use process.stdin instead and
force initialization with process.stdin.resume().
Fix a NULL pointer dereference in src/handle_wrap.cc which is really a
use-after-close bug.
The test checks that unref() after close() works on process.stdout but
this bug affects everything that derives from HandleWrap. I discovered
it because child processes would sometimes quit for no reason (that is,
no reason until I turned on core dumps).
When LD_LIBRARY_PATH is overridden for custom builds, we need to
preserve it for child processes. To be sure, we preserve the whole
environment of the parent process and just add the TEST_INIT variable
to it.
When LD_LIBRARY_PATH is overridden for custom builds, we need to
preserve it for the forked process. There are possibly other
environment variables that could cause test failures, so we preserve
the whole environment of the parent process.
Fix #5272
The consumption of a readable stream is a dance with 3 partners.
1. The specific stream Author (A)
2. The Stream Base class (B), and
3. The Consumer of the stream (C)
When B calls the _read() method that A implements, it sets a 'reading'
flag, so that parallel calls to _read() can be avoided. When A calls
stream.push(), B knows that it's safe to start calling _read() again.
Sometimes the consumer C is a parser that wants, in some cases, to
pass the source stream off to some other party, but not before "putting
back" some bit of previously consumed data (as in the case of Node's
websocket http upgrade implementation). So, stream.unshift() will
generally *never* be called by A, but *only* by C.
Prior to this patch, stream.unshift() *also* unset the state.reading
flag, meaning that C could indicate the end of a read, and B would
dutifully fire off another _read() call to A. This is inappropriate.
In the case of fs streams, and other variably-laggy streams that don't
tolerate overlapped _read() calls, this causes big problems.
Also, calling stream.unshift() after the 'end' event did not raise any
kind of error, but would cause very strange behavior indeed. Calling it
after the EOF chunk was seen, but before the 'end' event was fired,
would also cause weird behavior and could lead to data being lost,
since it would not emit another 'readable' event.
This change makes it so that:
1. stream.unshift() does *not* set state.reading = false
2. stream.unshift() is allowed up until the 'end' event.
3. unshifting onto an EOF-encountered and zero-length (but not yet
end-emitted) stream will defer the 'end' event until the new data is
consumed.
4. pushing onto an EOF-encountered stream is now an error.
So, if you read(), you have that single tick to safely unshift() data
back into the stream, even if the null chunk was pushed, and the length
was 0.
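A sketch of the pattern this enables for C (findBoundary and handOff
are hypothetical names):
    var chunk = source.read();
    if (chunk !== null) {
      var used = findBoundary(chunk);     // hypothetical parser helper
      source.unshift(chunk.slice(used));  // put back the unconsumed tail
      handOff(source);                    // hypothetical: pass the stream on
    }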
Buffer.byteLength() works only for string inputs. Thus, when a
connection has a pending Buffer to write, it should just use its length
instead of throwing an exception.
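A sketch of the guard (illustrative, not the exact patch; data and
encoding stand in for the values on the write path):
    var len = Buffer.isBuffer(data) ? data.length
                                    : Buffer.byteLength(data, encoding);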
Since 049903e, an end callback could be called before a write
callback if end() is called before the write is done. This patch
resolves the issue.
In collaboration with @gne
Fixes felixge/node-formidable#209
Fixes #5215
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
An unusual edge case, but certainly a bug.
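A small illustration of why the concatenation is unsafe for a
hex-encoded chunk:
    // '68690d0a' decodes to 'hi\r\n', but appending a literal CRLF to
    // the hex string corrupts the payload, since '\r\n' is not hex:
    new Buffer('68690d0a', 'hex');      // <Buffer 68 69 0d 0a>
    new Buffer('6869' + '\r\n', 'hex'); // not the bytes you wanted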
DH_compute_secret() may return a key that is smaller than the input
buffer; in such cases, the key should be left-padded because it is a BN
(big number).
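Conceptually, the padding looks like this (a JS sketch, not the actual
C++ fix; key and expectedLen are illustrative names):
    if (key.length < expectedLen) {
      var padded = new Buffer(expectedLen);
      padded.fill(0, 0, expectedLen - key.length);  // zero the left pad
      key.copy(padded, expectedLen - key.length);
      key = padded;
    }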
Fix #5239
If an http response has an 'end' handler that throws, then the socket
will never be released back into the pool.
Granted, we do NOT guarantee that throwing will never have adverse
effects on Node internal state. Such a guarantee cannot be reasonably
made in a shared-global mutable-state side-effecty language like
JavaScript. However, in this case, it's a rather trivial patch to
increase our resilience a little bit, so it seems like a win.
There is no semantic change in this case, except that some event
listeners are removed, and the `'free'` event is emitted on nextTick, so
that you can schedule another request which will re-use the same socket.
From the user's point of view, there should be no detectable difference.
Closes #5107
The tests did not agree with the test comments. The first and second
tests were both testing the !state.reading case. Now the second tests
the state.reading && state.length case.
Fixes joyent/node#5183
In cases where a stream may have data added to the read queue before the
user adds a 'readable' event, there is never any indication that it's
time to start reading.
True, there's already data there, which the user would get if they
checked. However, as we use 'readable' event listening as the signal to
start the flow of data with a read(0) call internally, we ought to
trigger the same effect (i.e., emitting a 'readable' event) even if the
'readable' listener is added after the first emission.
To avoid confusing weirdness, only the *first* 'readable' event listener
is granted this privileged status. After we've started the flow (or,
alerted the consumer that the flow has started) we don't need to start
it again. At that point, it's the consumer's responsibility to consume
the stream.
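A sketch of the scenario this addresses:
    var Readable = require('stream').Readable;
    var r = new Readable();
    r._read = function() {};  // trivial source, for illustration only
    r.push('early data');     // queued before anyone is listening
    setTimeout(function() {
      // The first 'readable' listener now fires even though the data
      // arrived long before it was attached.
      r.on('readable', function() {
        console.log(r.read().toString());
      });
    }, 100);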
Closes #5141
An llvm/clang bug on Darwin ia32 makes these tests fail 100% of
the time. Since no one really seems to mind overly much, and we
can't reasonably fix this in node anyway, just accept both types
of NaN for now.
Calling `this.pair.encrypted._internallyPendingBytes()` before
handling/resetting error will result in assertion failure:
    ../src/node_crypto.cc:962: void node::crypto::Connection::ClearError():
    Assertion `handle_->Get(String::New("error"))->BooleanValue() == false'
    failed.
See #5058.
Since _tickCallback and _tickDomainCallback were both called from
MakeCallback, it was possible for a callback that required a domain to
be passed directly to _tickCallback.
The fix was to implement process.usingDomains(). This sets all
applicable functions to their domain counterparts and sets a flag in
the C++ code to let MakeCallback know that domain callbacks always need
to be checked.
Added a test in its own file; it's important that the test remains
isolated.
It's possible to read multiple messages off the parent/child channel.
When that happens, make sure that recvHandle is cleared after emitting
the first message so it doesn't get emitted twice.
Commit f53441a added crypto.getCiphers() as a function that returns the
names of SSL ciphers.
Commit 14a6c4e then added crypto.getHashes(), which returns the names of
digest algorithms, but that creates a subtle inconsistency: the return
values of crypto.getHashes() are valid arguments to crypto.createHash()
but that is not true for crypto.getCiphers() - the returned values are
only valid for SSL/TLS functions.
Rectify that by adding tls.getCiphers() and making crypto.getCiphers()
return proper cipher names.
In process#send() and child_process.ChildProcess#send(), use 'utf8' as
the encoding instead of 'ascii' because 'ascii' mutilates non-ASCII
input. Correctly handle partial character sequences by introducing
a StringDecoder.
Sending over UTF-8 no longer works in v0.10 because the high bit of
each byte is now cleared when converting a Buffer to ASCII. See
commit 96a314b for details.
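In a nutshell, a multi-byte character can straddle two reads from the
channel, which is what the StringDecoder handles. An illustration:
    var StringDecoder = require('string_decoder').StringDecoder;
    var decoder = new StringDecoder('utf8');
    var buf = new Buffer('déjà', 'utf8');         // 6 bytes, 4 characters
    console.log(decoder.write(buf.slice(0, 2)));  // 'd' (holds partial 'é')
    console.log(decoder.write(buf.slice(2)));     // 'éjà'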
Fixes#4999 and #5011.
Throw a TypeError if size > 0x3fffffff. Avoids the following V8 fatal
error:
    FATAL ERROR: v8::Object::SetIndexedPropertiesToExternalArrayData()
    length exceeds max acceptable value
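A minimal illustration:
    new Buffer(0x3fffffff + 1); // now throws a TypeError instead of
                                // aborting the process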
Fixes #5126.
The stall is exposed in the test, though the test itself asserts before
it stalls.
The test is constructed to replicate the stalling state of a complex
PassThrough use case, since I was not able to reliably trigger the
stall.
Some of the preconditions for triggering the stall are:
* rs.length >= rs.highWaterMark
* !rs.needReadable
* _transform() handler that can return empty transforms
* multiple sync write() calls
Combined, these can trigger a case where rs.reading is not cleared
even though further progress requires it. The fix is to always clear
rs.reading.
Before this patch, calling `socket.setTimeout(0xffffffff)` resulted in
a signed int32 overflow in C++, which triggered an assertion error:
    Assertion failed: (timeout >= -1), function uv__io_poll, file
    ../deps/uv/src/unix/kqueue.c, line 121.
See #5101.
Also, set paused=false *before* calling resume(). Otherwise,
there's an edge case where an immediately-emitted chunk might make
it call pause() again incorrectly.
Commit a804347 makes fs functions rethrow errors when the callback is
omitted. While that's the right thing to do, it's a change from the old
v0.8 behavior where such errors were silently ignored.
To give users time to upgrade, temporarily disable that and replace it
with a function that warns once about the deprecated behavior.
Close #5005
This solves the problem of calling `readable.pipe(writable)` after the
readable stream has already emitted 'end', as often is the case when
writing simple HTTP proxies.
The spirit of streams2 is that things will work properly, even if you
don't set them up right away on the first tick.
This approach breaks down, however, because pipe()ing from an ended
readable will just do nothing. No more data will ever arrive, and the
writable will hang open forever never being ended.
However, that does not solve the case of adding an `on('end')` listener
after the stream has received the EOF chunk, if it was the first chunk
received (and thus, length was 0, and 'end' got emitted). So, with
this, we defer the 'end' event emission until the read() function is
called.
Also, in pipe(), if the source has emitted 'end' already, we call the
cleanup/onend function on nextTick. Piping from an already-ended stream
is thus the same as piping from a stream that is in the process of
ending.
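For instance, a proxy sketch (proxyReq stands in for an assumed
upstream request):
    var http = require('http');
    http.createServer(function(req, res) {
      setTimeout(function() {
        // Even if req emitted 'end' before we got here, pipe() now
        // cleans up on nextTick instead of leaving proxyReq hanging.
        req.pipe(proxyReq);
      }, 100);
    });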
Updates many tests that were relying on 'end' coming immediately, even
though they never read() from the req.
Fix #4942
The function that pre-emptively fills the Readable queue relies on a
recursion through:
    stream.push(chunk) ->
      maybeReadMore(stream, state) ->
        if (not reading more and < hwm) stream.read(0) ->
          stream._read() ->
            stream.push(chunk) -> repeat.
Since this was only calling read() a single time, and then relying on a
future nextTick to collect more data, it ends up causing a nextTick
recursion error (and potentially a RangeError, even) if you have a very
high highWaterMark, and are getting very small chunks pushed
synchronously in _read (as happens with TLS, or many simple test
streams).
This change implements a new approach, so that read(0) is called
repeatedly as long as it is effective (that is, the length keeps
increasing), and thus quickly fills up the buffer for streams such as
these, without any stacks overflowing.
So that `ee.emit('error')` doesn't throw when domains are active.
Create an empty error only when the event is handled by a domain.
Add a test for when no error is provided to an 'error' event.
Fix #4948
This adds a check before setting the incoming parser
to null. Under certain circumstances it'll already be set to
null by freeParser().
Otherwise this will cause node to crash as it tries to set
null on something that is already null.
When calling setImmediate with extra arguments, the `this` keyword in
the callback would refer to the global object, but when not calling
setImmediate with extra arguments, `this` would refer to the returned
handle object.
This commit fixes that inconsistency so `this` is always set to the
handle object. The handle object was chosen for performance reasons.
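A sketch of the now-consistent behavior:
    var h1 = setImmediate(function() {
      console.log(this === h1); // true
    });
    var h2 = setImmediate(function(arg) {
      console.log(this === h2); // true (used to be the global object)
    }, 'extra');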
Also, this seems to occasionally cause some annoying file-locking
errors on Windows. Not sure if this is the best fix, but it seems
to make the warnings go away in that spot.
If you call z.flush(); z.write('foo'); then it would try to write 'foo'
before the flush was done, triggering an assertion in the zlib binding.
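A sketch of the pattern that used to trip the assertion:
    var zlib = require('zlib');
    var z = zlib.createGzip();
    z.flush();      // the write below is now queued until this completes
    z.write('foo'); // previously raced the flush and hit the assertion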
Closes #4950
Consider the following example:
    console.log(Buffer('ú').toString('ascii'));
Before this commit, the contents of the buffer were used as-is, and
hence it prints 'ú'.
Now, it prints 'C:'. Perhaps not much of an improvement, but it
conforms to what the documentation says it does: strip off the high
bits.
Fixes #4371.
child.send() can send net servers and sockets. Now that we have support
for dgram clusters, this functionality should be extended to include
dgram sockets.
This adds the following to HTTP:
* server.setTimeout(msecs, callback)
Sets all new connections to time out after the specified time, at
which point it emits 'timeout' on the server, passing the socket as an
argument.
In this way, timeouts can be handled in one place consistently.
* req.setTimeout(), res.setTimeout()
Essentially an alias to req/res.socket.setTimeout(), but without
having to delve into a "buried" object. Adds a listener on the
req/res object, but not on the socket.
* server.timeout
Number of milliseconds before incoming connections time out.
(Default=1000*60*2, as before.)
Furthermore, if the user sets up their own timeout listener on the
server, the request, or the response, then the default behavior
(destroying the socket) is suppressed.
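A usage sketch:
    var http = require('http');
    var server = http.createServer(function(req, res) {
      res.end('ok');
    });
    // Time connections out after 30 seconds; passing our own listener
    // suppresses the default destruction of the socket.
    server.setTimeout(30 * 1000, function(socket) {
      socket.destroy();
    });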
Fix #3460
Now that highWaterMark increases when there are large reads, this
greatly reduces the number of calls necessary to _read(size), assuming
that _read actually respects the size argument.
Don't emit the 'close' event with process.nextTick.
Closing a handle is an operation that usually *but not always* completes
on the next tick of the event loop, hence using process.nextTick is not
reliable.
Use a proper handle close callback and emit the 'close' event from
inside the callback.
Update tests that depend on the intricacies of the old model.
Fixes #3459.
This commit fixes a bug where the cluster module fails to propagate
EADDRINUSE errors.
When a worker starts a (net, http) server, it requests the listen socket
from its master who then creates and binds the socket.
Now, OS X and Windows don't always signal EADDRINUSE from bind() but
instead defer the error until a later syscall. libuv mimics this
behaviour to provide consistency across platforms, but that
means the worker could end up with a socket that is not actually bound
to the requested address.
That's why the worker now checks if the socket is bound, raising
EADDRINUSE if that's not the case.
Fixes #2721.
Strict checking for typeof types broke backwards compatibility for other
libraries. This reverts those checks.
The subclass test has been changed to ensure all operations can be
performed on the inherited EE before instantiation, including the
ability to set event names with numbers.
When a readable listener is added, call read(0) so that data will flow in, up to
the high water mark.
Otherwise, it's somewhat confusing that you have to listen for readable,
and ALSO call read() (when it will certainly return null) just to get some
data out of the stream.
See #4720.
A typo in the variable name makes it throw a ReferenceError instead of
the expected "Unknown type" error when dns.resolve() is passed a bad
record type argument.
Fixes the following exception:
    ReferenceError: type is not defined
        at Object.exports.resolve (dns.js:189:40)
        at /Users/bnoordhuis/src/master/test/simple/test-c-ares.js:48:9
        <snip>
Calling end(data) calls write(data). Doing this after end should
raise a 'write after end' error.
However, because end() calls were previously ignored on already
ended streams, this error was confusingly suppressed, even though the
data is never written and cannot reach the other side.
This is a re-hash of 5222d19a11, but
without assuming that the data passed to end() is valid, and thus
breaking a bunch of tests.
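A sketch of the changed behavior, where writable is any Writable stream:
    writable.end('first');
    writable.end('late');  // now emits a 'write after end' error instead
                           // of silently discarding 'late'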
The try/catch in repl.js keeps any active domain from catching the
error. Since the domain may not even be entered until the code is run,
it's not possible to avoid the try/catch, so emit on the domain when an
error is thrown.
Calling end(data) calls write(data). Doing this after end should
raise a 'write after end' error.
However, because end() calls were previously ignored on already
ended streams, this error was confusingly suppressed, even though the
data is never written and cannot reach the other side.
The stock writable stream "write after end" message is overly vague if
you have clearly not called end() yourself yet.
When we receive a FIN from the other side and call destroySoon() as a
result, we now generate an EPIPE error (which is what would happen if
you did actually write to the socket), with a message explaining what
actually happened.
By making sure _events is always an object, there is one less check
that needs to be performed by emit.
Use undefined instead of null. typeof checks are a lot faster than
isArray.
There are a few places where the this._events check cannot be removed
because it is possible for the user to call those methods after using
utils.extend to create their own EventEmitter, but before it has
actually been instantiated.
This makes it so that `stream.push(chunk)` is the only way to signal the
end of reading, removing the confusing disparity between the
callback-style _read method, and the fact that most real-world streams
do not have a 1:1 correlation between the "please give me data" event,
and the actual arrival of a chunk of data.
It is still possible, of course, to implement a `CallbackReadable` on
top of this. Simply provide a method like this as the callback:
    function readCallback(er, chunk) {
      if (er)
        stream.emit('error', er);
      else
        stream.push(chunk);
    }
However, *only* fs streams would actually behave in this way, so it
makes little sense to force TCP, TLS, HTTP, and all the rest to bend
into this uncomfortable paradigm.