Also, this seems to occasionally cause some annoying file-locking
errors in Windows. Not sure if this is the best fix, but it seems
to make the warnings go away in that spot.
If you called z.flush(); z.write('foo'); then it would try to write 'foo'
before the flush had completed, triggering an assertion in the zlib binding.
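A minimal sketch of the offending call pattern, with the stream name
assumed for illustration:
  var zlib = require('zlib');
  var z = zlib.createGzip();
  z.flush();
  // the flush is asynchronous, so this write used to race it and hit
  // the assertion in the zlib binding
  z.write('foo');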
Closes #4950
Consider the following example:
  console.log(Buffer('ú').toString('ascii'));
Before this commit, the contents of the buffer were used as-is and hence it
prints 'ú'.
Now, it prints 'C:'. Perhaps not much of an improvement but it conforms to what
the documentation says it does: strip off the high bits.
Fixes #4371.
Fix #4948
This adds a check before setting the incoming parser
to null. Under certain circumstances it'll already be set to
null by freeParser().
Otherwise node would crash when it tries to assign null to a property
of something that is itself already null.
child.send can send net servers and sockets. Now that we have support
for dgram clusters this functionality should be extended to include
dgram sockets.
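A hypothetical sketch of what this enables, mirroring the existing net
handle passing (the port and the worker script name are made up):
  var dgram = require('dgram');
  var fork = require('child_process').fork;
  var socket = dgram.createSocket('udp4');
  socket.bind(41234);
  var child = fork('child.js');
  // dgram handles can now be passed just like net servers and sockets
  child.send('udp-socket', socket);
In the child, the socket arrives as the second argument of the
'message' event, just as with net handles.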
This adds the following to HTTP:
* server.setTimeout(msecs, callback)
Sets all new connections to time out after the specified time, at
which point it emits 'timeout' on the server, passing the socket as an
argument.
In this way, timeouts can be handled in one place consistently.
* req.setTimeout(), res.setTimeout()
Essentially an alias to req/res.socket.setTimeout(), but without
having to delve into a "buried" object. Adds a listener on the
req/res object, but not on the socket.
* server.timeout
Number of milliseconds before incoming connections time out.
(Default=1000*60*2, as before.)
Furthermore, if the user sets up their own timeout listener on either
the server, the request, or the response, then the default behavior
(destroying the socket) is suppressed.
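A usage sketch of the API described above (port and timeout values are
arbitrary):
  var http = require('http');
  var server = http.createServer(function(req, res) {
    // per-response timeout; the listener lands on res, not on the socket
    res.setTimeout(30 * 1000, function() {
      res.end();
    });
    res.end('ok\n');
  });
  // one consistent place to handle idle connections; having our own
  // listener suppresses the default behavior of destroying the socket
  server.setTimeout(60 * 1000, function(socket) {
    socket.destroy();
  });
  server.listen(8000);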
Fix #3460
Now that highWaterMark increases when there are large reads, this
greatly reduces the number of calls necessary to _read(size), assuming
that _read actually respects the size argument.
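A sketch of a Readable whose _read honors the size hint, so a larger
highWaterMark translates directly into fewer _read calls (the class
name and byte counts are made up):
  var Readable = require('stream').Readable;
  var util = require('util');
  function Source(opts) {
    Readable.call(this, opts);
    this._remaining = 1024 * 1024;   // emit ~1MB in total
  }
  util.inherits(Source, Readable);
  Source.prototype._read = function(size) {
    var n = Math.min(size, this._remaining);
    if (n === 0)
      return this.push(null);        // done
    this._remaining -= n;
    var chunk = new Buffer(n);
    chunk.fill(0x61);
    this.push(chunk);                // one chunk per _read call
  };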
Don't emit the 'close' event with process.nextTick.
Closing a handle is an operation that usually *but not always* completes
on the next tick of the event loop, hence using process.nextTick is not
reliable.
Use a proper handle close callback and emit the 'close' event from
inside the callback.
Update tests that depend on the intricacies of the old model.
Fixes #3459.
This commit fixes a bug where the cluster module fails to propagate
EADDRINUSE errors.
When a worker starts a (net, http) server, it requests the listen socket
from its master who then creates and binds the socket.
Now, OS X and Windows don't always signal EADDRINUSE from bind() but
instead defer the error until a later syscall. libuv mimics this
behaviour to provide consistent behaviour across platforms but that
means the worker could end up with a socket that is not actually bound
to the requested address.
That's why the worker now checks if the socket is bound, raising
EADDRINUSE if that's not the case.
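A rough illustration of the fixed behavior; the port is arbitrary and
the master occupies it first only to force the collision:
  var cluster = require('cluster');
  var net = require('net');
  if (cluster.isMaster) {
    net.createServer().listen(8000, function() {
      cluster.fork();
    });
  } else {
    var server = net.createServer();
    server.on('error', function(err) {
      console.error(err.code);   // EADDRINUSE now reaches the worker
    });
    server.listen(8000);
  }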
Fixes #2721.
Strict checking for typeof types broke backwards compatibility for other
libraries. This reverts those checks.
The subclass test has been changed to ensure all operations can be
performed on the inherited EE before instantiation, including the
ability to set event names with numbers.
When a readable listener is added, call read(0) so that data will flow in, up to
the high water mark.
Otherwise, it's somewhat confusing that you have to listen for readable,
and ALSO call read() (when it will certainly return null) just to get some
data out of the stream.
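For example, with a file stream reading this very script:
  var fs = require('fs');
  var rs = fs.createReadStream(__filename);
  // adding the listener now primes the stream with read(0), so data is
  // pulled in up to the high water mark before 'readable' fires
  rs.on('readable', function() {
    var chunk;
    while ((chunk = rs.read()) !== null)
      console.log('got %d bytes', chunk.length);
  });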
See: #4720
A typo in the variable name makes it throw a ReferenceError instead of
the expected "Unknown type" error when dns.resolve() is passed a bad
record type argument.
Fixes the following exception:
  ReferenceError: type is not defined
      at Object.exports.resolve (dns.js:189:40)
      at /Users/bnoordhuis/src/master/test/simple/test-c-ares.js:48:9
  <snip>
Calling end(data) calls write(data). Doing this after end should
raise a 'write after end' error.
However, because end() calls were previously ignored on already
ended streams, this error was confusingly suppressed, even though the
data never is written, and cannot get to the other side.
This is a re-hash of 5222d19a11, but
without assuming that the data passed to end() is valid, and thus
breaking a bunch of tests.
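A minimal sketch of the sequence, using a trivial Writable subclass:
  var Writable = require('stream').Writable;
  var ws = new Writable();
  ws._write = function(chunk, encoding, cb) { cb(); };
  ws.on('error', function(err) {
    console.error(err.message);   // 'write after end'
  });
  ws.end('first');
  ws.end('second');               // previously ignored silently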
The try/catch in repl.js keeps any active domain from catching the
error. Since the domain may not even be entered until the code is run,
it's not possible to avoid the try/catch, so emit on the domain when an
error is thrown.
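A simplified sketch of the idea, with hypothetical names standing in
for the repl's eval step:
  var vm = require('vm');
  function replEval(code, context, cb) {
    var err, result;
    try {
      result = vm.runInContext(code, context);
    } catch (e) {
      err = e;
    }
    if (err && process.domain) {
      // hand the error to the active domain instead of swallowing it
      process.domain.emit('error', err);
    } else {
      cb(err, result);
    }
  }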
Calling end(data) calls write(data). Doing this after end should
raise a 'write after end' error.
However, because end() calls were previously ignored on already
ended streams, this error was confusingly suppressed, even though the
data never is written, and cannot get to the other side.
The stock writable stream "write after end" message is overly vague, if
you have clearly not called end() yourself yet.
When we receive a FIN from the other side, and call destroySoon() as a
result, then generate an EPIPE error (which is what would happen if you
did actually write to the socket), with a message explaining what
actually happened.
By making sure the _events is always an object there is one less check
that needs to be performed by emit.
Use undefined instead of null. typeof checks are a lot faster than
isArray.
There are a few places where the this._events check cannot be removed
because it is possible for the user to call those methods after using
utils.extend to create their own EventEmitter, but before it has
actually been instantiated.
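A simplified, standalone illustration of the fast path (not the actual
lib/events.js code):
  function MiniEmitter() {
    this._events = {};            // always an object, never undefined/null
  }
  MiniEmitter.prototype.on = function(type, fn) {
    this._events[type] = fn;      // single listener only, for brevity
    return this;
  };
  MiniEmitter.prototype.emit = function(type) {
    var handler = this._events[type];   // no existence guard needed
    if (handler === undefined)          // undefined check, no isArray
      return false;
    handler.apply(this, Array.prototype.slice.call(arguments, 1));
    return true;
  };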
This makes it so that `stream.push(chunk)` is the only way to signal the
end of reading, removing the confusing disparity between the
callback-style _read method, and the fact that most real-world streams
do not have a 1:1 correlation between the "please give me data" event,
and the actual arrival of a chunk of data.
It is still possible, of course, to implement a `CallbackReadable` on
top of this. Simply provide a method like this as the callback:
  function readCallback(er, chunk) {
    if (er)
      stream.emit('error', er);
    else
      stream.push(chunk);
  }
However, *only* fs streams actually would behave in this way, so it
makes little sense to force TCP, TLS, HTTP, and all the rest
to bend into this uncomfortable paradigm.
Don't emit a 'connect' event on sockets that are handed off to
net.Server 'connection' event listeners.
1. It's superfluous because the connection has already been established
at that point.
2. The implementation is arguably wrong because the event is emitted on
the same tick of the event loop while the rule of thumb is to always
emit it on the next one.
This has been tried before in commit f0a440d but was reverted again in
ede1acc because the change was incomplete (at least one test hadn't
been updated).
Fixes #1047 (again).
The output of `id -G` is unreliable on OS X. It uses an undocumented
Libsystem function called getgrouplist_2() that includes some auxiliary
groups that the POSIX getgroups() function does not return.
Or rather, not always. It leads to fun bug chases where the test fails
in one terminal but not in another.
It's cleaner to only load domain ticker logic when the domains are being
used. This makes execution slightly quicker in both cases, and simpler
for the spinner since there is no need to check if the latest callback
requires use of domains.
Not necessary, since we can handle the error properly on the first tick
now, even if there are event listeners, etc.
Additionally, this removes the unnecessary "_needTickCallback" from
startup, since Module.loadMain() will kick off a nextTick callback right
after it runs the main module.
Fix #4856
This handles the fact that stream.Writable inherits from the Stream class,
meaning that it has the legacy pipe() method. Override that with a pipe()
method that emits an error.
Ensure that Duplex streams ARE still pipe()able, however.
Since the 'readable' flag on streams is sometimes temporary, it's probably
better not to put too much weight on that. But if something is an instanceof
Writable, rather than of Readable or Duplex, then it's safe to say that
reading from it is the wrong thing to do.
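A sketch of the resulting behavior, using bare Writable and Duplex
instances with stub _read/_write implementations:
  var stream = require('stream');
  var w = new stream.Writable();
  w.on('error', function(err) {
    console.error(err.message);   // pipe() on a plain Writable errors out
  });
  w.pipe(process.stdout);
  var d = new stream.Duplex();
  d._read = function() { this.push(null); };
  d._write = function(chunk, encoding, cb) { cb(); };
  d.pipe(process.stdout);         // Duplex streams remain pipe()able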
Fix #3647
It is not a valid test unless you're connected to the internet, and causes
a lot of spurious failures on Linux anyway, as it's highly dependent on
timing of things that we don't have any control over.
This makes the output of simple/test-debugger-repl and
simple/test-debugger-repl-utf8 mirror an actual debugger session, so it's
a bit easier to reason about.
Also, it uses the same code for both, and fixes it so that it doesn't
leave zombie processes lying around when it crashes.
Run 1000 times without any failures or zombies.
The CI system requires that some environment variables are set so merge
our variables into the current environment instead of blindly replacing
it.
This will probably have to be repeated for other tests. C'est la vie.
Commit 9901b69c introduces a small regression where the trailing base64
padding is no longer written out when Cipher#final is called. Rectify
that.
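For instance (cipher and key chosen arbitrarily), the base64 output now
ends with its padding again:
  var crypto = require('crypto');
  var cipher = crypto.createCipher('aes-256-cbc', 'secret');
  var out = cipher.update('hello', 'utf8', 'base64');
  out += cipher.final('base64');   // trailing '=' padding is back
  console.log(out);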
Fixes #4837.
There are cases where a push() call would return true, even though
the thing being pushed was in fact way way larger than the high
water mark, simply because the 'needReadable' was already set, and
would not get unset until nextTick.
In some cases, this could lead to an infinite loop of pushing data
into the buffer, never getting to the 'readable' event which would
unset the needReadable flag.
Fix by splitting up the emitReadable function, so that it always
sets the flag on this tick, even if it defers until nextTick to
actually emit the event.
Also, if we're not ending or already in the process of reading, it
now calls read(0) if we're below the high water mark. Thus, the
highWaterMark value is the intended amount to buffer up to, and it
is smarter about hitting the target.
It seems like a good idea on the face of it, but lowWaterMarks are
actually not useful, and in practice should always be set to zero.
It would be worthwhile for writers if we actually did some kind of
writev() type of thing, but actually this just delays calling write()
and the overhead of doing a bunch of Buffer copies is not worth the
slight benefit of calling write() fewer times.
lib/path.js:
- the internal argument filter now throws a TypeError if an argument is
not a string.
test/simple/test-path.js:
- removed the test to check if non-string types are filtered.
- added a test to check if path.join throws TypeError on arguments that
are not strings.
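For example, under the new behavior:
  var path = require('path');
  console.log(path.join('a', 'b'));   // 'a/b' ('a\\b' on Windows)
  path.join('a', {}, 'b');            // now throws a TypeError instead of
                                      // silently dropping the object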
lib/http.js uses stream._handle.readStart/readStop to control the flow
of data coming out of the underlying stream. If these methods are not
present, data might be buffered regardless of whether it'll be read.
see #4657
* Callbacks from spinner now calls its own function, separate from the
tickCallback logic
* MakeCallback will call a domain specific function if a domain is
detected
* _tickCallback assumes no domains, until nextTick receives a callback
with a domain. After that _tickCallback is overridden with the domain
specific implementation.
* _needTickCallback runs in startup() instead of nextTick (isaacs)
* Fix bug in _fatalException where exit would be called twice (isaacs)
* Process.domain has a default value of null
* Manually track nextTickQueue.length (will be useful later)
* Update tests to reflect internal api changes
This reverts commit 0109a9f90a.
Also included: Port all the changes to process._makeCallback into the
C++ version. Immediate nextTick, etc.
This yields a slight boost in several benchmarks. V8 is optimizing and
deoptimizing process._makeCallback repeatedly.
Prior to v0.10, Node ignored ECONNRESET errors in many situations.
There *are* valid cases in which ECONNRESET should be ignored as a
normal part of the TCP dance, but in many others, it's a very relevant
signal that must be heeded with care.
Exacerbating this problem, if the OutgoingMessage does not have a
req.connection._handle, it assumes that it is in the process of
connecting, and thus buffers writes up in an array.
The problem happens when you reuse a socket between two requests, and it
is destroyed abruptly in between them. The writes will be buffered,
because the socket has no handle, but it's not ever going to GET a
handle, because it's not connecting, it's destroyed.
The proper fix is to treat ECONNRESET correctly. However, this is a
behavior/semantics change, and cannot land in a stable branch.
Fix #4775
Previously, we were only destroying sockets on end if their readable
side had already been ended. This causes a problem for non-readable
streams, since we don't expect to ever see an 'end' event from those.
Treat the lack of a 'readable' flag the same as if it was an ended
readable stream.
Fix #4751
node 0.9.6 introduced Buffer changes that cause the key argument of
Hmac::HmacInit (used in crypto.createHmac) to be NULL when the key is
empty. This argument is passed to OpenSSL's HMAC_Init, which does not
like NULL keys.
This change works around the issue by passing an empty string to
HMAC_Init when the key is empty, and adds crypto.createHmac tests for
the edge cases of empty keys and values.
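The edge cases in question look like this:
  var crypto = require('crypto');
  var hmac = crypto.createHmac('sha256', '');   // empty key no longer breaks
  hmac.update('');                              // empty value is fine as well
  console.log(hmac.digest('hex'));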
Let ECONNRESET network errors bubble up so clients can detect them.
Commit c4454d2e suppressed and turned them into regular end-of-stream
events to fix the then-failing simple/test-regress-GH-1531 test. See
also issue #1571 for (scant) details.
It turns out that special handling is no longer necessary. Remove the
special casing and let the error bubble up naturally.
pummel/test-https-ci-reneg-attack and pummel/test-tls-ci-reneg-attack
are updated because they expected an EPIPE error code that is now an
ECONNRESET. Suppression of the ECONNRESET prevented the test from
detecting that the connection has been severed whereupon the next
write would fail with an EPIPE.
Fixes #1776.
Fix an exception that was raised when the WriteStream was closed
immediately after creating it:
  TypeError: Cannot read property 'fd' of undefined
      at WriteStream.close (fs.js:1537:18)
  <snip>
Avoid the TypeError and make sure the file descriptor is closed.
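The sequence that used to throw looks roughly like this (the file name
is made up):
  var fs = require('fs');
  var ws = fs.createWriteStream('example.log');
  ws.close();   // closing right away no longer throws the TypeError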
Fixes #4745.
Don't run the 'has function been called?' checks if the test is exiting
with an error because a failed check will mask the real exception.
v0.8 doesn't have the _fatalException machinery in src/node.js and
src/node.cc so it doesn't have this issue.
Convert the Buffer to an ArrayBuffer. The typed_array.buffer property
should be an ArrayBuffer to avoid confusion: a Buffer doesn't have a
byteLength property and more importantly, its slice() method works
subtly differently.
That means that before this commit:
  var buf = new Buffer(1);
  var arr = new Int8Array(buf);
  assert.equal(arr.buffer, buf);
  assert(arr.buffer instanceof Buffer);
And now:
  var buf = new Buffer(1);
  var arr = new Int8Array(buf);
  assert.notEqual(arr.buffer, buf);
  assert(arr.buffer instanceof ArrayBuffer);