This is the same change as commit 01ee551, but for the DataView type this time.
Make the behavior of DataView consistent with that of typed arrays:
make a copy of the backing store.
Otherwise sockets that are 'finish'ed won't be unpiped and a `writing to
ended stream` error will arise.
This might sound unrealistic, but it happens in net.js. When
`socket.allowHalfOpen === false`, EOF will cause a `.destroySoon()` call which
ends the writable side of net.Socket.
Check that having a worker bind to a port that's already taken doesn't
leave the master process in a confused state. Releasing the port and
trying again should Just Work[TM].
Follow browser behavior: only share the backing store when it's an
ArrayBuffer. That is:
var abuf = new ArrayBuffer(32);
var a = new Int8Array(abuf);
var b = new Int8Array(abuf);
a[0] = 0;
b[0] = 1;
assert(a[0] === b[0]); // a and b share memory
But:
var a = new Int8Array(32);
var b = new Int8Array(a);
a[0] = 0;
b[0] = 1;
assert(a[0] !== b[0]); // a and b don't share memory
The typed arrays spec allows both `a[0] === b[0]` and `a[0] !== b[0]`
but Chrome and Firefox implement the behavior where memory is not
shared.
Copying the memory is less efficient but let's do it anyway for the
sake of the Principle of Least Surprise.
Fixes #4714.
This is more backwards-compatible with stream1 streams like `fs.WriteStream`
which would allow a callback function to be passed in as the only argument.
Closes #4719.
If the NODE_DEBUGGER_TIMEOUT environment variable is set, then use
that as the number of ms to wait for the debugger to start.
This is primarily to work around a race condition that almost never
happens in real usage with the debugger, but happens EVERY FRACKING
TIME when the debugger tests run as part of 'make test'.
An empty string or empty buffer, if passed to the _read() cb, will not
signal an EOF. Only null or undefined will mark the end of data and
trigger the 'end' event.
However, great care must be taken if you are returning an empty string
or buffer! There must be some other thing somewhere that will trigger
a read() call, because there will never be a 'readable' event fired later.
This is in preparation for CryptoStreams being ported to streams2, where
it is safe to simply stop reading, because the crypto cycle process will
cause it to read(0) again at some future date.
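A minimal illustrative sketch of that rule, using the streams2 push() API
(the extra read(0) calls stand in for the "other thing" that has to
trigger a read after an empty push):
  var Readable = require('stream').Readable;
  var r = new Readable();
  var sentEmpty = false;
  r._read = function() {
    if (!sentEmpty) {
      sentEmpty = true;
      this.push('');      // empty string: not treated as EOF
    } else {
      this.push(null);    // null (or undefined): marks the end of data
    }
  };
  r.on('end', function() {
    console.log("'end' fires only after push(null)");
  });
  r.read(0);  // triggers the push('') above; no 'readable' event follows
  r.read(0);  // a read must be triggered by something else; here, by hand
  r.read(0);  // once null has been pushed, this read lets 'end' be emitted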
Make lines ending \r\n emit one 'line' event, not two (where the second
one is an empty string).
This adds a new keypress name: 'return' (as in: 'carriage return').
Fixes #3305.
This commit breaks simple/test-stream2-stderr-sync. Need to figure out
a better way, or just accept that `(function W(){stream.write(b,W)})()`
is going to be noisy. People should really be using the `'drain'` event
for this use-case anyway.
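For reference, a sketch of the 'drain'-based pattern (names are
illustrative), which avoids the noisy recursive write callback:
  function writeRepeatedly(stream, chunk) {
    function write() {
      var ok = true;
      while (ok) ok = stream.write(chunk);  // write until the buffer is full
      stream.once('drain', write);          // resume once it drains
    }
    write();
  }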
This reverts commit 02f7d1bfd8.
Always defer the _write callback. The optimization here was only
relevant in some oddball edge cases that we don't actually care about.
Our benchmarks confirm that just always deferring the Socket._write cb
is perfectly fine to do, and in some cases, even slightly more
performant.
The refactor in b43e544140 to use
stream.push() in Transform inadvertently caused it to immediately
consume all the written data, regardless of whether or not the readable
side was being consumed.
Only pull data through the _transform() process when the readable side
is being consumed.
Fix #4667
TCPWrap::Initialize() and PipeWrap::Initialize() should be called before
any data is read from a received socket. But because these bindings are
initialized lazily, the Initialize() method isn't called.
Initialize the bindings manually upon receiving a socket.
See #4669
Fix the following OOM error in pummel/test-net-connect-memleak
and pummel/test-tls-connect-memleak:
FATAL ERROR: CALL_AND_RETRY_0 Allocation failed - process out of
memory
Commit v8/v8@91afd39 increases the size of the deoptimization table
to the extent that a 64M float array pushes it over the brink. Switch
to SMIs so it stays below the limit.
pummel/test-net-connect-memleak is still failing albeit with a different
error this time. Needs further investigation.
=== release test-net-connect-memleak ===
Path: pummel/test-net-connect-memleak
-64 kB reclaimed
assert.js:102
throw new assert.AssertionError({
^
AssertionError: false == true
at done [as _onTimeout] (/home/bnoordhuis/src/nodejs/master/
test/pummel/test-net-connect-memleak.js:48:3)
at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
at process._makeCallback (node.js:306:20)
mainly to allow native addons to export single functions on
module.exports rather than being restricted to operating on an existing
exports object.
Init functions now receive exports as the first argument, like
before, but also the module object as the second argument, if they
support it.
Related to #4634
cc: @rvagg
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if Argument parameters are defined before
assigning their values. This offered a ~3x performance gain.
Backport of 16bbecc from master branch. Closes #4633.
Argument checks were simplified by setting all undefined/NaN or out of
bounds values equal to their defaults.
Also copy() tests had a flaw that each buffer had the same bit pattern at
the same offset. So even if the copy failed, the bit-by-bit comparison
would have still been true. This was fixed by filling each buffer with a
unique value before copy operations.
Fix issue where SlowBuffers couldn't be passed as target to Buffer
copy().
Also included checks to see if Argument parameters are defined before
assigning their values. This offered a ~3x performance gain.
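A quick sketch of the now-working call (values illustrative):
  var SlowBuffer = require('buffer').SlowBuffer;
  var src = new Buffer([1, 2, 3, 4]);
  var dst = new SlowBuffer(4);
  src.copy(dst, 0, 0, 4);   // copying into a SlowBuffer target used to fail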
This adds a proxy for bytesWritten to the tls.CryptoStream. This
change makes the connection object more similar between HTTP and
HTTPS requests in an effort to avoid confusion.
See issue #4650 for more background information.
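Illustrative usage after the change (the host is a placeholder):
  var https = require('https');
  var req = https.get({ host: 'example.org', path: '/' }, function(res) {
    // req.connection is a tls.CryptoStream here; bytesWritten now exists
    // on it just as it does on a plain HTTP socket.
    console.log('bytes written:', req.connection.bytesWritten);
    res.resume();
  });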
We detect non-string and non-buffer values in onread and
turn the stream into an "objectMode" stream.
If we are in "objectMode" then howMuchToRead will
always return 1, state.length will always have 1 added
to it when there is a new item, and fromList always takes
the first value from the list.
This means that for object streams, the n in read(n) is
ignored and read() will always return a single value.
Fixed a bug with unpipe where the pipe would break because
the flowing state was not reset to false.
Fixed a bug with a sync cb(null, null) in _read which would
forget to end the readable stream.
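A sketch of the read(n) behavior described above, using the objectMode
option as it is exposed on the streams2 constructors (illustrative):
  var Readable = require('stream').Readable;
  var r = new Readable({ objectMode: true });
  r._read = function() {};
  r.push({ id: 1 });
  r.push({ id: 2 });
  console.log(r.read(100));  // { id: 1 } -- n is ignored in objectMode
  console.log(r.read());     // { id: 2 } -- one value per read()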
This is similar to commit 2cbf458 but this time for 204 No Content
instead of 304 Not Modified responses.
When the user sends a 204 response with a Transfer-Encoding: chunked
header, suppress sending the zero chunk and force the connection to
close.
Force the connection to close when the response is a 304 Not Modified
and the user has set a "Transfer-Encoding: chunked" header.
RFC 2616 mandates that 304 responses MUST NOT have a body but node.js
used to send out a zero chunk anyway to accommodate clients that don't
have special handling for 304 responses.
It was pointed out that this might confuse reverse proxies to the point
of creating security liabilities, so suppress the zero chunk and force
the connection to close.
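Sketch of the handler pattern this guards against (port is illustrative):
  var http = require('http');
  http.createServer(function(req, res) {
    // A 204/304 must not have a body; if a handler still sets chunked
    // encoding, node now skips the zero-length chunk and closes the
    // connection instead of confusing reverse proxies.
    res.writeHead(204, { 'Transfer-Encoding': 'chunked' });
    res.end();
  }).listen(8000);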
Fix an off-by-one error introduced in 9fe3734 that caused a regression
in the default endianness used for writes in DataView::setGeneric().
Fixes #4626.
Due to the nature of asynchronous programming, it's impossible to know
what will run on the next tick. Because of this, it's not correct to
maintain domain stack state between ticks.
Since the _fatalException handler is only invoked after the stack is
unwound, once it exits the tick will end. The only reasonable thing to
do in that case is to exit *all* domains.
Keeping a list of all sockets that were sent to the child process causes
a memory leak and is thus unacceptable (see #4587). However,
`server.close()` should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - will send the socket and track its
status. You should use it when you want to receive a `close` event on
sent sockets.
* child.send(socket) - will send the socket without tracking its status.
This performs much better, because of the smaller number of round trips
between master and child.
With both of these options `server.close()` will wait for all sent
sockets to get closed.
Keeping a list of all sockets that were sent to the child process causes
a memory leak and is thus unacceptable (see #4587). However,
`server.close()` should still work properly.
This commit introduces two options:
* child.send(socket, { track: true }) - will send the socket and track its
status. You should use it when you want `server.connections` to be a
reliable number, and to receive a `close` event on sent sockets.
* child.send(socket) - will send the socket without tracking its status.
This performs much better, because of the smaller number of round trips
between master and child.
With both of these options `server.close()` will wait for all sent
sockets to get closed.
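A sketch mirroring the call forms above (worker.js and the port are
placeholders):
  var net = require('net');
  var fork = require('child_process').fork;
  var child = fork('./worker.js');        // hypothetical worker script
  net.createServer(function(socket) {
    // Tracked: keeps 'close' events (and server.connections) reliable.
    child.send(socket, { track: true });
    // Untracked (default): fewer master/child round trips.
    // child.send(socket);
  }).listen(8000);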
Unfortunately, it's just too slow to do this in events.js. Users will
just have to live with not having events named __proto__ or toString.
This reverts commit b48e303af0.
This test starts two clustered HTTP servers on the same port.
It expects the first cluster to succeed and the second cluster
to fail with EADDRINUSE.
Reapplies commit cacd3ae, accidentally reverted in a2851b6.
Reject negative offsets in SlowBuffer::MakeFastBuffer(); a negative
offset allows the creation of buffers that point to arbitrary addresses.
Reported by Trevor Norris.
Problem 1: If stream.push() triggers a 'readable' event, and the user
calls `read(n)` with some n > the highWaterMark, then the push() will
return false (indicating that they should not push any more), but no
future 'readable' event is coming (because we're above the
highWaterMark).
Solution: return true from push() when needReadable is set.
Problem 2: A read(n) for n != 0, after the stream had encountered an
EOF, would not trigger the 'end' event if the EOF was pushed in
synchronously by the _read() function.
Solution: Check for ended in stream.read() and schedule an end event if
the length now equals 0.
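Problem 2 in miniature (an illustrative sketch):
  var Readable = require('stream').Readable;
  var r = new Readable();
  r._read = function() {
    this.push(null);        // EOF pushed synchronously from _read()
  };
  r.on('end', function() {
    console.log('end');     // with the fix, this still fires
  });
  r.read(1);                // a read(n) with n != 0 that hits synchronous EOF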
Fix #4585
Improvements:
* floating point operations are approx. 4x faster
* quiet NaNs are now written
* all floating point reads/writes are now done in C, so there is no more
need for lib/buffer_ieee754.js
* float values have more accurate min/max value checks
* added additional benchmarks for buffer reads/writes
* created benchmark/_bench_timer.js, a simple library that can be
included in any benchmark and provides an intelligent tracker for sync
and async tests
* added benchmarks for DataView set methods
* added checks and tests to make sure offset is greater than 0
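For context, the Buffer floating point API touched here, in a trivial
round trip:
  var buf = new Buffer(4);
  buf.writeFloatLE(0.5, 0);
  console.log(buf.readFloatLE(0));   // 0.5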
There was previously an assert() in there, but this part of the code is
so high-volume that the added cost made a measurable dent in http_simple.
Just checking inline is fine, though, and prevents a lot of potential
hazards.
Say that a stream's current read queue has 101 bytes in it, and the
underlying resource has ended (i.e., reached EOF).
If you do something like this:
stream.read(100); // leave a byte behind
stream.read(0); // read(0) for some reason
then the read(0) will get 0 from the howMuchToRead function. Since the
stream was ended, this was incorrectly treating the 0 as a "there is no
more in the buffer", and emitting 'end' before that last byte was read.
Why have the read(0) in the first place? We do this in some cases to
trigger the last few bytes of a net socket (such as a child process's
stdio pipes). This was causing issues when piping a `git archive` job
to a file: the resulting tarball was incomplete, because it occasionally
was not getting the last chunk.
Fix the following exception:
http.js:974
this._httpMessage.emit('close');
^
TypeError: Cannot call method 'emit' of null
at Socket.onServerResponseClose (http.js:974:21)
at Socket.EventEmitter.emit (events.js:124:20)
at net.js:421:10
at process._tickCallback (node.js:386:13)
at process._makeCallback (node.js:304:15)
Fixes #4586.
This also slightly changes the semantics, in that a 'readable'
event may be triggered by the first write() call, even if a
user has not yet called read().
This happens because the Transform _write() handler is calling
read(0) to start the flow of data. Technically, the new behavior
is more 'correct', since it is more in line with the semantics
of the 'readable' event in other streams.
When switching into compatibility mode by setting a `data` event listener,
the `_read()` method will be called immediately. If the method
implementation invokes its callback in the same tick, all emitted `data`
events will be discarded, because the `data` listener wasn't set yet.
Raise a TypeError when the argument to send() or sendto() is anything
but a Buffer.
Fixes the following assertion:
$ node -e 'require("dgram").createSocket("udp4").send("BAM")'
node: ../../src/udp_wrap.cc:220: static v8::Handle<v8::Value>
node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
`Buffer::HasInstance(args[0])' failed.
Aborted (core dumped)
Fixes #4496.
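For reference, the Buffer-based call that works (port and host are
illustrative):
  var dgram = require('dgram');
  var sock = dgram.createSocket('udp4');
  var msg = new Buffer('BAM');                 // must be a Buffer now
  sock.send(msg, 0, msg.length, 41234, 'localhost', function() {
    sock.close();
  });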
The test was failing in debug mode because the timeouts were set too
low. Fix that by increasing the timeouts. Admittedly not a great fix.
If this test keeps playing up, it's probably best to remove it.
Fixes #4528.
This test is timing sensitive and hence quite unreliable with debug
builds. What's worse is that it leaves a stray child process behind
that listens on the default test port and that makes all the tests
that come after it fail with EADDRINUSE errors.
Allows for arbitrary path to executable spawned using `fork`. This
fixes some issues around running multiple versions of node with workers
and allows arbitrary IPC with compatible executables.
Fixes #3248.
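A hedged sketch of what this enables, assuming the option is named
execPath (the paths are illustrative):
  var fork = require('child_process').fork;
  var child = fork('./worker.js', [], {
    execPath: '/opt/node-0.8/bin/node'   // run the worker with a different node
  });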
In JS, the expression ".1" is a floating point number. Issue 4268 concerns the
REPL interpreting floating point numbers that lead with a "." as keywords. The
original bugfix worked for this specific case but not for the general case:
var x = [
.1,
.2,
.3
];
The attached change and test (`.1+.1` should be `.2`) fix the bug.
Closes #4513.
Don't give names of built-in libraries special treatment.
Changes the REPL's behavior from this:
> var path = 42
> path
A different "path" already exists globally
To this:
> var path = 42
> path
42
Fixes #4512.
Calling send() on an unbound socket forces an implicit bind to
a random port.
332fea5 made the 'listening' event asynchronous. Unfortunately,
it also introduced a bug where the implicit bind was tried more
than once if send() was called again before the first bind operation
completed.
Address that by keeping track of the bind status and making sure that
bind() is called only once.
Fixes #4499.
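The failure mode in miniature (port and host are illustrative):
  var dgram = require('dgram');
  var sock = dgram.createSocket('udp4');
  var buf = new Buffer('x');
  sock.send(buf, 0, buf.length, 41234, 'localhost');
  // A second send() before the implicit bind finishes used to trigger a
  // second bind attempt; now the bind status is tracked and bind() runs once.
  sock.send(buf, 0, buf.length, 41234, 'localhost');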
While it's true that error objects have a history of getting snake_case
properties attached by the host system, it's a point of confusion to
Node users that comes up a lot. It's still 'experimental', so best to
change this sooner rather than later.
This adds a process._fatalException method which is called into from
C++ in order to either emit the 'uncaughtException' event, or emit
'error' on the active domain.
The 'uncaughtException' event is an implementation detail that it would
be nice to deprecate one day, so exposing it as part of the domain
machinery is not ideal.
Fix #4375
socket.readyState, .readable, and .writable behavior changed as
a result of the new streaming interfaces. Updated to be backwards
compatible with the current API and adds a regression test.
Closes #4461
Fixes #3226.
Consider a production server that uses a REPL to debug. Creating the instance
would wipe out the global cache of modules, and subsequent "require" calls in
the server would be reloaded from disk. The REPL should observe only, without
altering, its environment.
Make parser errors bubble up to the ClientRequest instead of the underlying
net.Socket object.
This is a back-port of commit c78678b from the master branch.
Fixes #3776.
Work around an issue with the glibc malloc() implementation where memory blocks
are never returned to the operating system when they are allocated with brk()
and have overlapping lifecycles.
Fixes #4283.
Streams2 style streams might have already kicked off a read() or write()
before emitting 'data' events. Make the test less dependent on ordering
of when data events occur.
This fixes the CONNECT/Upgrade HTTP functionality, which was not getting
sliced properly, because readable wasn't emitted on this tick.
Conflicts:
test/simple/test-http-connect.js
Verifies that the callback gets invoked <n> times during the lifetime of the
test script.
This is a back-port of commit d0e6c3f from the master branch.
Don't allow connections to stall indefinitely if the SSL/TLS handshake does
not complete.
Adds a new tls.Server and https.Server configuration option, handshakeTimeout.
Fixes #4355.
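A usage sketch (certificate paths and values are placeholders):
  var fs = require('fs');
  var https = require('https');
  https.createServer({
    key: fs.readFileSync('key.pem'),     // placeholder paths
    cert: fs.readFileSync('cert.pem'),
    handshakeTimeout: 120000             // ms before a stalled handshake is dropped
  }, function(req, res) {
    res.end('ok');
  }).listen(8443);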
The test assumes the parent and the child are scheduled fairly. Probably true
most of the time but not always, making it fail spuriously.
Bad test, remove it.
The V8 debug agent needs some time to become ready and no longer sends the
first break event response to a debug client. We wait some time before
connecting to the agent and check its break status by obtaining the
breakpoint list and seeing if a breakpoint exists on line 0.
Enable long stacktraces if NODE_DEBUG=fs is set in the environment. Only
applies to the default rethrow callback; it's to help you find places where
you forgot to pass in a callback.
Fix #4331
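The kind of mistake this helps locate (illustrative; run with
NODE_DEBUG=fs set in the environment):
  var fs = require('fs');
  fs.unlink('/tmp/does-not-exist');   // no callback: the ENOENT is rethrown by
                                      // the default callback, and the long
                                      // stacktrace points back to this call site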
Using double negation forces values into 32-bit space; because of this,
Math.ceil needs to be used. Since NaN comparisons are always false, use
that to our advantage to return 0 when the value is NaN.
Also added two tests to verify the changes.
* Added an isIP method to cares_wrap.cc that makes use of inet_pton.
* Modified net.isIP() to make use of the new C++ isIP method.
* Added new tests to test-net-isip.js.
This is a back-port of commit fb6377e from the master branch.
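For reference, net.isIP()'s return values:
  var net = require('net');
  net.isIP('127.0.0.1');   // 4
  net.isIP('::1');         // 6
  net.isIP('banana');      // 0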
Disabled the following unit tests:
* test-eio-race.js
* test-eio-race2.js
* test-eio-race4.js
These tests are known to fail on busy boxes due to being timing sensitive,
and are deemed not meaningful tests.
See https://github.com/joyent/node/issues/4272
Fixes #4272.
While updating the readline test cases to test both "terminal: false" and
"terminal: true" mode, it turned out that the test case testing utf8 chars
being sent over multiple write() calls was failing. The solution is to use
a string_decoder instance when parsing the "keypress" events.
Before this commit, readline was inconsistent in whether or not it would emit
"line" events with or without the trailing "\n" included. When "terminal"
mode was true, then there would be no "\n", when it was false, then the "\n"
would be present. However, the trailing "\n" doesn't add much, and most of the
time people just end up stripping it manually.
Part of #4243.
* Added an isIP method to cares_wrap.cc that makes use of inet_pton.
* Modified net.isIP() to make use of the new C++ isIP method.
* Added new tests to test-net-isip.js.
`url.format` should escape ? and # chars in pathname, and # chars in
search, because they change the semantics of the operation otherwise.
Don't escape % chars, or anything else. (see: #4082)
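A sketch of the resulting behavior (values illustrative):
  var url = require('url');
  url.format({ pathname: '/a?b#c', search: '?d#e' });
  // -> '/a%3Fb%23c?d%23e'  (? and # escaped in pathname, # escaped in
  //                         search; % itself is left alone)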
This is a flag to make it easier for users to upgrade through the
breaking crypto change, and easier for us to switch it back if it's a
problem.
Explicitly set default encoding to 'buffer' in other tests, in case it
ever changes back.
crypto: Hash and Hmac default to buffers
crypto: Move Cipher encoding logic to JS
crypto: Move Cipheriv encoding logic to JS
crypto: Move Decipher encoding logic to JS
crypto: Move Decipheriv into JS, default to buffers
crypto: Move Sign class to JS
crypto: Better encoding handling in Hash.update
crypto: Move Verify class to JS
crypto: Move DiffieHellman to JS, default to buffers
crypto: Move DiffieHellmanGroup to JS, default to buffers
Also, create a test for this feature
Previously, the "global" mode of REPLs was broken when created after another
non-global REPL (they would end up sharing the same context). Now that "global"
mode is fixed for that case (b1e78cef09), this
test case gets its global scope modified with "module" and other REPL-specific
properties, so disable the global check.
Before, there was a weird module-scoped "context" variable that was seemingly
shared across subsequent REPL instances, unless ".clear" was invoked inside
the REPL. To be proper, we need to ensure that each REPL gets its own
"context" object. I literally don't know why this "sharing" behavior was in
place before, but it was just plain wrong.
Consolidates all the formatting options into an "options" object argument.
This is so that we don't have to be constantly remembering the order of
the arguments and so that we can add more formatting options easily.
Closes #4085.
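Assuming this refers to util.inspect(), whose positional showHidden/depth/
colors arguments match that description, the consolidated call looks like:
  var util = require('util');
  var obj = { a: { b: { c: { d: 1 } } } };
  // Before: util.inspect(obj, true, 4, true) -- positional, easy to misorder.
  util.inspect(obj, { showHidden: true, depth: 4, colors: true });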
Make the 'listening' event handler in the master process see the actual port
that the worker bound to when the worker specified port 0, i.e. a random port.
Encoding failures can be somewhat confusing, especially when they are due to
control flow frameworks auto-filling parameters from the previous step's
output values into functions (such as toString and write) that developers
don't expect to take an encoding parameter. Outputting the value as part of
the message should make it easier to track down these sorts of bugs.
This reverts commit 790d651f0d.
This makes Duplex streams unworkable, and would only ever be a special
case for HTTP responses, which is not ideal.
Instead, we're going to just bless the 'finish' event for all Writable
streams in 0.10.
Just as with the 'WWW-Authenticate' HTTP header, the 'Proxy-Authenticate'
header might be received several times. Currently only one value is preserved.
This change allows multiple values to be received, joined by a comma and space.
A child process created with .fork() needed to call `process.exit()` explicitly
because the communication channel with the parent kept the event loop alive.
Fix that by only ref'ing the channel when there are 'message' event listeners.
Fixes #3799.
This addresses #4034. There are two problems happening:
1. The domain is not exited automatically when calling dispose() on it.
Then, since the domain is disposed, attempting to exit it again will do
nothing.
2. The active domain is stored on process.domain. Since thrown errors
call `process.emit('uncaughtException', er)`, and the process is an
event emitter with a `.domain` member, it re-enters the domain a second
time before calling the error handler, pushing it onto the stack again.
Thus, if the handler calls `domain.dispose()`, then the domain is now on
the stack twice, and cannot be exited properly. Since the domain is
disposed, any subsequent IO will be no-op'ed, since we've declared that
this context is done and best forgotten.
The solution here is twofold:
1. In EventEmitter.emit, do not enter the domain if `this===process`.
2. Automatically exit the domain when calling `domain.dispose()`.
Make sure the deletion event gets reported in the following scenario:
1. Watch a file.
2. The initial stat() goes okay.
3. Something deletes the watched file.
4. The second stat() fails with ENOENT.
The second stat() translates into the first 'change' event but a logic error
stopped it from getting emitted.
Fixes #4027.
Update the tls and https tests to explicitly set rejectUnauthorized instead of
relying on the NODE_TLS_REJECT_UNAUTHORIZED environment variable getting set.
This commit changes the default value of the rejectUnauthorized option from
false to true.
What that means is that tls.connect(), https.get() and https.request() will
reject invalid server certificates from now on, including self-signed
certificates.
There is an escape hatch: if you set the NODE_TLS_REJECT_UNAUTHORIZED
environment variable to the literal string "0", node.js reverts to its
old behavior.
Fixes #3949.
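The per-call escape hatch for callers who really do want the old behavior
(the host is a placeholder):
  var https = require('https');
  https.get({
    host: 'self-signed.example',
    rejectUnauthorized: false     // accept an invalid/self-signed certificate
  }, function(res) {
    res.resume();
  });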
Check that the calls to Integer::New() and Date::New() succeed and bail out if
they don't.
V8 returns an empty handle on stack overflow. Trying to set the empty handle as
a property on an object results in a NULL pointer dereference in release builds
and an assert in debug builds.
Fixes #4015.
An HTTP/1.0 client does not support 'Transfer-Encoding: chunked' unless it
explicitly requests it by sending a 'TE: chunked' header.
Before this commit, node.js always disabled chunked encoding for HTTP/1.0
clients. Now it will scan for the TE header and turn on chunked encoding if
requested and applicable.
Fixes #940.
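The client side of that negotiation, sketched with a raw socket (the port
is illustrative):
  var net = require('net');
  var conn = net.connect(8000, function() {
    // An HTTP/1.0 request that explicitly opts into chunked responses.
    conn.write('GET / HTTP/1.0\r\nTE: chunked\r\n\r\n');
  });
  conn.pipe(process.stdout);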
Don't execute the callback in the context of the global object.
MakeCallback() tries to apply the active domain to the callback. If the user
polluted the global object with a 'domain' property, as in the code example
below, MakeCallback() will try to apply that.
Example:
domain = {}; // missing var keyword is intentional
crypto.randomBytes(8, cb); // TypeError: undefined is not a function
Fixes #3956.
With this patch the IPC socket is no longer available in the
ChildProcess.stdio array. This shouldn't be very problematic, since
this socket was effectively non-functional; it would never emit any
events.
Throw an exception in the tls.Server constructor when the options object
doesn't contain either a PFX or a key/certificate combo.
Said change exposed a bug in simple/test-tls-junk-closes-server. Addressed.
Fixes #3941.
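The constructor now fails fast; a minimal sketch:
  var tls = require('tls');
  try {
    tls.createServer({});   // neither pfx nor key/cert supplied
  } catch (err) {
    console.error('refused to start:', err.message);
  }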