Don't set the oncomplete property in src/cares_wrap.cc; we can do it
just as easily in lib/dns.js.
Switch two closures to the 'function with _this_ object' model. Makes
it impossible for an overzealous closure to capture too much context
and accidentally hold on to too much memory.
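A minimal sketch of the two styles (names hypothetical, not the actual
dns internals):

```javascript
// Closure style: the callback closes over the enclosing scope, so
// everything `lookupClosure` references stays alive until it fires.
function lookupClosure(req) {
  var context = buildLotsOfContext(); // hypothetical heavy state
  req.oncomplete = function(err, addresses) {
    req.callback(err, addresses); // `context` is retained too
  };
}

// 'function with this object' style: one shared function, invoked
// with the request as `this`, so only the request itself is retained.
function onlookupcomplete(err, addresses) {
  this.callback(err, addresses);
}
function lookup(req) {
  req.oncomplete = onlookupcomplete;
}
```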
Change process.domain to use a getter/setter and access that property
via an array index. Array indexes are much faster to access from C++,
and the array can be passed to _setupDomainUse and stored as a
Persistent<Array>.
Add InDomain() and GetDomain() as trivial ways to access the domain
information in the native layer. This is important because it lets us
quickly check whether a domain is active, instead of just whether the
domain module has been loaded.
Don't use v8::Object::SetHiddenValue() to keep a reference alive to the
buffer, we can just as easily do that from JS land and it's a lot faster
to boot.
Because the buffer is now a visible property of the write request
object, it's essential that we do *not* log it - we'd be effectively
serializing the whole buffer to a pretty-printed string.
v0.10 allows strings for the offset, length and port arguments to
dgram.send() and dgram.sendto() but master before this commit would
abort with the following assert:
    node: ../../src/udp_wrap.cc:227: static void
    node::UDPWrap::DoSend(const v8::FunctionCallbackInfo<v8::Value>&,
    int): Assertion `args[2]->IsUint32()' failed.
Go beyond what v0.10 does and also add range checks: offset and length
should be >= 0, port should be between 1 and 65535.
That particular change needs to be back-ported to v0.10 because passing
a negative offset or length number aborts with one of the following
assertions:
    node: ../../src/udp_wrap.cc:264: static v8::Handle<v8::Value>
    node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
    `offset < Buffer::Length(buffer_obj)' failed.
Or:
    node: ../../src/udp_wrap.cc:265: static v8::Handle<v8::Value>
    node::UDPWrap::DoSend(const v8::Arguments&, int): Assertion
    `length <= Buffer::Length(buffer_obj) - offset' failed.
Interestingly enough, a negative port number is accepted in v0.10 but
is silently ignored.
This commit exposed a bug in the simple/test-dgram-close test which
has also been fixed.
When a stream is flowing, and not in the middle of a sync read, and
the read buffer currently has a length of 0, we can just emit a 'data'
event rather than push it onto the array, emit 'readable', and then
automatically call read().
As it happens, this is quite a frequent occurrence! Making this change
brings the HTTP benchmarks back into a good place after the removal of
the .ondata/.onend socket kludge methods.
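A simplified sketch of that fast path (helper name and shape are
hypothetical; the real code lives in lib/_stream_readable.js and may
differ in detail):

```javascript
// Hypothetical helper mirroring the logic described above.
function addChunk(stream, state, chunk) {
  if (state.flowing && state.length === 0 && !state.sync) {
    // Fast path: hand the chunk straight to the 'data' listener.
    stream.emit('data', chunk);
    stream.read(0);
  } else {
    // Otherwise: buffer the chunk, emit 'readable', wait for read().
    state.length += state.objectMode ? 1 : chunk.length;
    state.buffer.push(chunk);
    stream.emit('readable');
  }
}
```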
smalloc.alloc now accepts an optional third argument which allows
specifying the type of array that should be allocated. All available
types are now located on smalloc.Types.
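A hedged sketch of the new argument (type names per smalloc.Types):

```javascript
var smalloc = require('smalloc');

// Default: plain external memory (unsigned bytes).
var raw = smalloc.alloc(64);

// New optional type argument, using the enum on smalloc.Types.
var doubles = smalloc.alloc(3, smalloc.Types.Double);
```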
* Moved the ToObject check out of smalloc::Alloc and into JS. Direct
usage of that method is for internal use only and so can bypass the
possible coercion.
* Same has been done with smalloc::SliceOnto.
* smalloc::CopyOnto will now throw if the passed argument is not an
object.
* Remove the extra TargetFreeCallback function. There was a use for it
when it was working with a Local<T>, but that code has been removed,
making the function superfluous.
There are some agent subclasses using this today.
Despite the addRequest function being undocumented internal API, it's
easy enough to just support the old signature for backwards
compatibility.
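A hedged sketch of how both signatures can coexist (option names
assumed, not the verbatim implementation):

```javascript
// Sketch, not a drop-in patch for lib/_http_agent.js.
Agent.prototype.addRequest = function(req, options) {
  // Legacy signature: addRequest(req, host, port, localAddress)
  if (typeof options === 'string') {
    options = {
      host: options,
      port: arguments[2],
      localAddress: arguments[3]
    };
  }
  // ... continue with the options-object form
};
```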
`server.SNICallback` was initialized with `SNICallback.bind(this)`, so
the check `this.SNICallback === SNICallback` was always false, and
`_tls_wrap.js` always thought that a custom callback was in use instead
of the default one. This in turn caused the clienthello parser to be
enabled regardless of the presence of SNI contexts.
If an error listener is added to a stream using once() before the
stream is piped, it is invoked and removed during pipe(), but before
pipe() sees it, causing the error to be emitted again.
Fixes #4155, #4978.
It shouldn't ignore it!
There are two possible cases, which should be handled properly:
1. Having a default `SNICallback` which uses contexts added with the
`server.addContext(...)` routine.
2. Having a custom `SNICallback`.
In the first case we may want to opt out of setting the `.onsniselect`
method (and thus save some CPU time) if no contexts have been added.
But if a custom `SNICallback` is used, `.onsniselect` should always be
set, because server contexts don't affect it.
* Numeric values passed to alloc were converted to int32, not uint32,
before the range check, allowing a wraparound in ToUint32. This could
cause massive malloc calls and v8 fatal errors.
* dispose did not check whether the value was an Object, causing a
segfault if a Primitive was passed.
* kMaxLength was not enumerable.
Before this commit, events were set to undefined rather than deleted
from the EventEmitter's backing dictionary for performance reasons:
`delete obj.key` causes a transition of the dictionary's hidden class
and that can be costly.
Unfortunately, that introduces a memory leak when many events are added
and then removed again. The strings containing the event names are never
reclaimed by the garbage collector because they remain part of the
dictionary.
That's why this commit makes EventEmitter delete events again. This
effectively reverts commit 0397223.
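An illustration of the leak pattern this fixes (illustrative only):

```javascript
var EventEmitter = require('events').EventEmitter;
var ee = new EventEmitter();

for (var i = 0; i < 1e6; i++) {
  var name = 'event-' + i;
  ee.on(name, function() {});
  ee.removeAllListeners(name);
  // Before: ee._events[name] was merely set to undefined, so a million
  // one-off event-name strings stayed reachable from the dictionary.
  // After: the key is deleted and the strings can be collected.
}
```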
Fixes #5970.
Avoid a costly buffer-to-string operation. Instead, allocate a new
buffer, copy the chunk header and data into it and send that.
The speed difference is negligible on small payloads but it really
shines with larger (10+ kB) chunks. benchmark/http/end-vs-write-end
with 64 kB chunks gives 45-50% higher throughput. With 1 MB chunks,
the difference is a staggering 590%.
Of course, mileage will vary with real workloads and networks, but this
commit should have a positive impact on CPU and memory consumption.
Big kudos to Wyatt Preul (@wpreul) for reporting the issue and providing
the initial patch.
Fixes #5941 and #5944.
Don't throw an exception when the argument to %j is an object that
contains circular references, it's not helpful. Catch the exception
and return the string '[Circular]'.
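For example:

```javascript
var util = require('util');

var obj = {};
obj.self = obj; // circular reference

// Previously this threw (JSON.stringify cannot handle cycles);
// now the exception is caught and '[Circular]' is returned.
console.log(util.format('%j', obj)); // => '[Circular]'
```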
Prior, strings would first be converted to a Buffer before being written
to disk. Now the intermediary step has been removed.
Other changes of note:
* Class member "must_free" was added to req_wrap to track whether the
memory needs to be manually cleaned up after use.
* External String Resource support, so the memory will be used directly
instead of copying out the data.
* Docs have been updated to reflect that if position is not a number
then it is treated as null. Previously they specified that the argument
must be null, but that was not how the code worked. An attempt was made
to only support == null, but too many tests assumed that a non-number
would be enough.
* Docs have been updated to show that some of the write/writeSync
arguments are optional.
Passing the number of sent bytes to the callback is superfluous;
datagram sockets operate in atomic mode: either the sendmsg() system
call succeeds or it fails, but it never does partial writes.
Instead, report send errors to the callback. UDP error reporting is
fairly haphazard on most platforms. You should not expect reliable
delivery of anything besides EMSGSIZE and (possibly) ENETDOWN and
ENETUNREACH.
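A sketch of the resulting callback contract (port/host values
illustrative):

```javascript
var dgram = require('dgram');

var sock = dgram.createSocket('udp4');
var buf = new Buffer('ping');

sock.send(buf, 0, buf.length, 41234, 'localhost', function(err) {
  // No bytes-sent argument: the datagram either went out whole or not
  // at all. Errors (e.g. EMSGSIZE) now show up here.
  if (err) console.error('send failed:', err);
  sock.close();
});
```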
Fixes #2608.
This prevents the following sort of thing from being confusing:
```javascript
stream.on('data', function() { console.error('got data'); });
stream.pause(); // stop reading
// turns out no data is available
stream.push(null);
// Hand the stream to someone else, who does stuff...
setTimeout(function() {
  // too late! 'end' is already emitted!
  stream.on('end', function() { console.error('got end'); });
});
```
With this change, the `end` event is not emitted until you call `read()`
*past* the EOF null. So, a paused stream will not swallow the `end`
event and emit it before you `resume()` the stream.
If `obj` given to `cluster._getServer` has `_setServerData` or
`_getServerData` methods, the data will be synchronized across workers
and stored in master.
Includes:
* No need for `typeof` when checking undefined.
* length is coerced to uint so no need to check if < 0.
* Stay consistent and always throw `new` errors.
* Returning offset + magic number in every write is error prone.
Instead, return the result of the central write function, which returns
the correct offset.
In a rush to implement the fix in 35e0d60, I overlooked the logic that
causes 0-length Buffer instantiation to automatically skip assigning
the parent, regardless.
SlowBuffer(0) passes NULL instead of doing malloc(0). So when someone
attempted SlowBuffer(0).slice(0, 1), an assert would fail in
smalloc::SliceOnto.
It's important that the check go where it is, because the resulting
Buffer needs to have external array data allocated. In the case where a
user tries to slice a zero-length Buffer, NULL will also be passed as
the data argument.
Also fixed where the .parent attribute was set for zero-length Buffers.
There is no need to track the source of the slice if no slicing is
actually occurring.
Closes #5860.
In streams2, there is an "old mode" for compatibility. Once switched
into this mode, there is no going back.
With this change, there is a "flowing mode" and a "paused mode". If you
add a data listener, then this will start the flow of data. However,
hitting the `pause()` method will switch *back* into a non-flowing mode,
where the `read()` method will pull data out.
Every time `read()` returns a data chunk, it also emits a `data` event.
In this way, a passive data listener can be added, and the stream passed
off to some other reader, for use with progress bars and the like.
There is no API change beyond this added flexibility.
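In practice (sketch; the source stream is hypothetical):

```javascript
var stream = getReadableStreamSomehow(); // hypothetical source

stream.on('data', function(chunk) {
  // Attaching a 'data' listener starts the flow of data.
  console.log('observed %d bytes', chunk.length);
});

stream.pause(); // switches *back* into paused (non-flowing) mode

var chunk;
while (null !== (chunk = stream.read())) {
  // Each read() that returns a chunk also emits 'data', so the
  // passive listener above still sees everything.
}

stream.resume(); // back to flowing mode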
Libuv now returns errors directly. Make everything in src/ and lib/
follow suit.
The changes to lib/ are not strictly necessary but they remove the need
for the abominations that are process._errno and node::SetErrno().
It will be confusing if later on we add Buffer#dispose(), and smalloc
is its own C++ API anyways. So instead create a new require('smalloc')
to expose the previous Buffer.alloc/dispose methods, and expose
copyOnto and kMaxLength as well.
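Roughly, the new module surface looks like this (illustrative; the
copyOnto argument order is assumed):

```javascript
var smalloc = require('smalloc');

var a = smalloc.alloc(1024);       // previously Buffer.alloc
var b = smalloc.alloc(1024);
smalloc.copyOnto(a, 0, b, 0, 512); // copy raw memory between allocations
smalloc.dispose(a);                // previously Buffer.dispose
console.log(smalloc.kMaxLength);   // maximum allocation length
```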
Other changes:
* Added documentation and additional tests.
* smalloc::CopyOnto has changed from using assert() to throwing errors
on bad argument values, because it is now exposed to the user.
* Minor style fixes.
When using url.parse(), path and pathname usually return '/' when there
is no path available. However, when the protocol contains non-lowercase
letters and the input string has no trailing slash, both path and
pathname would be undefined.
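For example (behavior as described above):

```javascript
var url = require('url');

// Lowercase protocol: path and pathname are '/' as usual.
console.log(url.parse('http://example.com').pathname);

// Non-lowercase protocol with no trailing slash: these were
// undefined before the fix.
console.log(url.parse('HTTP://example.com').pathname);
```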
A pooled https agent may get a Connection: close but never finish
destroying the socket, as the prior request had already emitted
'finish', likely from a pipe.
Since the socket is not marked as destroyed, it may get reused by the
agent pool and result in an ECONNRESET.
Re: #5712, #5739.
Instead of destroying sockets when there are no pending requests, put
them in a freeSockets list, and unref() them so that they do not keep
the event loop open.
Also, set the default max sockets to Infinity, to prevent the awful
surprising deadlocks that happen when more connections are made than
maxSockets allows.
When creating a slice, make sure to propagate the originating parent.
This is to prevent a buf.parent.parent.(etc) scenario.
Also speed up the constructor by preventing lookup of non-existent
properties by setting them beforehand in the prototype. (See
https://github.com/joyent/node/commit/7ce5a31#commitcomment-3332779)
There was previously up to a second exit delay when exiting node
right after an http request/response, due to the utcDate() function
doing a setTimeout to update the cached date/time.
Fixing this should increase the performance of our http tests.
This is a big commit that touches just about every file in the src/
directory. The V8 API has changed in significant ways. The most
important changes are:
* Binding functions take a const v8::FunctionCallbackInfo<T>& argument
rather than a const v8::Arguments& argument.
* Binding functions return void rather than v8::Handle<v8::Value>. The
return value is returned with the args.GetReturnValue().Set() family
of functions.
* v8::Persistent<T> no longer derives from v8::Handle<T> and no longer
allows you to directly dereference the object that the persistent
handle points to. This means that the common pattern of caching
oft-used JS values in a persistent handle no longer quite works,
you first need to reconstruct a v8::Local<T> from the persistent
handle with the Local<T>::New(isolate, persistent) factory method.
A handful of (internal) convenience classes and functions have been
added to make dealing with the new API a little easier.
The most visible one is node::Cached<T>, which wraps a v8::Persistent<T>
with some template sugar. It can hold arbitrary types but so far it's
exclusively used for v8::Strings (which was by far the most commonly
cached handle type.)
If a transform stream has objectMode = true, it should allow falsey
values other than null, like 0, false, and ''. null is reserved to
indicate stream EOF, but other falsey values should flow through
properly.
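For instance:

```javascript
var Transform = require('stream').Transform;

var t = new Transform({ objectMode: true });
t._transform = function(chunk, enc, cb) {
  cb(null, chunk); // pass everything through
};

t.on('data', function(v) { console.log('got:', v); });
t.write(0);     // flows through
t.write(false); // flows through
t.write('');    // flows through
t.end();        // only null still signals EOF
```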
While the new Buffer implementation is much faster we still have the
necessity of using Buffer pools. This is undesirable because it may
still lead to unwanted memory retention, but for the time being this is
the best solution.
Because of this re-introduction, and since there is no more SlowBuffer
type, the SlowBuffer method has been re-purposed to return a non-pooled
Buffer instance. This will be helpful for developers to store data for
indeterminate lengths of time without introducing a memory leak.
Another change to Buffer pools was that they are only allocated if the
requested chunk is < poolSize / 2. This was done because allocations are
much quicker now, and it's a better use of the pool.
Memory allocations are now done through smalloc. The Buffer cc class
has been removed completely, but for backwards compatibility we have
left the namespace as Buffer.
The .parent attribute is only set if the Buffer is a slice of an
allocation, in which case it is set to the alloc object (not a Buffer).
The .offset attribute is now a ReadOnly set to 0, for backwards
compatibility. I'd like to remove it in the future (pre v1.0).
A few alterations have been made to how arguments are either coerced or
thrown. All primitives will now be coerced to their respective values,
and most out-of-range index requests will throw.
The indexes that are coerced were left that way for backwards
compatibility. For example, Buffer#slice operates more like Array#slice
and coerces out-of-range indexes instead of throwing. This may change
in the future.
The reason for wanting to throw for out of range indexes is because
giving js access to raw memory has high potential risk. To mitigate that
it's easier to make sure the developer is always quickly alerted to the
fact that their code is attempting to access beyond memory bounds.
Because SlowBuffer will be deprecated, and simply returns a new Buffer
instance, all tests on SlowBuffer have been removed.
Heapdumps will now show usage under "smalloc" instead of "Buffer".
ParseArrayIndex was added to node_internals to support proper uint
argument checking/coercion for external array data indexes.
SlabAllocator had to be updated since handle_ no longer exists.
Split `tls.js` into `_tls_legacy.js`, containing legacy
`createSecurePair` API, and `_tls_wrap.js` containing new code based on
`tls_wrap` binding.
Remove tests that are no longer useful/valid.
This reverts commit a40133d10c.
Unfortunately, this breaks socket.io. Even though it's not strictly an
API change, it is too subtle and in too brittle an area of node, to be
done in a stable branch.
Conflicts:
doc/api/http.markdown
Normalize the encoding in getEncoding() before using it. Fixes a
"AssertionError: Cannot change encoding" exception when the caller
mixes "utf8" and "utf-8".
Fixes #5655.
This is only relevant for CentOS 6.3 using kernel version 2.6.32.
On other linuxes and darwin, the `read` call gets an ECONNRESET in that
case. On sunos, the `write` call fails with EPIPE.
However, old CentOS will occasionally send an EOF instead of an
ECONNRESET or EPIPE when the client has been destroyed abruptly.
Make sure we don't keep trying to write or read more in that case.
Fixes #5504.
However, there is still the question of what libuv should do when it
gets an EOF. Apparently in this case, it will continue trying to read,
which is almost certainly the wrong thing to do.
That should be fixed in libuv, even though this works around the issue.
In cases where there are multiple @-chars in a url, Node currently
parses the hostname and auth sections differently than web browsers.
This part of the bug is serious, and should be landed in v0.10, and
also ported to v0.8, and releases made as soon as possible.
The less serious issue is that there are many other sorts of malformed
urls which Node either accepts when it should reject, or interprets
differently than web browsers. For example, `http://a.com*foo` is
interpreted by Node like `http://a.com/*foo` when web browsers treat
this as `http://a.com%3Bfoo/`.
In general, *only* the `hostEndingChars` should be the characters that
delimit the host portion of the URL. Most of the current `nonHostChars`
that appear in the hostname should be escaped, but some of them (such as
`;` and `%` when it does not introduce a hex pair) should raise an
error.
We need to have a broader discussion about whether it's best to throw in
these cases, and potentially break extant programs, or return an object
that has every field set to `null` so that any attempt to read the
hostname/auth/etc. will appear to be empty.
In some cases, the http CONNECT/Upgrade API is unshifting an empty
bodyHead buffer onto the socket.
Normally, stream.unshift(chunk) does not set state.reading=false.
However, this check was not being done for the case when the chunk was
empty (either `''` or `Buffer(0)`), and as a result, it was causing the
socket to think that a read had completed, and to stop providing data.
This bug is not limited to http or web sockets, but rather would affect
any parser that unshifts data back onto the source stream without being
very careful to never unshift an empty chunk. Since the intent of
unshift is to *not* change the state.reading property, this is a bug.
Fixes #5557. Fixes LearnBoost/socket.io#1242.
Before this, entering something like:

    > JSON.parse('066');

resulted in the "..." prompt instead of displaying the expected
"SyntaxError: Unexpected number".
1. Emit `sslOutEnd` only when `_internallyPendingBytes() === 0`.
2. Read before checking `._halfRead`; otherwise we'll see only the
previous value and will invoke the `._write` callback improperly.
3. Wait for both `end` and `finish` events in `.destroySoon`.
4. Unpipe encrypted stream from socket to prevent write after destroy.
A stream's `._write()` callback should be invoked only after its
opposite stream has finished processing incoming data; otherwise the
`finish` event fires too early and the connection might be closed while
there's still data to send to the client.
See #5544.
Quote from SSL_shutdown man page:
The output of SSL_get_error(3) may be misleading,
as an erroneous SSL_ERROR_SYSCALL may be flagged even though
no error occurred.
Also, handle all other errors to prevent assertion in `ClearError()`.
When bad data is written to an EncryptedStream, it first reaches the
ClientHello parser and only reaches OpenSSL after the parser refuses
it. But the ClientHello parser has a limited buffer, so the write could
return `bytes_written` < `incoming_bytes`, which is never the case when
working with OpenSSL.
After such errors the ClientHello parser disables itself and passes all
data through to OpenSSL. So simply writing the data one more time will
push the rest into OpenSSL and let it handle it.
Otherwise, writing an empty string causes the whole program to grind to
a halt when piping data into http messages.
This wasn't as much of a problem (though it WAS a bug) in 0.8 and
before, because our hyperactive 'drain' behavior would mean that some
*previous* write() would probably have a pending drain event, and cause
things to start moving again.
This saves a few calls to gettimeofday(), which can be expensive and is
potentially subject to clock drift. Instead, use the loop time, which
uses hrtime internally.
Fixes #5497.
This commit adds an optimization to the HTTP client that makes it
possible to:
* Pack the headers and the first chunk of the request body into a
single write().
* Pack the chunk header and the chunk itself into a single write().
Because only one write() system call is issued instead of several,
the chances of data ending up in a single TCP packet are phenomenally
higher: the benchmark with `type=buf size=32` jumps from 50 req/s to
7,500 req/s, a 150-fold increase.
This commit removes the check from e4b716ef that pushes binary encoded
strings into the slow path. The commit log mentions that:
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
For hex and base64 strings that's certainly true but binary strings
are 'das Ding an sich': string.length is the same before and after
decoding.
Fixes #5528.
When an internal API needs a timeout, it should use
timers._unrefActive, since that won't hold the loop open. This solves
the problem where you might have unref'd the socket handle but the
timeout for the socket was still active.
Previously one could write anywhere in a buffer pool if they
accidentally got their offset wrong, mainly because the C++-level
checks only test against the parent slow buffer and not against the JS
object properties. Now we make sure values can't go beyond bounds, and
let the dev know when they would.
Add localAddress and localPort properties to tls.CleartextStream.
Like remoteAddress and remotePort, these delegate to the backing
net.Socket object.
Refs #5502.
Test case:

    var t = setInterval(function() {}, 1);
    process.nextTick(t.unref);

Output:

    Assertion failed: (args.Holder()->InternalFieldCount() > 0),
    function Unref, file ../src/handle_wrap.cc, line 78.
setInterval() returns a binding layer object. Make it stop doing that;
wrap the raw process.binding('timer_wrap').Timer object in a Timeout
object.
Fixes #4261.
Commit 38149bb changes http.get() and http.request() to escape unsafe
characters. However, that creates an incompatibility with v0.10 that
is difficult to work around: if you escape the path manually, then in
v0.11 it gets escaped twice. Change lib/http.js so it no longer tries
to fix up bad request paths, simply reject them with an exception.
The actual check is rather basic right now. The full check for illegal
characters is difficult to implement efficiently because it requires a
few characters of lookahead. That's why it currently only checks for
spaces because those are guaranteed to create an invalid request.
Fixes #5474.
getServers returns an array of IPs that are currently being used for
resolution.
setServers takes an array of IPs that are to be used for resolution;
it will throw on invalid input but preserve the original configuration.
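For example:

```javascript
var dns = require('dns');

console.log(dns.getServers()); // e.g. ['8.8.8.8']

dns.setServers(['8.8.8.8', '8.8.4.4']); // replaces the resolver list
try {
  dns.setServers(['not-an-ip']); // throws; original config preserved
} catch (e) {
  console.error(e.message);
}
```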
Pretty much everything assumes strings to be utf-8, but crypto
traditionally used binary strings, so we need to keep the default
that way until most users get off of that pattern.
If there is an encoding, and we do 'stream.push(chunk, enc)', and the
encoding argument matches the stated encoding, then we're converting from
a string, to a buffer, and then back to a string. Of course, this is a
completely pointless bit of work, so it's best to avoid it when we know
that we can do so safely.
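Sketch:

```javascript
var Readable = require('stream').Readable;

var rs = new Readable({ encoding: 'utf8' });
rs._read = function() {};

// Chunk encoding matches the stream's encoding: the string can be
// kept as-is instead of round-tripping through a Buffer.
rs.push('hello', 'utf8');
```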
Empirical evidence suggests that OS-level load balancing (that is,
having multiple processes listen on a socket and have the operating
system wake up one when a connection comes in) produces skewed load
distributions on Linux, Solaris and possibly other operating systems.
The observed behavior is that a fraction of the listening processes
receive the majority of the connections. From the perspective of the
operating system, that somewhat makes sense: a task switch is expensive,
to be avoided whenever possible. That's why the operating system likes
to give preferential treatment to a few processes, because it reduces
the number of switches.
However, that rather subverts the purpose of the cluster module, which
is to distribute the load as evenly as possible. That's why this commit
adds (and defaults to) round-robin support, meaning that the master
process accepts connections and distributes them to the workers in a
round-robin fashion, effectively bypassing the operating system.
Round-robin is currently disabled on Windows due to how IOCP is wired
up. It works and you can select it manually but it probably results in
a heavy performance hit.
Fixes #4435.
Commit 9352c19 ("child_process: don't emit same handle twice") trades
one bug for another.
Before said commit, a handle was sometimes delivered with messages it
didn't belong to.
The bug fix introduced another bug that needs some explaining. On UNIX
systems, handles are basically file descriptors that are passed around
with the sendmsg() and recvmsg() system calls, using auxiliary data
(SCM_RIGHTS) as the transport.
node.js and libuv depend on the fact that none of the supported systems
ever emit more than one SCM_RIGHTS message from a recvmsg() syscall.
That assumption is something we should probably address someday for the
sake of portability but that's a separate discussion.
So, SCM_RIGHTS messages are never coalesced. SCM_RIGHTS and normal
messages however _are_ coalesced. That is, recvmsg() might return this:

    recvmsg();  // { "message-with-fd", "message", "message" }

The operating system implicitly breaks pending messages along
SCM_RIGHTS boundaries. Most Unices break before such messages but Linux
also breaks _after_ them. When the sender looks like this:

    sendmsg("message");
    sendmsg("message-with-fd");
    sendmsg("message");

Then on most Unices the receiver sees messages arriving like this:

    recvmsg();  // { "message" }
    recvmsg();  // { "message-with-fd", "message" }

The bug fix in commit 9352c19 assumes this behavior. On Linux however,
those messages can also come in like this:

    recvmsg();  // { "message", "message-with-fd" }
    recvmsg();  // { "message" }
In other words, it's incorrect to assume that the file descriptor is
always attached to the first message. This commit makes node wise up.
Fixes #5330.
This adds proper support for the following situation:

    w.cork();
    w.write(...);
    w.cork();
    w.write(...);
    w.uncork();
    w.write(...);
    w.uncork();
This is relevant when you have a function (as we do in HTTP) that wants
to use cork, but where in some cases you want a cork/uncork *around*
that function, without losing the benefits of writev.
In synchronous Writable streams (where the _write cb is called on the
current tick), the 'finish' event (and thus the end() callback) can in
some cases be called before all the write() callbacks are called.
Use a counter, and have stream.Transform rely on the 'prefinish' event
instead of the 'finish' event.
This has zero effect on most streams, but it corrects an edge case and
makes it perform more deterministically, which is a Good Thing.
Implement support for debugging cluster workers. Each worker process
is assigned a new debug port in an increasing sequence. I.e. when the
master process uses port 5858, then worker 1 uses port 5859, worker 2
uses port 5860, and so on.
Introduce a new command-line parameter '--debug-port=' which sets the
debug port but does not start the debugger. This option works for all
node processes; it is not specific to cluster workers.
Fixes joyent/node#5318.
When the developer calls setBreakpoint with an unknown script name, we
convert the script name into a regular expression matching all paths
ending with the given name (the name can be a relative path too).
To create such a breakpoint in V8, we use type `scriptRegExp` instead
of `scriptId` for the `setbreakpoint` request.
To restore such a breakpoint, we save the original script name sent by
the user. We use this original name to set (restore) the breakpoint in
the new child process.
This is a back-port of commit 5db936d from the master branch.
Add a watchdog class which executes a timer in a separate event loop in
a separate thread that will terminate v8 execution if it expires.
Add timeout argument to functions in vm module which use the watchdog
if a non-zero timeout is specified.
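A sketch, shown with the options-object form used by later releases;
the exact signature at the time of this commit may have differed:

```javascript
var vm = require('vm');

try {
  // The watchdog terminates V8 execution once the timeout (ms) expires.
  vm.runInThisContext('while (true) {}', { timeout: 100 });
} catch (e) {
  console.error('terminated:', e.message);
}
```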
Forward-port the comments from commit 01e2920 (v0.10) to the master
branch. Everything else from that patch already exists in master.
It didn't merge cleanly because lib/http.js has been split up in
several files.
Fixed a bug in the debugger repl where the `restart` command did not
work when a custom debug port was specified via the command-line option
--port={number}.
File test/simple/helper-debugger-repl.js was extracted from
test/simple/test-debugger-repl.js.
Fixes #3740.
In the case of pipelined requests, you can have a situation where
the socket gets destroyed via one req/res object, but then trying
to destroy *another* req/res on the same socket will cause it to
call undefined.destroy(), since it was already removed from that
message.
Add a guard to OutgoingMessage.destroy and IncomingMessage.destroy
to prevent this error.
It needs to apply the Transform class when the _readableState,
_writableState, or _transformState properties are accessed; otherwise
things like setEncoding and on('data') don't work properly.
Also, the method wrappers are no longer needed; they were only
problematic because they accessed the undefined properties.
Clean up and DRY the cluster source code. Fix a few bugs while we're
here:
* Short-lived handles in long-lived worker processes were never
reclaimed, resulting in resource leaks.
* Handles in the master process are now closed when the last worker
that holds a reference to them quits. Previously, they were only
closed at cluster shutdown.
* The cluster object no longer exposes functions/properties that are
only valid in the 'other' process, e.g. cluster.fork() is no longer
exported in worker processes.
So much goodness and still manages to reduce the line count from 590
to 320.
An absolute path will always open the same location regardless of your
current working directory. For posix, this just means path.charAt(0) ===
'/', but on Windows it's a little more complicated.
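For illustration (assuming the new helper is path.isAbsolute(), per the
issue below; results shown for the respective platform):

```javascript
var path = require('path');

// posix: absolute iff it starts with '/'
path.isAbsolute('/foo/bar'); // true
path.isAbsolute('./baz');    // false

// win32: drive-letter and UNC forms count too
path.isAbsolute('C:\\foo');    // true
path.isAbsolute('\\\\server'); // true
path.isAbsolute('bar\\baz');   // false
```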
Fixes joyent/node#5299.
4716dc6 made assert.equal() and related functions work better by
generating a better toString() from the expected, actual, and operator
values passed to fail(). Unfortunately, this was accomplished by putting
the generated message into the error's `name` property. When you passed
in a custom error message, the error would put the custom error into
`name` *and* `message`, resulting in helpful string representations like
"AssertionError: Oh no: Oh no".
This commit resolves that issue by storing the generated message in the
`message` property while leaving the error's name alone and adding
a regression test so that this doesn't pop back up later.
Closes #5292.
I broke dgram.Socket#bind(port, cb) almost a year ago in 332fea5a, but
it wasn't until today that someone complained, and none of the tests
caught it because they all either specify the address or omit the
callback.
Anyway, now it works again and does what you expect: it binds the
socket to the "any" address ("0.0.0.0" for IPv4 and "::" for IPv6).
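That is:

```javascript
var dgram = require('dgram');
var sock = dgram.createSocket('udp4');

// Port plus callback, no address argument: works again.
sock.bind(1234, function() {
  // Bound to 0.0.0.0:1234.
  sock.close();
});
```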
Make http.request() and friends escape unsafe characters in the request
path. That is, a request for '/foo bar' is now escaped as '/foo%20bar'.
Before this commit, the path was used as-is in the request status line,
creating an invalid HTTP request ("GET /foo bar HTTP/1.1").
Fixes #4381.
Fix #5272.
The consumption of a readable stream is a dance with 3 partners.
1. The specific stream Author (A)
2. The Stream Base class (B), and
3. The Consumer of the stream (C)
When B calls the _read() method that A implements, it sets a 'reading'
flag, so that parallel calls to _read() can be avoided. When A calls
stream.push(), B knows that it's safe to start calling _read() again.
The consumer C may be some kind of parser that wants, in some cases, to
pass the source stream off to some other party, but not before "putting
back" some bit of previously consumed data (as in the case of Node's
websocket http upgrade implementation). So, stream.unshift() will
generally *never* be called by A, but *only* called by C.
Prior to this patch, stream.unshift() *also* unset the state.reading
flag, meaning that C could indicate the end of a read, and B would
dutifully fire off another _read() call to A. This is inappropriate.
In the case of fs streams, and other variably-laggy streams that don't
tolerate overlapped _read() calls, this causes big problems.
Also, calling stream.unshift() after the 'end' event did not raise any
kind of error, but would cause very strange behavior indeed. Calling it
after the EOF chunk was seen, but before the 'end' event was fired,
would also cause weird behavior, and could lead to data being lost,
since it would not emit another 'readable' event.
This change makes it so that:
1. stream.unshift() does *not* set state.reading = false
2. stream.unshift() is allowed up until the 'end' event.
3. unshifting onto an EOF-encountered and zero-length (but not yet
end-emitted) stream will defer the 'end' event until the new data is
consumed.
4. pushing onto an EOF-encountered stream is now an error.
So, if you read(), you have that single tick to safely unshift() data
back into the stream, even if the null chunk was pushed, and the length
was 0.
Don't scan the whole string for a "NODE_" substring, just check that
the string starts with the expected prefix.
This is a reprise of dbbfbe7 but this time for the child_process
module.
Don't scan the whole string for a "NODE_CLUSTER_" substring, just check
that the string starts with the expected prefix. The linear scan was
causing a noticeable (but unsurprising) slowdown on messages with a
large .cmd string property.
Buffer.byteLength() works only for string inputs. Thus, when a
connection has a pending Buffer to write, it should just use its length
instead of throwing an exception.
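A minimal sketch of the guard (shape assumed):

```javascript
// Only strings need Buffer.byteLength(); a pending Buffer already
// knows its byte length.
function chunkSize(chunk, encoding) {
  return typeof chunk === 'string' ?
      Buffer.byteLength(chunk, encoding) :
      chunk.length;
}
```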
Since 049903e, an end callback could be called before a write
callback if end() is called before the write is done. This patch
resolves the issue.
In collaboration with @gne.
Fixes felixge/node-formidable#209. Fixes #5215.
We were assuming that any string can be concatenated safely to
CRLF. However, for hex, base64, or binary encoded writes, this
is not the case, and results in sending the incorrect response.
An unusual edge case, but certainly a bug.
RFC 6125 explicitly states that a client "MUST NOT seek a match
for a reference identifier of CN-ID if the presented identifiers
include a DNS-ID, SRV-ID, URI-ID, or any application-specific
identifier types supported by the client", but it MAY do so if
none of the mentioned identifier types (but others) are present.
If an http response has an 'end' handler that throws, then the socket
will never be released back into the pool.
Granted, we do NOT guarantee that throwing will never have adverse
effects on Node internal state. Such a guarantee cannot be reasonably
made in a shared-global mutable-state side-effecty language like
JavaScript. However, in this case, it's a rather trivial patch to
increase our resilience a little bit, so it seems like a win.
There is no semantic change in this case, except that some event
listeners are removed, and the `'free'` event is emitted on nextTick, so
that you can schedule another request which will re-use the same socket.
From the user's point of view, there should be no detectable difference.
Closes #5107.
The v0.8 Stream.pipe() method automatically destroyed the destination
stream whenever the src stream closed. However, this caused a lot of
problems, and was removed by popular demand. (Many userland modules
still have a no-op destroy() method just because of this.) It was also
very hazardous because this would be done even if { end: false } was
passed in the pipe options.
In v0.10, we decided that the 'close' event and destroy() method are
application-specific, and pipe() doesn't automatically call destroy().
However, TLS actually depended (silently) on this behavior. So, in this
case, we should just go ahead and destroy the thing when close happens.
Closes #5145.
Expand the JSON representation of Buffer to include type information
so that it can be deserialized in JSON.parse() without context.
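For example:

```javascript
var buf = new Buffer('test');

console.log(JSON.stringify(buf));
// {"type":"Buffer","data":[116,101,115,116]}

// Round-trip without any surrounding context:
var copy = JSON.parse(JSON.stringify(buf), function(key, value) {
  return value && value.type === 'Buffer' ? new Buffer(value.data) : value;
});
console.log(copy.toString()); // 'test'
```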
Fixes #5110.
Fixes #5143.