Array reference versus value semantics

Any programming language that provides arrays (lists, vectors, tuples, etc.) must decide whether they have reference or value semantics; the usual choice is reference semantics for mutable arrays and value semantics for immutable ones. JavaScript, which provides mutable arrays, appears to have chosen reference semantics. For example, given

var a = [1, 2, 3];
var b = [1, 2, 3];

then a != b, as expected: though they have the same contents, they are different arrays. However, when you use them as keys in an object, the picture changes; if you set obj[a] to a value, then obj[b] retrieves the same value. Furthermore, this remains true if you change the contents of the arrays. At least when I tested it in Rhino, it behaves as though the interpreter were recursively comparing the full contents of the supplied and stored key arrays on every lookup, complete with a check for the infinite loop that would occur if one of the arrays were made to contain a reference to itself.

Is this the intended/specified behavior in all implementations? Does it also apply to objects used as keys? Is there any way to get the other behavior, i.e. to look up values using arrays as keys with reference semantics?
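For reference, here is a minimal sketch of the kind of test that shows the basic behavior (it assumes a Rhino-style shell where print() is available; console.log would serve the same purpose elsewhere):

var obj = {};
var a = [1, 2, 3];
var b = [1, 2, 3];

print(a == b);                    // false: a and b are distinct array objects
obj[a] = "value stored under a";  // store a value using array a as the key
print(obj[b]);                    // "value stored under a": b reaches the same entry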