Revert "Upgrade V8 to 2.4.5"
This reverts commit e2274412488ab310decb8494ab41009342b3c2f6. Build fails on Mac.
parent 893ebe7230
commit 4df999f85f

deps/v8/ChangeLog (vendored): 148 changes
@@ -1,25 +1,12 @@
-2010-09-22: Version 2.4.5
-
-        Changed the RegExp benchmark to exercise the regexp engine on different
-        inputs by scrambling the input strings.
-
-        Fixed a bug in keyed loads on strings.
-
-        Fixed a bug with loading global function prototypes.
-
-        Fixed a bug with profiling RegExp calls (issue http://crbug.com/55999).
-
-        Performance improvements on all platforms.
-
-
 2010-09-15: Version 2.4.4
 
-        Fixed bug with hangs on very large sparse arrays.
+        Fix bug with hangs on very large sparse arrays.
 
-        Now tries harder to free up memory when running out of space.
+        Try harder to free up memory when running out of space.
 
-        Added heap snapshots to JSON format to API.
+        Add heap snapshots to JSON format to API.
 
-        Recalibrated benchmarks.
+        Recalibrate benchmarks.
 
 
 2010-09-13: Version 2.4.3
@@ -55,33 +42,33 @@
 
 2010-09-01: Version 2.4.0
 
-        Fixed bug in Object.freeze and Object.seal when Array.prototype or
-        Object.prototype are changed (issue 842).
+        Fix bug in Object.freeze and Object.seal when Array.prototype or
+        Object.prototype is changed (issue 842).
 
-        Updated Array.splice to follow Safari and Firefox when called
+        Update Array.splice to follow Safari and Firefox when called
         with zero arguments.
 
-        Fixed a missing live register when breaking at keyed loads on ARM.
+        Fix a missing live register when breaking at keyed loads on ARM.
 
         Performance improvements on all platforms.
 
 
 2010-08-25: Version 2.3.11
 
-        Fixed bug in RegExp related to copy-on-write arrays.
+        Fix bug in RegExp related to copy-on-write arrays.
 
-        Refactored tools/test.py script, including the introduction of
+        Refactoring of tools/test.py script, including the introduction of
         VARIANT_FLAGS that allows specification of sets of flags with which
         all tests should be run.
 
-        Fixed a bug in the handling of debug breaks in CallIC.
+        Fix a bug in the handling of debug breaks in CallIC.
 
         Performance improvements on all platforms.
 
 
 2010-08-23: Version 2.3.10
 
-        Fixed bug in bitops on ARM.
+        Fix bug in bitops on ARM.
 
         Build fixes for unusual compilers.
 
@@ -92,7 +79,7 @@
 
 2010-08-18: Version 2.3.9
 
-        Fixed compilation for ARMv4 on OpenBSD/FreeBSD.
+        Fix compilation for ARMv4 on OpenBSD/FreeBSD.
 
         Removed specialized handling of GCC 4.4 (issue 830).
 
@@ -133,7 +120,7 @@
         Fixed handling of JSObject::elements in CalculateNetworkSize
         (issue 822).
 
-        Allowed compiling with strict aliasing enabled on GCC 4.4 (issue 463).
+        Allow compiling with strict aliasing enabled on GCC 4.4 (issue 463).
 
 
 2010-08-09: Version 2.3.6
@@ -143,7 +130,7 @@
 
         Object.seal and Object.freeze return the modified object (issue 809).
 
-        Fixed building using GCC 4.4.4.
+        Fix building using GCC 4.4.4.
 
 
 2010-08-04: Version 2.3.5
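The 2.3.6 entry above notes that Object.seal and Object.freeze return the object they modify (issue 809). A minimal sketch (not part of this commit) of why that return value is convenient:

```javascript
// Because Object.freeze returns its argument, an object can be frozen
// at the point of creation in a single expression.
var config = Object.freeze({ retries: 3 });

console.log(Object.freeze(config) === config);  // true: freeze returns its argument
console.log(Object.isFrozen(config));           // true
```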
@@ -152,7 +139,7 @@
         dot-notation property access now allows keywords. Also allowed
         non-identifiers after "get" or "set" in an object initialiser.
 
-        Randomized the addresses of allocated executable memory on Windows.
+        Randomize the addresses of allocated executable memory on Windows.
 
 
 2010-08-02: Version 2.3.4
@@ -264,15 +251,15 @@
 
 2010-06-30: Version 2.2.21
 
-        Fixed bug in externalizing some ASCII strings (Chromium issue 47824).
+        Fix bug in externalizing some ASCII strings (Chromium issue 47824).
 
-        Updated JSON.stringify to floor the space parameter (issue 753).
+        Update JSON.stringify to floor the space parameter (issue 753).
 
-        Updated the Mozilla test expectations to the newest version.
+        Update the Mozilla test expectations to the newest version.
 
-        Updated the ES5 Conformance Test expectations to the latest version.
+        Update the ES5 Conformance Test expectations to the latest version.
 
-        Updated the V8 benchmark suite.
+        Update the V8 benchmark suite.
 
         Provide actual breakpoints locations in response to setBreakpoint
         and listBreakpoints requests.
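The 2.2.21 entry above mentions flooring the JSON.stringify space parameter (issue 753). A small illustration (not part of this commit) of that behavior: a fractional indent width behaves like its floor.

```javascript
// JSON.stringify floors a fractional `space` argument before using it
// as the indentation width, so 2.9 indents exactly like 2.
var two = JSON.stringify({ a: 1 }, null, 2);
var twoPointNine = JSON.stringify({ a: 1 }, null, 2.9);

console.log(two === twoPointNine);  // true
```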
@@ -280,13 +267,13 @@
 
 2010-06-28: Version 2.2.20
 
-        Fixed bug with for-in on x64 platform (issue 748).
+        Fix bug with for-in on x64 platform (issue 748).
 
-        Fixed crash bug on x64 platform (issue 756).
+        Fix crash bug on x64 platform (issue 756).
 
-        Fixed bug in Object.getOwnPropertyNames. (chromium issue 41243).
+        Fix bug in Object.getOwnPropertyNames. (chromium issue 41243).
 
-        Fixed a bug on ARM that caused the result of 1 << x to be
+        Fix a bug on ARM that caused the result of 1 << x to be
         miscalculated for some inputs.
 
         Performance improvements on all platforms.
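The 2.2.20 entry above fixed `1 << x` on ARM. For context (not part of this commit): JavaScript masks the shift count to its low five bits, so a correct engine must produce these exact values on every architecture, which is what the ARM backend was getting wrong for some inputs.

```javascript
// JavaScript shift semantics: the left operand is converted to a 32-bit
// signed integer and the shift count is taken modulo 32.
function shiftOne(x) {
  return 1 << x;
}

console.log(shiftOne(5));   // 32
console.log(shiftOne(31));  // -2147483648 (the sign bit)
console.log(shiftOne(32));  // 1: only the low 5 bits of the count are used
```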
@@ -294,7 +281,7 @@
 
 2010-06-23: Version 2.2.19
 
-        Fixed bug that causes the build to break when profillingsupport=off
+        Fix bug that causes the build to break when profillingsupport=off
         (issue 738).
 
         Added expose-externalize-string flag for testing extensions.
@@ -302,7 +289,7 @@
         Resolve linker issues with using V8 as a DLL causing a number of
         problems with unresolved symbols.
 
-        Fixed build failure for cctests when ENABLE_DEBUGGER_SUPPORT is not
+        Fix build failure for cctests when ENABLE_DEBUGGER_SUPPORT is not
         defined.
 
         Performance improvements on all platforms.
@@ -313,11 +300,11 @@
         Added API functions to retrieve information on indexed properties
         managed by the embedding layer. Fixes bug 737.
 
-        Made ES5 Object.defineProperty support array elements. Fixes bug 619.
+        Make ES5 Object.defineProperty support array elements. Fixes bug 619.
 
-        Added heap profiling to the API.
+        Add heap profiling to the API.
 
-        Removed old named property query from the API.
+        Remove old named property query from the API.
 
         Incremental performance improvements.
 
@@ -343,12 +330,12 @@
 
 2010-06-07: Version 2.2.15
 
-        Added an API to control the disposal of external string resources.
+        Add an API to control the disposal of external string resources.
 
-        Added missing initialization of a couple of variables which makes
+        Add missing initialization of a couple of variables which makes
         some compilers complaint when compiling with -Werror.
 
-        Improved performance on all platforms.
+        Improve performance on all platforms.
 
 
 2010-06-02: Version 2.2.14
@@ -362,12 +349,12 @@
 
 2010-05-31: Version 2.2.13
 
-        Implemented Object.getOwnPropertyDescriptor for element indices and
+        Implement Object.getOwnPropertyDescriptor for element indices and
         strings (issue 599).
 
-        Fixed bug for windows 64 bit C calls from generated code.
+        Fix bug for windows 64 bit C calls from generated code.
 
-        Added new scons flag unalignedaccesses for arm builds.
+        Add new scons flag unalignedaccesses for arm builds.
 
         Performance improvements on all platforms.
 
@@ -382,7 +369,7 @@
 
 2010-05-21: Version 2.2.11
 
-        Fixed crash bug in liveedit on 64 bit.
+        Fix crash bug in liveedit on 64 bit.
 
         Use 'full compiler' when debugging is active. This should increase
         the density of possible break points, making single step more fine
@@ -392,11 +379,11 @@
 
         Misc. fixes to the Solaris build.
 
-        Added new flags --print-cumulative-gc-stat and --trace-gc-nvp.
+        Add new flags --print-cumulative-gc-stat and --trace-gc-nvp.
 
-        Added filtering of CPU profiles by security context.
+        Add filtering of CPU profiles by security context.
 
-        Fixed crash bug on ARM when running without VFP2 or VFP3.
+        Fix crash bug on ARM when running without VFP2 or VFP3.
 
         Incremental performance improvements in all backends.
 
@@ -408,12 +395,12 @@
 
 2010-05-10: Version 2.2.9
 
-        Allowed Object.create to be called with a function (issue 697).
+        Allow Object.create to be called with a function (issue 697).
 
         Fixed bug with Date.parse returning a non-NaN value when called on a
         non date string (issue 696).
 
-        Allowed unaligned memory accesses on ARM targets that support it (by
+        Allow unaligned memory accesses on ARM targets that support it (by
         Subrato K De of CodeAurora <subratokde@codeaurora.org>).
 
         C++ API for retrieving JavaScript stack trace information.
@@ -567,9 +554,9 @@
 
 2010-02-23: Version 2.1.2
 
-        Fixed a crash bug caused by wrong assert.
+        Fix a crash bug caused by wrong assert.
 
-        Fixed a bug with register names on 64-bit V8 (issue 615).
+        Fix a bug with register names on 64-bit V8 (issue 615).
 
         Performance improvements on all platforms.
 
@@ -605,13 +592,13 @@
         Solaris support by Erich Ocean <erich.ocean@me.com> and Ryan Dahl
         <ry@tinyclouds.org>.
 
-        Fixed a bug that Math.round() returns incorrect results for huge
+        Fix a bug that Math.round() returns incorrect results for huge
         integers.
 
-        Fixed enumeration order for objects created from some constructor
+        Fix enumeration order for objects created from some constructor
         functions (isue http://crbug.com/3867).
 
-        Fixed arithmetic on some integer constants (issue 580).
+        Fix arithmetic on some integer constants (issue 580).
 
         Numerous performance improvements including porting of previous IA-32
         optimizations to x64 and ARM architectures.
@@ -750,11 +737,11 @@
 
         X64: Convert smis to holding 32 bits of payload.
 
-        Introduced v8::Integer::NewFromUnsigned method.
+        Introduce v8::Integer::NewFromUnsigned method.
 
-        Added missing null check in Context::GetCurrent.
+        Add missing null check in Context::GetCurrent.
 
-        Added trim, trimLeft and trimRight methods to String
+        Add trim, trimLeft and trimRight methods to String
         Patch by Jan de Mooij <jandemooij@gmail.com>
 
         Implement ES5 Array.isArray
@@ -762,15 +749,14 @@
 
         Skip access checks for hidden properties.
 
-        Added String::Concat(Handle<String> left, Handle<String> right) to the
-        V8 API.
+        Add String::Concat(Handle<String> left, Handle<String> right) to the V8 API.
 
-        Fixed GYP-based builds of V8.
+        Fix GYP-based builds of V8.
 
 
 2009-10-07: Version 1.3.15
 
-        Expanded the maximum size of the code space to 512MB for 64-bit mode.
+        Expand the maximum size of the code space to 512MB for 64-bit mode.
 
         Fixed a crash bug happening when starting profiling (issue
         http://crbug.com/23768).
@@ -782,10 +768,10 @@
         located on the object or in the prototype chain skipping any
         interceptors.
 
-        Fixed the stack limits setting API to work correctly with threads. The
+        Fix the stack limits setting API to work correctly with threads. The
         stack limit now needs to be set to each thread thich is used with V8.
 
-        Removed the high-priority flag from IdleNotification()
+        Remove the high-priority flag from IdleNotification()
 
         Ensure V8 is initialized before locking and unlocking threads.
 
@@ -853,7 +839,7 @@
         Implemented missing pieces of debugger infrastructure on ARM. The
         debugger is now fully functional on ARM.
 
-        Made 'hidden' the default visibility for gcc.
+        Make 'hidden' the default visibility for gcc.
 
 
 2009-09-09: Version 1.3.10
@@ -908,9 +894,9 @@
 
 2009-08-21: Version 1.3.6
 
-        Added support for forceful termination of JavaScript execution.
+        Add support for forceful termination of JavaScript execution.
 
-        Added low memory notification to the API. The embedding host can signal
+        Add low memory notification to the API. The embedding host can signal
         a low memory situation to V8.
 
         Changed the handling of global handles (persistent handles in the API
@@ -924,9 +910,9 @@
 
 2009-08-19: Version 1.3.5
 
-        Optimized initialization of some arrays in the builtins.
+        Optimize initialization of some arrays in the builtins.
 
-        Fixed mac-nm script to support filenames with spaces.
+        Fix mac-nm script to support filenames with spaces.
 
         Support for using the V8 profiler when V8 is embedded in a Windows DLL.
 
@@ -939,7 +925,7 @@
 
         Added API for getting object mirrors.
 
-        Made sure that SSE3 instructions are used whenever possible even when
+        Make sure that SSE3 instructions are used whenever possible even when
         running off a snapshot generated without using SSE3 instructions.
 
         Tweaked the handling of the initial size and growth policy of the heap.
@@ -961,20 +947,20 @@
 
 2009-08-12: Version 1.3.3
 
-        Fixed issue 417: incorrect %t placeholder expansion.
+        Fix issue 417: incorrect %t placeholder expansion.
 
-        Added .gitignore file similar to Chromium's one.
+        Add .gitignore file similar to Chromium's one.
 
-        Fixed SConstruct file to build with new logging code for Android.
+        Fix SConstruct file to build with new logging code for Android.
 
         API: added function to find instance of template in prototype
         chain. Inlined Object::IsInstanceOf.
 
         Land change to notify valgrind when we modify code on x86.
 
-        Added api call to determine whether a string can be externalized.
+        Add api call to determine whether a string can be externalized.
 
-        Added a write() command to d8.
+        Add a write() command to d8.
 
 
 2009-08-05: Version 1.3.2
@@ -1257,7 +1243,7 @@
 
         Added EcmaScript 5 JSON object.
 
-        Fixed bug in preemption support on ARM.
+        Fix bug in preemption support on ARM.
 
 
 2009-04-23: Version 1.2.0
deps/v8/benchmarks/README.txt (vendored): 4 changes
@@ -70,9 +70,7 @@ Removed dead code from the RayTrace benchmark and fixed a couple of
 typos in the DeltaBlue implementation. Changed the Splay benchmark to
 avoid converting the same numeric key to a string over and over again
 and to avoid inserting and removing the same element repeatedly thus
-increasing pressure on the memory subsystem. Changed the RegExp
-benchmark to exercise the regular expression engine on different
-input strings.
+increasing pressure on the memory subsystem.
 
 Furthermore, the benchmark runner was changed to run the benchmarks
 for at least a few times to stabilize the reported numbers on slower
deps/v8/benchmarks/regexp.js (vendored): 562 changes
@@ -1,4 +1,4 @@
-// Copyright 2010 the V8 project authors. All rights reserved.
+// Copyright 2009 the V8 project authors. All rights reserved.
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions are
 // met:
@@ -25,51 +25,21 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-// Automatically generated on 2009-01-30. Manually updated on 2010-09-17.
+// Automatically generated on 2009-01-30.
 
 // This benchmark is generated by loading 50 of the most popular pages
 // on the web and logging all regexp operations performed. Each
 // operation is given a weight that is calculated from an estimate of
 // the popularity of the pages where it occurs and the number of times
-// it is executed while loading each page. Furthermore the literal
+// it is executed while loading each page. Finally the literal
 // letters in the data are encoded using ROT13 in a way that does not
-// affect how the regexps match their input. Finally the strings are
-// scrambled to exercise the regexp engine on different input strings.
+// affect how the regexps match their input.
 
-
-var RegExp = new BenchmarkSuite('RegExp', 910985, [
-  new Benchmark("RegExp", RegExpRun, RegExpSetup, RegExpTearDown)
+var RegRxp = new BenchmarkSuite('RegExp', 910985, [
+  new Benchmark("RegExp", runRegExpBenchmark)
 ]);
 
-var regExpBenchmark = null;
-
-function RegExpSetup() {
-  regExpBenchmark = new RegExpBenchmark();
-  RegExpRun(); // run once to get system initialized
-}
-
-function RegExpRun() {
-  regExpBenchmark.run();
-}
-
-function RegExpTearDown() {
-  regExpBenchmark = null;
-}
-
-// Returns an array of n different variants of the input string str.
-// The variants are computed by randomly rotating one random
-// character.
-function computeInputVariants(str, n) {
-  var variants = [ str ];
-  for (var i = 1; i < n; i++) {
-    var pos = Math.floor(Math.random() * str.length);
-    var chr = String.fromCharCode((str.charCodeAt(pos) + Math.floor(Math.random() * 128)) % 128);
-    variants[i] = str.substring(0, pos) + chr + str.substring(pos + 1, str.length);
-  }
-  return variants;
-}
-
-function RegExpBenchmark() {
+function runRegExpBenchmark() {
   var re0 = /^ba/;
   var re1 = /(((\w+):\/\/)([^\/:]*)(:(\d+))?)?([^#?]*)(\?([^#]*))?(#(.*))?/;
   var re2 = /^\s*|\s*$/g;
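The comment block in the diff above says the benchmark's literal letters are ROT13-encoded. A minimal decoder (illustrative only, not part of the benchmark) shows the obfuscated strings used below are ordinary web data:

```javascript
// ROT13 rotates each letter by 13 positions and is its own inverse;
// non-letter characters pass through unchanged.
function rot13(s) {
  return s.replace(/[a-zA-Z]/g, function (c) {
    var base = c <= 'Z' ? 65 : 97;
    return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
  });
}

console.log(rot13('uggc://jjj.snprobbx.pbz/'));  // http://www.facebook.com/
console.log(rot13(rot13('pyvpx')) === 'pyvpx');  // true: self-inverse
```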
@@ -89,105 +59,77 @@ function RegExpBenchmark() {
   var re14 = /\s+/g;
   var re15 = /^\s*(\S*(\s+\S+)*)\s*$/;
   var re16 = /(-[a-z])/i;
 
-  var s0 = computeInputVariants('pyvpx', 6511);
-  var s1 = computeInputVariants('uggc://jjj.snprobbx.pbz/ybtva.cuc', 1844);
-  var s2 = computeInputVariants('QBZPbageby_cynprubyqre', 739);
-  var s3 = computeInputVariants('uggc://jjj.snprobbx.pbz/', 598);
-  var s4 = computeInputVariants('uggc://jjj.snprobbx.pbz/fepu.cuc', 454);
-  var s5 = computeInputVariants('qqqq, ZZZ q, llll', 352);
-  var s6 = computeInputVariants('vachggrkg QBZPbageby_cynprubyqre', 312);
-  var s7 = computeInputVariants('/ZlFcnprUbzrcntr/Vaqrk-FvgrUbzr,10000000', 282);
-  var s8 = computeInputVariants('vachggrkg', 177);
-  var s9 = computeInputVariants('528.9', 170);
-  var s10 = computeInputVariants('528', 170);
-  var s11 = computeInputVariants('VCPhygher=ra-HF', 156);
-  var s12 = computeInputVariants('CersreerqPhygher=ra-HF', 156);
-  var s13 = computeInputVariants('xrlcerff', 144);
-  var s14 = computeInputVariants('521', 139);
-  var s15 = computeInputVariants(str0, 139);
-  var s16 = computeInputVariants('qvi .so_zrah', 137);
-  var s17 = computeInputVariants('qvi.so_zrah', 137);
-  var s18 = computeInputVariants('uvqqra_ryrz', 117);
-  var s19 = computeInputVariants('sevraqfgre_naba=nvq%3Qn6ss9p85n868ro9s059pn854735956o3%26ers%3Q%26df%3Q%26vpgl%3QHF', 95);
-  var s20 = computeInputVariants('uggc://ubzr.zlfcnpr.pbz/vaqrk.psz', 93);
-  var s21 = computeInputVariants(str1, 92);
-  var s22 = computeInputVariants('svefg', 85);
-  var s23 = computeInputVariants('uggc://cebsvyr.zlfcnpr.pbz/vaqrk.psz', 85);
-  var s24 = computeInputVariants('ynfg', 85);
-  var s25 = computeInputVariants('qvfcynl', 85);
 
   function runBlock0() {
     for (var i = 0; i < 6511; i++) {
-      re0.exec(s0[i]);
+      re0.exec('pyvpx');
     }
     for (var i = 0; i < 1844; i++) {
-      re1.exec(s1[i]);
+      re1.exec('uggc://jjj.snprobbx.pbz/ybtva.cuc');
     }
     for (var i = 0; i < 739; i++) {
-      s2[i].replace(re2, '');
+      'QBZPbageby_cynprubyqre'.replace(re2, '');
     }
     for (var i = 0; i < 598; i++) {
-      re1.exec(s3[i]);
+      re1.exec('uggc://jjj.snprobbx.pbz/');
     }
     for (var i = 0; i < 454; i++) {
-      re1.exec(s4[i]);
+      re1.exec('uggc://jjj.snprobbx.pbz/fepu.cuc');
     }
     for (var i = 0; i < 352; i++) {
-      /qqqq|qqq|qq|q|ZZZZ|ZZZ|ZZ|Z|llll|ll|l|uu|u|UU|U|zz|z|ff|f|gg|g|sss|ss|s|mmm|mm|m/g.exec(s5[i]);
+      /qqqq|qqq|qq|q|ZZZZ|ZZZ|ZZ|Z|llll|ll|l|uu|u|UU|U|zz|z|ff|f|gg|g|sss|ss|s|mmm|mm|m/g.exec('qqqq, ZZZ q, llll');
     }
     for (var i = 0; i < 312; i++) {
-      re3.exec(s6[i]);
+      re3.exec('vachggrkg QBZPbageby_cynprubyqre');
     }
     for (var i = 0; i < 282; i++) {
-      re4.exec(s7[i]);
+      re4.exec('/ZlFcnprUbzrcntr/Vaqrk-FvgrUbzr,10000000');
     }
     for (var i = 0; i < 177; i++) {
-      s8[i].replace(re5, '');
+      'vachggrkg'.replace(re5, '');
     }
     for (var i = 0; i < 170; i++) {
-      s9[i].replace(re6, '');
-      re7.exec(s10[i]);
+      '528.9'.replace(re6, '');
+      re7.exec('528');
     }
     for (var i = 0; i < 156; i++) {
-      re8.exec(s11[i]);
-      re8.exec(s12[i]);
+      re8.exec('VCPhygher=ra-HF');
+      re8.exec('CersreerqPhygher=ra-HF');
     }
     for (var i = 0; i < 144; i++) {
-      re0.exec(s13[i]);
+      re0.exec('xrlcerff');
    }
     for (var i = 0; i < 139; i++) {
-      s14[i].replace(re6, '');
-      re7.exec(s14[i]);
+      '521'.replace(re6, '');
+      re7.exec('521');
       re9.exec('');
-      /JroXvg\/(\S+)/.exec(s15[i]);
+      /JroXvg\/(\S+)/.exec(str0);
     }
     for (var i = 0; i < 137; i++) {
-      s16[i].replace(re10, '');
-      s16[i].replace(/\[/g, '');
-      s17[i].replace(re11, '');
+      'qvi .so_zrah'.replace(re10, '');
+      'qvi .so_zrah'.replace(/\[/g, '');
+      'qvi.so_zrah'.replace(re11, '');
     }
     for (var i = 0; i < 117; i++) {
-      s18[i].replace(re2, '');
+      'uvqqra_ryrz'.replace(re2, '');
     }
     for (var i = 0; i < 95; i++) {
-      /(?:^|;)\s*sevraqfgre_ynat=([^;]*)/.exec(s19[i]);
+      /(?:^|;)\s*sevraqfgre_ynat=([^;]*)/.exec('sevraqfgre_naba=nvq%3Qn6ss9p85n868ro9s059pn854735956o3%26ers%3Q%26df%3Q%26vpgl%3QHF');
     }
     for (var i = 0; i < 93; i++) {
-      s20[i].replace(re12, '');
-      re13.exec(s20[i]);
+      'uggc://ubzr.zlfcnpr.pbz/vaqrk.psz'.replace(re12, '');
+      re13.exec('uggc://ubzr.zlfcnpr.pbz/vaqrk.psz');
     }
     for (var i = 0; i < 92; i++) {
-      s21[i].replace(/([a-zA-Z]|\s)+/, '');
+      str1.replace(/([a-zA-Z]|\s)+/, '');
     }
     for (var i = 0; i < 85; i++) {
-      s22[i].replace(re14, '');
-      s22[i].replace(re15, '');
-      s23[i].replace(re12, '');
-      s24[i].replace(re14, '');
-      s24[i].replace(re15, '');
-      re16.exec(s25[i]);
-      re13.exec(s23[i]);
+      'svefg'.replace(re14, '');
+      'svefg'.replace(re15, '');
+      'uggc://cebsvyr.zlfcnpr.pbz/vaqrk.psz'.replace(re12, '');
+      'ynfg'.replace(re14, '');
+      'ynfg'.replace(re15, '');
+      re16.exec('qvfcynl');
+      re13.exec('uggc://cebsvyr.zlfcnpr.pbz/vaqrk.psz');
     }
   }
   var re17 = /(^|[^\\])\"\\\/Qngr\((-?[0-9]+)\)\\\/\"/g;
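The computeInputVariants helper removed in the hunk above derives n strings from one input by rewriting a single randomly chosen character, which is how the updated benchmark scrambled its inputs. A deterministic sketch of the same idea (hypothetical function name; fixed positions and offsets instead of Math.random, so the output is reproducible):

```javascript
// Sketch of the removed computeInputVariants: each variant differs from
// the input string in exactly one position. The position and replacement
// character are derived from the loop index here, not Math.random as in
// the original, so repeated runs produce identical variants.
function computeVariantsDeterministic(str, n) {
  var variants = [str];
  for (var i = 1; i < n; i++) {
    var pos = i % str.length;
    var chr = String.fromCharCode((str.charCodeAt(pos) + i) % 128);
    variants[i] = str.substring(0, pos) + chr + str.substring(pos + 1);
  }
  return variants;
}

console.log(computeVariantsDeterministic('pyvpx', 3));  // [ 'pyvpx', 'pzvpx', 'pyxpx' ]
```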
@ -203,98 +145,64 @@ function RegExpBenchmark() {
|
||||
var str7 = ';;jvaqbj.IjPurpxZbhfrCbfvgvbaNQ_VQ=shapgvba(r){vs(!r)ine r=jvaqbj.rirag;ine c=-1;vs(d1)c=d1.EbyybssCnary;ine bo=IjTrgBow("IjCnayNQ_VQ_"+c);vs(bo&&bo.fglyr.ivfvovyvgl=="ivfvoyr"){ine fns=IjFns?8:0;ine pheK=r.pyvragK+IjBOFpe("U")+fns,pheL=r.pyvragL+IjBOFpe("I")+fns;ine y=IjBOEC(NQ_VQ,bo,"Y"),g=IjBOEC(NQ_VQ,bo,"G");ine e=y+d1.Cnaryf[c].Jvqgu,o=g+d1.Cnaryf[c].Urvtug;vs((pheK<y)||(pheK>e)||(pheL<g)||(pheL>o)){vs(jvaqbj.IjBaEbyybssNQ_VQ)IjBaEbyybssNQ_VQ(c);ryfr IjPybfrNq(NQ_VQ,c,gehr,"");}ryfr erghea;}IjPnapryZbhfrYvfgrareNQ_VQ();};;jvaqbj.IjFrgEbyybssCnaryNQ_VQ=shapgvba(c){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;c=IjTc(NQ_VQ,c);vs(d1&&d1.EbyybssCnary>-1)IjPnapryZbhfrYvfgrareNQ_VQ();vs(d1)d1.EbyybssCnary=c;gel{vs(q.nqqRiragYvfgrare)q.nqqRiragYvfgrare(z,s,snyfr);ryfr vs(q.nggnpuRirag)q.nggnpuRirag("ba"+z,s);}pngpu(r){}};;jvaqbj.IjPnapryZbhfrYvfgrareNQ_VQ=shapgvba(){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;vs(d1)d1.EbyybssCnary=-1;gel{vs(q.erzbirRiragYvfgrare)q.erzbirRiragYvfgrare(z,s,snyfr);ryfr vs(q.qrgnpuRirag)q.qrgnpuRirag("ba"+z,s);}pngpu(r){}};;d1.IjTc=d2(n,c){ine nq=d1;vs(vfAnA(c)){sbe(ine v=0;v<nq.Cnaryf.yratgu;v++)vs(nq.Cnaryf[v].Anzr==c)erghea v;erghea 0;}erghea c;};;d1.IjTpy=d2(n,c,p){ine cn=d1.Cnaryf[IjTc(n,c)];vs(!cn)erghea 0;vs(vfAnA(p)){sbe(ine v=0;v<cn.Pyvpxguehf.yratgu;v++)vs(cn.Pyvpxguehf[v].Anzr==p)erghea v;erghea 0;}erghea p;};;d1.IjGenpr=d2(n,f){gel{vs(jvaqbj["Ij"+"QtQ"])jvaqbj["Ij"+"QtQ"](n,1,f);}pngpu(r){}};;d1.IjYvzvg1=d2(n,f){ine nq=d1,vh=f.fcyvg("/");sbe(ine v=0,p=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.FzV.yratgu>0)nq.FzV+="/";nq.FzV+=vh[v];nq.FtZ[nq.FtZ.yratgu]=snyfr;}}};;d1.IjYvzvg0=d2(n,f){ine nq=d1,vh=f.fcyvg("/");sbe(ine 
v=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.OvC.yratgu>0)nq.OvC+="/";nq.OvC+=vh[v];}}};;d1.IjRVST=d2(n,c){jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]=IjTrgBow("IjCnayNQ_VQ_"+c+"_Bow");vs(jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]==ahyy)frgGvzrbhg("IjRVST(NQ_VQ,"+c+")",d1.rvsg);};;d1.IjNavzSHC=d2(n,c){ine nq=d1;vs(c>nq.Cnaryf.yratgu)erghea;ine cna=nq.Cnaryf[c],nn=gehr,on=gehr,yn=gehr,en=gehr,cn=nq.Cnaryf[0],sf=nq.ShF,j=cn.Jvqgu,u=cn.Urvtug;vs(j=="100%"){j=sf;en=snyfr;yn=snyfr;}vs(u=="100%"){u=sf;nn=snyfr;on=snyfr;}vs(cn.YnY=="Y")yn=snyfr;vs(cn.YnY=="E")en=snyfr;vs(cn.GnY=="G")nn=snyfr;vs(cn.GnY=="O")on=snyfr;ine k=0,l=0;fjvgpu(nq.NshP%8){pnfr 0:oernx;pnfr 1:vs(nn)l=-sf;oernx;pnfr 2:k=j-sf;oernx;pnfr 3:vs(en)k=j;oernx;pnfr 4:k=j-sf;l=u-sf;oernx;pnfr 5:k=j-sf;vs(on)l=u;oernx;pnfr 6:l=u-sf;oernx;pnfr 7:vs(yn)k=-sf;l=u-sf;oernx;}vs(nq.NshP++ <nq.NshG)frgGvzrbhg(("IjNavzSHC(NQ_VQ,"+c+")"),nq.NshC);ryfr{k=-1000;l=k;}cna.YrsgBssfrg=k;cna.GbcBssfrg=l;IjNhErcb(n,c);};;d1.IjTrgErnyCbfvgvba=d2(n,b,j){erghea IjBOEC.nccyl(guvf,nethzragf);};;d1.IjPnapryGvzrbhg=d2(n,c){c=IjTc(n,c);ine cay=d1.Cnaryf[c];vs(cay&&cay.UgU!=""){pyrneGvzrbhg(cay.UgU);}};;d1.IjPnapryNyyGvzrbhgf=d2(n){vs(d1.YbpxGvzrbhgPunatrf)erghea;sbe(ine c=0;c<d1.bac;c++)IjPnapryGvzrbhg(n,c);};;d1.IjFgnegGvzrbhg=d2(n,c,bG){c=IjTc(n,c);ine cay=d1.Cnaryf[c];vs(cay&&((cay.UvqrGvzrbhgInyhr>0)||(nethzragf.yratgu==3&&bG>0))){pyrneGvzrbhg(cay.UgU);cay.UgU=frgGvzrbhg(cay.UvqrNpgvba,(nethzragf.yratgu==3?bG:cay.UvqrGvzrbhgInyhr));}};;d1.IjErfrgGvzrbhg=d2(n,c,bG){c=IjTc(n,c);IjPnapryGvzrbhg(n,c);riny("IjFgnegGvzrbhg(NQ_VQ,c"+(nethzragf.yratgu==3?",bG":"")+")");};;d1.IjErfrgNyyGvzrbhgf=d2(n){sbe(ine c=0;c<d1.bac;c++)IjErfrgGvzrbhg(n,c);};;d1.IjQrgnpure=d2(n,rig,sap){gel{vs(IjQVR5)riny("jvaqbj.qrgnpuRirag(\'ba"+rig+"\',"+sap+"NQ_VQ)");ryfr vs(!IjQVRZnp)riny("jvaqbj.erzbirRiragYvfgrare(\'"+rig+"\',"+sap+"NQ_VQ,snyfr)");}pngpu(r){}};;d1.IjPyrnaHc=d2(n){IjCvat(n,"G");ine nq=d1;sbe(ine 
v=0;v<nq.Cnaryf.yratgu;v++){IjUvqrCnary(n,v,gehr);}gel{IjTrgBow(nq.gya).vaareUGZY="";}pngpu(r){}vs(nq.gya!=nq.gya2)gel{IjTrgBow(nq.gya2).vaareUGZY="";}pngpu(r){}gel{d1=ahyy;}pngpu(r){}gel{IjQrgnpure(n,"haybnq","IjHayNQ_VQ");}pngpu(r){}gel{jvaqbj.IjHayNQ_VQ=ahyy;}pngpu(r){}gel{IjQrgnpure(n,"fpebyy","IjFeNQ_VQ");}pngpu(r){}gel{jvaqbj.IjFeNQ_VQ=ahyy;}pngpu(r){}gel{IjQrgnpure(n,"erfvmr","IjEmNQ_VQ");}pngpu(r){}gel{jvaqbj.IjEmNQ_VQ=ahyy;}pngpu(r){}gel{IjQrgnpure(n';
|
||||
var str8 = ';;jvaqbj.IjPurpxZbhfrCbfvgvbaNQ_VQ=shapgvba(r){vs(!r)ine r=jvaqbj.rirag;ine c=-1;vs(jvaqbj.IjNqNQ_VQ)c=jvaqbj.IjNqNQ_VQ.EbyybssCnary;ine bo=IjTrgBow("IjCnayNQ_VQ_"+c);vs(bo&&bo.fglyr.ivfvovyvgl=="ivfvoyr"){ine fns=IjFns?8:0;ine pheK=r.pyvragK+IjBOFpe("U")+fns,pheL=r.pyvragL+IjBOFpe("I")+fns;ine y=IjBOEC(NQ_VQ,bo,"Y"),g=IjBOEC(NQ_VQ,bo,"G");ine e=y+jvaqbj.IjNqNQ_VQ.Cnaryf[c].Jvqgu,o=g+jvaqbj.IjNqNQ_VQ.Cnaryf[c].Urvtug;vs((pheK<y)||(pheK>e)||(pheL<g)||(pheL>o)){vs(jvaqbj.IjBaEbyybssNQ_VQ)IjBaEbyybssNQ_VQ(c);ryfr IjPybfrNq(NQ_VQ,c,gehr,"");}ryfr erghea;}IjPnapryZbhfrYvfgrareNQ_VQ();};;jvaqbj.IjFrgEbyybssCnaryNQ_VQ=shapgvba(c){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;c=IjTc(NQ_VQ,c);vs(jvaqbj.IjNqNQ_VQ&&jvaqbj.IjNqNQ_VQ.EbyybssCnary>-1)IjPnapryZbhfrYvfgrareNQ_VQ();vs(jvaqbj.IjNqNQ_VQ)jvaqbj.IjNqNQ_VQ.EbyybssCnary=c;gel{vs(q.nqqRiragYvfgrare)q.nqqRiragYvfgrare(z,s,snyfr);ryfr vs(q.nggnpuRirag)q.nggnpuRirag("ba"+z,s);}pngpu(r){}};;jvaqbj.IjPnapryZbhfrYvfgrareNQ_VQ=shapgvba(){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;vs(jvaqbj.IjNqNQ_VQ)jvaqbj.IjNqNQ_VQ.EbyybssCnary=-1;gel{vs(q.erzbirRiragYvfgrare)q.erzbirRiragYvfgrare(z,s,snyfr);ryfr vs(q.qrgnpuRirag)q.qrgnpuRirag("ba"+z,s);}pngpu(r){}};;jvaqbj.IjNqNQ_VQ.IjTc=shapgvba(n,c){ine nq=jvaqbj.IjNqNQ_VQ;vs(vfAnA(c)){sbe(ine v=0;v<nq.Cnaryf.yratgu;v++)vs(nq.Cnaryf[v].Anzr==c)erghea v;erghea 0;}erghea c;};;jvaqbj.IjNqNQ_VQ.IjTpy=shapgvba(n,c,p){ine cn=jvaqbj.IjNqNQ_VQ.Cnaryf[IjTc(n,c)];vs(!cn)erghea 0;vs(vfAnA(p)){sbe(ine v=0;v<cn.Pyvpxguehf.yratgu;v++)vs(cn.Pyvpxguehf[v].Anzr==p)erghea v;erghea 0;}erghea p;};;jvaqbj.IjNqNQ_VQ.IjGenpr=shapgvba(n,f){gel{vs(jvaqbj["Ij"+"QtQ"])jvaqbj["Ij"+"QtQ"](n,1,f);}pngpu(r){}};;jvaqbj.IjNqNQ_VQ.IjYvzvg1=shapgvba(n,f){ine nq=jvaqbj.IjNqNQ_VQ,vh=f.fcyvg("/");sbe(ine v=0,p=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.FzV.yratgu>0)nq.FzV+="/";nq.FzV+=vh[v];nq.FtZ[nq.FtZ.yratgu]=snyfr;}}};;jvaqbj.IjNqNQ_VQ.IjYvzvg0=shapgvba(n,f){ine 
nq=jvaqbj.IjNqNQ_VQ,vh=f.fcyvg("/");sbe(ine v=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.OvC.yratgu>0)nq.OvC+="/";nq.OvC+=vh[v];}}};;jvaqbj.IjNqNQ_VQ.IjRVST=shapgvba(n,c){jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]=IjTrgBow("IjCnayNQ_VQ_"+c+"_Bow");vs(jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]==ahyy)frgGvzrbhg("IjRVST(NQ_VQ,"+c+")",jvaqbj.IjNqNQ_VQ.rvsg);};;jvaqbj.IjNqNQ_VQ.IjNavzSHC=shapgvba(n,c){ine nq=jvaqbj.IjNqNQ_VQ;vs(c>nq.Cnaryf.yratgu)erghea;ine cna=nq.Cnaryf[c],nn=gehr,on=gehr,yn=gehr,en=gehr,cn=nq.Cnaryf[0],sf=nq.ShF,j=cn.Jvqgu,u=cn.Urvtug;vs(j=="100%"){j=sf;en=snyfr;yn=snyfr;}vs(u=="100%"){u=sf;nn=snyfr;on=snyfr;}vs(cn.YnY=="Y")yn=snyfr;vs(cn.YnY=="E")en=snyfr;vs(cn.GnY=="G")nn=snyfr;vs(cn.GnY=="O")on=snyfr;ine k=0,l=0;fjvgpu(nq.NshP%8){pnfr 0:oernx;pnfr 1:vs(nn)l=-sf;oernx;pnfr 2:k=j-sf;oernx;pnfr 3:vs(en)k=j;oernx;pnfr 4:k=j-sf;l=u-sf;oernx;pnfr 5:k=j-sf;vs(on)l=u;oernx;pnfr 6:l=u-sf;oernx;pnfr 7:vs(yn)k=-sf;l=u-sf;oernx;}vs(nq.NshP++ <nq.NshG)frgGvzrbhg(("IjNavzSHC(NQ_VQ,"+c+")"),nq.NshC);ryfr{k=-1000;l=k;}cna.YrsgBssfrg=k;cna.GbcBssfrg=l;IjNhErcb(n,c);};;jvaqbj.IjNqNQ_VQ.IjTrgErnyCbfvgvba=shapgvba(n,b,j){erghea IjBOEC.nccyl(guvf,nethzragf);};;jvaqbj.IjNqNQ_VQ.IjPnapryGvzrbhg=shapgvba(n,c){c=IjTc(n,c);ine cay=jvaqbj.IjNqNQ_VQ.Cnaryf[c];vs(cay&&cay.UgU!=""){pyrneGvzrbhg(cay.UgU);}};;jvaqbj.IjNqNQ_VQ.IjPnapryNyyGvzrbhgf=shapgvba(n){vs(jvaqbj.IjNqNQ_VQ.YbpxGvzrbhgPunatrf)erghea;sbe(ine c=0;c<jvaqbj.IjNqNQ_VQ.bac;c++)IjPnapryGvzrbhg(n,c);};;jvaqbj.IjNqNQ_VQ.IjFgnegGvzrbhg=shapgvba(n,c,bG){c=IjTc(n,c);ine cay=jvaqbj.IjNqNQ_VQ.Cnaryf[c];vs(cay&&((cay.UvqrGvzrbhgInyhr>0)||(nethzragf.yratgu==3&&bG>0))){pyrneGvzrbhg(cay.UgU);cay.UgU=frgGvzrbhg(cay.UvqrNpgvba,(nethzragf.yratgu==3?bG:cay.UvqrGvzrbhgInyhr));}};;jvaqbj.IjNqNQ_VQ.IjErfrgGvzrbhg=shapgvba(n,c,bG){c=IjTc(n,c);IjPnapryGvzrbhg(n,c);riny("IjFgnegGvzrbhg(NQ_VQ,c"+(nethzragf.yratgu==3?",bG":"")+")");};;jvaqbj.IjNqNQ_VQ.IjErfrgNyyGvzrbhgf=shapgvba(n){sbe(ine 
c=0;c<jvaqbj.IjNqNQ_VQ.bac;c++)IjErfrgGvzrbhg(n,c);};;jvaqbj.IjNqNQ_VQ.IjQrgnpure=shapgvba(n,rig,sap){gel{vs(IjQVR5)riny("jvaqbj.qrgnpuRirag(\'ba"+rig+"\',"+sap+"NQ_VQ)");ryfr vs(!IjQVRZnp)riny("jvaqbj.erzbir';
var str9 = ';;jvaqbj.IjPurpxZbhfrCbfvgvbaNQ_VQ=shapgvba(r){vs(!r)ine r=jvaqbj.rirag;ine c=-1;vs(jvaqbj.IjNqNQ_VQ)c=jvaqbj.IjNqNQ_VQ.EbyybssCnary;ine bo=IjTrgBow("IjCnayNQ_VQ_"+c);vs(bo&&bo.fglyr.ivfvovyvgl=="ivfvoyr"){ine fns=IjFns?8:0;ine pheK=r.pyvragK+IjBOFpe("U")+fns,pheL=r.pyvragL+IjBOFpe("I")+fns;ine y=IjBOEC(NQ_VQ,bo,"Y"),g=IjBOEC(NQ_VQ,bo,"G");ine e=y+jvaqbj.IjNqNQ_VQ.Cnaryf[c].Jvqgu,o=g+jvaqbj.IjNqNQ_VQ.Cnaryf[c].Urvtug;vs((pheK<y)||(pheK>e)||(pheL<g)||(pheL>o)){vs(jvaqbj.IjBaEbyybssNQ_VQ)IjBaEbyybssNQ_VQ(c);ryfr IjPybfrNq(NQ_VQ,c,gehr,"");}ryfr erghea;}IjPnapryZbhfrYvfgrareNQ_VQ();};;jvaqbj.IjFrgEbyybssCnaryNQ_VQ=shapgvba(c){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;c=IjTc(NQ_VQ,c);vs(jvaqbj.IjNqNQ_VQ&&jvaqbj.IjNqNQ_VQ.EbyybssCnary>-1)IjPnapryZbhfrYvfgrareNQ_VQ();vs(jvaqbj.IjNqNQ_VQ)jvaqbj.IjNqNQ_VQ.EbyybssCnary=c;gel{vs(q.nqqRiragYvfgrare)q.nqqRiragYvfgrare(z,s,snyfr);ryfr vs(q.nggnpuRirag)q.nggnpuRirag("ba"+z,s);}pngpu(r){}};;jvaqbj.IjPnapryZbhfrYvfgrareNQ_VQ=shapgvba(){ine z="zbhfrzbir",q=qbphzrag,s=IjPurpxZbhfrCbfvgvbaNQ_VQ;vs(jvaqbj.IjNqNQ_VQ)jvaqbj.IjNqNQ_VQ.EbyybssCnary=-1;gel{vs(q.erzbirRiragYvfgrare)q.erzbirRiragYvfgrare(z,s,snyfr);ryfr vs(q.qrgnpuRirag)q.qrgnpuRirag("ba"+z,s);}pngpu(r){}};;jvaqbj.IjNqNQ_VQ.IjTc=d2(n,c){ine nq=jvaqbj.IjNqNQ_VQ;vs(vfAnA(c)){sbe(ine v=0;v<nq.Cnaryf.yratgu;v++)vs(nq.Cnaryf[v].Anzr==c)erghea v;erghea 0;}erghea c;};;jvaqbj.IjNqNQ_VQ.IjTpy=d2(n,c,p){ine cn=jvaqbj.IjNqNQ_VQ.Cnaryf[IjTc(n,c)];vs(!cn)erghea 0;vs(vfAnA(p)){sbe(ine v=0;v<cn.Pyvpxguehf.yratgu;v++)vs(cn.Pyvpxguehf[v].Anzr==p)erghea v;erghea 0;}erghea p;};;jvaqbj.IjNqNQ_VQ.IjGenpr=d2(n,f){gel{vs(jvaqbj["Ij"+"QtQ"])jvaqbj["Ij"+"QtQ"](n,1,f);}pngpu(r){}};;jvaqbj.IjNqNQ_VQ.IjYvzvg1=d2(n,f){ine nq=jvaqbj.IjNqNQ_VQ,vh=f.fcyvg("/");sbe(ine v=0,p=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.FzV.yratgu>0)nq.FzV+="/";nq.FzV+=vh[v];nq.FtZ[nq.FtZ.yratgu]=snyfr;}}};;jvaqbj.IjNqNQ_VQ.IjYvzvg0=d2(n,f){ine nq=jvaqbj.IjNqNQ_VQ,vh=f.fcyvg("/");sbe(ine 
v=0;v<vh.yratgu;v++){vs(vh[v].yratgu>0){vs(nq.OvC.yratgu>0)nq.OvC+="/";nq.OvC+=vh[v];}}};;jvaqbj.IjNqNQ_VQ.IjRVST=d2(n,c){jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]=IjTrgBow("IjCnayNQ_VQ_"+c+"_Bow");vs(jvaqbj["IjCnayNQ_VQ_"+c+"_Bow"]==ahyy)frgGvzrbhg("IjRVST(NQ_VQ,"+c+")",jvaqbj.IjNqNQ_VQ.rvsg);};;jvaqbj.IjNqNQ_VQ.IjNavzSHC=d2(n,c){ine nq=jvaqbj.IjNqNQ_VQ;vs(c>nq.Cnaryf.yratgu)erghea;ine cna=nq.Cnaryf[c],nn=gehr,on=gehr,yn=gehr,en=gehr,cn=nq.Cnaryf[0],sf=nq.ShF,j=cn.Jvqgu,u=cn.Urvtug;vs(j=="100%"){j=sf;en=snyfr;yn=snyfr;}vs(u=="100%"){u=sf;nn=snyfr;on=snyfr;}vs(cn.YnY=="Y")yn=snyfr;vs(cn.YnY=="E")en=snyfr;vs(cn.GnY=="G")nn=snyfr;vs(cn.GnY=="O")on=snyfr;ine k=0,l=0;fjvgpu(nq.NshP%8){pnfr 0:oernx;pnfr 1:vs(nn)l=-sf;oernx;pnfr 2:k=j-sf;oernx;pnfr 3:vs(en)k=j;oernx;pnfr 4:k=j-sf;l=u-sf;oernx;pnfr 5:k=j-sf;vs(on)l=u;oernx;pnfr 6:l=u-sf;oernx;pnfr 7:vs(yn)k=-sf;l=u-sf;oernx;}vs(nq.NshP++ <nq.NshG)frgGvzrbhg(("IjNavzSHC(NQ_VQ,"+c+")"),nq.NshC);ryfr{k=-1000;l=k;}cna.YrsgBssfrg=k;cna.GbcBssfrg=l;IjNhErcb(n,c);};;jvaqbj.IjNqNQ_VQ.IjTrgErnyCbfvgvba=d2(n,b,j){erghea IjBOEC.nccyl(guvf,nethzragf);};;jvaqbj.IjNqNQ_VQ.IjPnapryGvzrbhg=d2(n,c){c=IjTc(n,c);ine cay=jvaqbj.IjNqNQ_VQ.Cnaryf[c];vs(cay&&cay.UgU!=""){pyrneGvzrbhg(cay.UgU);}};;jvaqbj.IjNqNQ_VQ.IjPnapryNyyGvzrbhgf=d2(n){vs(jvaqbj.IjNqNQ_VQ.YbpxGvzrbhgPunatrf)erghea;sbe(ine c=0;c<jvaqbj.IjNqNQ_VQ.bac;c++)IjPnapryGvzrbhg(n,c);};;jvaqbj.IjNqNQ_VQ.IjFgnegGvzrbhg=d2(n,c,bG){c=IjTc(n,c);ine cay=jvaqbj.IjNqNQ_VQ.Cnaryf[c];vs(cay&&((cay.UvqrGvzrbhgInyhr>0)||(nethzragf.yratgu==3&&bG>0))){pyrneGvzrbhg(cay.UgU);cay.UgU=frgGvzrbhg(cay.UvqrNpgvba,(nethzragf.yratgu==3?bG:cay.UvqrGvzrbhgInyhr));}};;jvaqbj.IjNqNQ_VQ.IjErfrgGvzrbhg=d2(n,c,bG){c=IjTc(n,c);IjPnapryGvzrbhg(n,c);riny("IjFgnegGvzrbhg(NQ_VQ,c"+(nethzragf.yratgu==3?",bG":"")+")");};;jvaqbj.IjNqNQ_VQ.IjErfrgNyyGvzrbhgf=d2(n){sbe(ine 
c=0;c<jvaqbj.IjNqNQ_VQ.bac;c++)IjErfrgGvzrbhg(n,c);};;jvaqbj.IjNqNQ_VQ.IjQrgnpure=d2(n,rig,sap){gel{vs(IjQVR5)riny("jvaqbj.qrgnpuRirag(\'ba"+rig+"\',"+sap+"NQ_VQ)");ryfr vs(!IjQVRZnp)riny("jvaqbj.erzbirRiragYvfgrare(\'"+rig+"\',"+sap+"NQ_VQ,snyfr)");}pngpu(r){}};;jvaqbj.IjNqNQ_VQ.IjPyrna';
var s26 = computeInputVariants('VC=74.125.75.1', 81);
var s27 = computeInputVariants('9.0 e115', 78);
var s28 = computeInputVariants('k',78);
var s29 = computeInputVariants(str2, 81);
var s30 = computeInputVariants(str3, 81);
var s31 = computeInputVariants('144631658', 78);
var s32 = computeInputVariants('Pbhagel=IIZ%3Q', 78);
var s33 = computeInputVariants('Pbhagel=IIZ=', 78);
var s34 = computeInputVariants('CersreerqPhygherCraqvat=', 78);
var s35 = computeInputVariants(str4, 78);
var s36 = computeInputVariants(str5, 78);
var s37 = computeInputVariants('__hgzp=144631658', 78);
var s38 = computeInputVariants('gvzrMbar=-8', 78);
var s39 = computeInputVariants('gvzrMbar=0', 78);
// var s40 = computeInputVariants(s15[i], 78);
var s41 = computeInputVariants('vachggrkg QBZPbageby_cynprubyqre', 78);
var s42 = computeInputVariants('xrlqbja', 78);
var s43 = computeInputVariants('xrlhc', 78);
var s44 = computeInputVariants('uggc://zrffntvat.zlfcnpr.pbz/vaqrk.psz', 77);
var s45 = computeInputVariants('FrffvbaFgbentr=%7O%22GnoThvq%22%3N%7O%22thvq%22%3N1231367125017%7Q%7Q', 73);
var s46 = computeInputVariants(str6, 72);
var s47 = computeInputVariants('3.5.0.0', 70);
var s48 = computeInputVariants(str7, 70);
var s49 = computeInputVariants(str8, 70);
var s50 = computeInputVariants(str9, 70);
var s51 = computeInputVariants('NI%3Q1_CI%3Q1_PI%3Q1_EI%3Q1_HI%3Q1_HP%3Q1_IC%3Q0.0.0.0_IH%3Q0', 70);
var s52 = computeInputVariants('svz_zlfcnpr_ubzrcntr_abgybttrqva,svz_zlfcnpr_aba_HTP,svz_zlfcnpr_havgrq-fgngrf', 70);
var s53 = computeInputVariants('ybnqvat', 70);
var s54 = computeInputVariants('#', 68);
var s55 = computeInputVariants('ybnqrq', 68);
var s56 = computeInputVariants('pbybe', 49);
var s57 = computeInputVariants('uggc://sevraqf.zlfcnpr.pbz/vaqrk.psz', 44);

function runBlock1() {
for (var i = 0; i < 81; i++) {
re8.exec(s26[i]);
re8.exec('VC=74.125.75.1');
}
for (var i = 0; i < 78; i++) {
s27[i].replace(/(\s)+e/, '');
s28[i].replace(/./, '');
s29[i].replace(re17, '');
s30[i].replace(re17, '');
re8.exec(s31[i]);
re8.exec(s32[i]);
re8.exec(s33[i]);
re8.exec(s34[i]);
re8.exec(s35[i]);
re8.exec(s36[i]);
re8.exec(s37[i]);
re8.exec(s38[i]);
re8.exec(s39[i]);
/Fnsnev\/(\d+\.\d+)/.exec(s15[i]);
re3.exec(s41[i]);
re0.exec(s42[i]);
re0.exec(s43[i]);
'9.0 e115'.replace(/(\s)+e/, '');
'k'.replace(/./, '');
str2.replace(re17, '');
str3.replace(re17, '');
re8.exec('144631658');
re8.exec('Pbhagel=IIZ%3Q');
re8.exec('Pbhagel=IIZ=');
re8.exec('CersreerqPhygherCraqvat=');
re8.exec(str4);
re8.exec(str5);
re8.exec('__hgzp=144631658');
re8.exec('gvzrMbar=-8');
re8.exec('gvzrMbar=0');
/Fnsnev\/(\d+\.\d+)/.exec(str0);
re3.exec('vachggrkg QBZPbageby_cynprubyqre');
re0.exec('xrlqbja');
re0.exec('xrlhc');
}
for (var i = 0; i < 77; i++) {
s44[i].replace(re12, '');
re13.exec(s44[i]);
'uggc://zrffntvat.zlfcnpr.pbz/vaqrk.psz'.replace(re12, '');
re13.exec('uggc://zrffntvat.zlfcnpr.pbz/vaqrk.psz');
}
for (var i = 0; i < 73; i++) {
s45[i].replace(re18, '');
'FrffvbaFgbentr=%7O%22GnoThvq%22%3N%7O%22thvq%22%3N1231367125017%7Q%7Q'.replace(re18, '');
}
for (var i = 0; i < 72; i++) {
re1.exec(s46[i]);
re1.exec(str6);
}
for (var i = 0; i < 71; i++) {
re19.exec('');
}
for (var i = 0; i < 70; i++) {
s47[i].replace(re11, '');
s48[i].replace(/d1/g, '');
s49[i].replace(/NQ_VQ/g, '');
s50[i].replace(/d2/g, '');
s51[i].replace(/_/g, '');
s52[i].split(re20);
re21.exec(s53[i]);
'3.5.0.0'.replace(re11, '');
str7.replace(/d1/g, '');
str8.replace(/NQ_VQ/g, '');
str9.replace(/d2/g, '');
'NI%3Q1_CI%3Q1_PI%3Q1_EI%3Q1_HI%3Q1_HP%3Q1_IC%3Q0.0.0.0_IH%3Q0'.replace(/_/g, '');
'svz_zlfcnpr_ubzrcntr_abgybttrqva,svz_zlfcnpr_aba_HTP,svz_zlfcnpr_havgrq-fgngrf'.split(re20);
re21.exec('ybnqvat');
}
for (var i = 0; i < 68; i++) {
re1.exec(s54[i]);
/(?:ZFVR.(\d+\.\d+))|(?:(?:Sversbk|TenaCnenqvfb|Vprjrnfry).(\d+\.\d+))|(?:Bcren.(\d+\.\d+))|(?:NccyrJroXvg.(\d+(?:\.\d+)?))/.exec(s15[i]);
/(Znp BF K)|(Jvaqbjf;)/.exec(s15[i]);
/Trpxb\/([0-9]+)/.exec(s15[i]);
re21.exec(s55[i]);
re1.exec('#');
/(?:ZFVR.(\d+\.\d+))|(?:(?:Sversbk|TenaCnenqvfb|Vprjrnfry).(\d+\.\d+))|(?:Bcren.(\d+\.\d+))|(?:NccyrJroXvg.(\d+(?:\.\d+)?))/.exec(str0);
/(Znp BF K)|(Jvaqbjf;)/.exec(str0);
/Trpxb\/([0-9]+)/.exec(str0);
re21.exec('ybnqrq');
}
for (var i = 0; i < 49; i++) {
re16.exec(s56[i]);
re16.exec('pbybe');
}
for (var i = 0; i < 44; i++) {
s57[i].replace(re12, '');
re13.exec(s57[i]);
'uggc://sevraqf.zlfcnpr.pbz/vaqrk.psz'.replace(re12, '');
re13.exec('uggc://sevraqf.zlfcnpr.pbz/vaqrk.psz');
}
}
var re22 = /\bso_zrah\b/;
@ -302,26 +210,15 @@ function RegExpBenchmark() {
var re24 = /uggcf?:\/\/([^\/]+\.)?snprobbx\.pbz\//;
var re25 = /"/g;
var re26 = /^([^?#]+)(?:\?([^#]*))?(#.*)?/;
var s57a = computeInputVariants('fryrpgrq', 40);
var s58 = computeInputVariants('vachggrkg uvqqra_ryrz', 40);
var s59 = computeInputVariants('vachggrkg ', 40);
var s60 = computeInputVariants('vachggrkg', 40);
var s61 = computeInputVariants('uggc://jjj.snprobbx.pbz/', 40);
var s62 = computeInputVariants('uggc://jjj.snprobbx.pbz/ybtva.cuc', 40);
var s63 = computeInputVariants('Funer guvf tnqtrg', 40);
var s64 = computeInputVariants('uggc://jjj.tbbtyr.pbz/vt/qverpgbel', 40);
var s65 = computeInputVariants('419', 40);
var s66 = computeInputVariants('gvzrfgnzc', 40);

function runBlock2() {
for (var i = 0; i < 40; i++) {
s57a[i].replace(re14, '');
s57a[i].replace(re15, '');
'fryrpgrq'.replace(re14, '');
'fryrpgrq'.replace(re15, '');
}
for (var i = 0; i < 39; i++) {
s58[i].replace(/\buvqqra_ryrz\b/g, '');
re3.exec(s59[i]);
re3.exec(s60[i]);
'vachggrkg uvqqra_ryrz'.replace(/\buvqqra_ryrz\b/g, '');
re3.exec('vachggrkg ');
re3.exec('vachggrkg');
re22.exec('HVYvaxOhggba');
re22.exec('HVYvaxOhggba_E');
re22.exec('HVYvaxOhggba_EJ');
@ -349,28 +246,28 @@ function RegExpBenchmark() {
re8.exec('s6r4579npn4rn2135s904r0s75pp1o5334p6s6pospo12696');
}
for (var i = 0; i < 32; i++) {
/puebzr/i.exec(s15[i]);
/puebzr/i.exec(str0);
}
for (var i = 0; i < 31; i++) {
s61[i].replace(re23, '');
'uggc://jjj.snprobbx.pbz/'.replace(re23, '');
re8.exec('SbeprqRkcvengvba=633669358527244818');
re8.exec('VC=66.249.85.130');
re8.exec('FrffvbaQQS2=s15q53p9n372sn76npr13o271n4s3p5r29p235746p908p58');
re8.exec('s15q53p9n372sn76npr13o271n4s3p5r29p235746p908p58');
re24.exec(s61[i]);
re24.exec('uggc://jjj.snprobbx.pbz/');
}
for (var i = 0; i < 30; i++) {
s65[i].replace(re6, '');
/(?:^|\s+)gvzrfgnzc(?:\s+|$)/.exec(s66[i]);
re7.exec(s65[i]);
'419'.replace(re6, '');
/(?:^|\s+)gvzrfgnzc(?:\s+|$)/.exec('gvzrfgnzc');
re7.exec('419');
}
for (var i = 0; i < 29; i++) {
s62[i].replace(re23, '');
'uggc://jjj.snprobbx.pbz/ybtva.cuc'.replace(re23, '');
}
for (var i = 0; i < 28; i++) {
s63[i].replace(re25, '');
s63[i].replace(re12, '');
re26.exec(s64[i]);
'Funer guvf tnqtrg'.replace(re25, '');
'Funer guvf tnqtrg'.replace(re12, '');
re26.exec('uggc://jjj.tbbtyr.pbz/vt/qverpgbel');
}
}
var re27 = /-\D/g;
@ -393,27 +290,13 @@ function RegExpBenchmark() {
var str18 = 'uggc://jjj.yrobapbva.se/yv';
var str19 = 'ZFPhygher=VC=74.125.75.1&VCPhygher=ra-HF&CersreerqPhygher=ra-HF&Pbhagel=IIZ%3Q&SbeprqRkcvengvba=633669316860113296&gvzrMbar=-8&HFEYBP=DKWyLHAiMTH9AwHjWxAcqUx9GJ91oaEunJ4tIzyyqlMQo3IhqUW5D29xMG1IHlMQo3IhqUW5GzSgMG1Iozy0MJDtH3EuqTImWxEgLHAiMTH9BQN3WxkuqTy0qJEyCGZ3YwDkBGVzGT9hM2y0qJEyCF0kZwVhZQH3APMDo3A0LJkQo2EyCGx0ZQDmWyWyM2yiox5uoJH9D0R%3Q';
var str20 = 'ZFPhygher=VC=74.125.75.1&VCPhygher=ra-HF&CersreerqPhygher=ra-HF&CersreerqPhygherCraqvat=&Pbhagel=IIZ=&SbeprqRkcvengvba=633669316860113296&gvzrMbar=0&HFEYBP=DKWyLHAiMTH9AwHjWxAcqUx9GJ91oaEunJ4tIzyyqlMQo3IhqUW5D29xMG1IHlMQo3IhqUW5GzSgMG1Iozy0MJDtH3EuqTImWxEgLHAiMTH9BQN3WxkuqTy0qJEyCGZ3YwDkBGVzGT9hM2y0qJEyCF0kZwVhZQH3APMDo3A0LJkQo2EyCGx0ZQDmWyWyM2yiox5uoJH9D0R=';
var s67 = computeInputVariants('e115', 27);
var s68 = computeInputVariants('qvfcynl', 27);
var s69 = computeInputVariants('cbfvgvba', 27);
var s70 = computeInputVariants('uggc://jjj.zlfcnpr.pbz/', 27);
var s71 = computeInputVariants('cntrivrj', 27);
var s72 = computeInputVariants('VC=74.125.75.3', 27);
var s73 = computeInputVariants('ra', 27);
var s74 = computeInputVariants(str10, 27);
var s75 = computeInputVariants(str11, 27);
var s76 = computeInputVariants(str12, 27);
var s77 = computeInputVariants(str17, 27);
var s78 = computeInputVariants(str18, 27);

function runBlock3() {
for (var i = 0; i < 27; i++) {
s67[i].replace(/[A-Za-z]/g, '');
'e115'.replace(/[A-Za-z]/g, '');
}
for (var i = 0; i < 23; i++) {
s68[i].replace(re27, '');
s69[i].replace(re27, '');
'qvfcynl'.replace(re27, '');
'cbfvgvba'.replace(re27, '');
}
for (var i = 0; i < 22; i++) {
'unaqyr'.replace(re14, '');
@ -427,23 +310,23 @@ function RegExpBenchmark() {
re28.exec('');
}
for (var i = 0; i < 21; i++) {
s70[i].replace(re12, '');
re13.exec(s70[i]);
'uggc://jjj.zlfcnpr.pbz/'.replace(re12, '');
re13.exec('uggc://jjj.zlfcnpr.pbz/');
}
for (var i = 0; i < 20; i++) {
s71[i].replace(re29, '');
s71[i].replace(re30, '');
'cntrivrj'.replace(re29, '');
'cntrivrj'.replace(re30, '');
re19.exec('ynfg');
re19.exec('ba svefg');
re8.exec(s72[i]);
re8.exec('VC=74.125.75.3');
}
for (var i = 0; i < 19; i++) {
re31.exec(s73[i]);
re31.exec('ra');
}
for (var i = 0; i < 18; i++) {
s74[i].split(re32);
s75[i].split(re32);
s76[i].replace(re33, '');
str10.split(re32);
str11.split(re32);
str12.replace(re33, '');
re8.exec('144631658.0.10.1231363570');
re8.exec('144631658.1231363570.1.1.hgzpfe=(qverpg)|hgzppa=(qverpg)|hgzpzq=(abar)');
re8.exec('144631658.3426875219718084000.1231363570.1231363570.1231363570.1');
@ -452,12 +335,12 @@ function RegExpBenchmark() {
re8.exec('__hgzn=144631658.3426875219718084000.1231363570.1231363570.1231363570.1');
re8.exec('__hgzo=144631658.0.10.1231363570');
re8.exec('__hgzm=144631658.1231363570.1.1.hgzpfe=(qverpg)|hgzppa=(qverpg)|hgzpzq=(abar)');
re34.exec(s74[i]);
re34.exec(s75[i]);
re34.exec(str10);
re34.exec(str11);
}
for (var i = 0; i < 17; i++) {
s15[i].match(/zfvr/gi);
s15[i].match(/bcren/gi);
str0.match(/zfvr/gi);
str0.match(/bcren/gi);
str15.split(re32);
str16.split(re32);
'ohggba'.replace(re14, '');
@ -472,11 +355,11 @@ function RegExpBenchmark() {
'qry'.replace(re15, '');
'uqy_zba'.replace(re14, '');
'uqy_zba'.replace(re15, '');
s77[i].replace(re33, '');
s78[i].replace(/%3P/g, '');
s78[i].replace(/%3R/g, '');
s78[i].replace(/%3q/g, '');
s78[i].replace(re35, '');
str17.replace(re33, '');
str18.replace(/%3P/g, '');
str18.replace(/%3R/g, '');
str18.replace(/%3q/g, '');
str18.replace(re35, '');
'yvaxyvfg16'.replace(re14, '');
'yvaxyvfg16'.replace(re15, '');
'zvahf'.replace(re14, '');
@ -531,25 +414,20 @@ function RegExpBenchmark() {
var re47 = /\/\xfc\/t/;
var re48 = /\W/g;
var re49 = /uers|fep|fglyr/;
var s79 = computeInputVariants(str21, 16);
var s80 = computeInputVariants(str22, 16);
var s81 = computeInputVariants(str23, 16);
var s82 = computeInputVariants(str26, 16);

function runBlock4() {
for (var i = 0; i < 16; i++) {
''.replace(/\*/g, '');
/\bnpgvir\b/.exec('npgvir');
/sversbk/i.exec(s15[i]);
/sversbk/i.exec(str0);
re36.exec('glcr');
/zfvr/i.exec(s15[i]);
/bcren/i.exec(s15[i]);
/zfvr/i.exec(str0);
/bcren/i.exec(str0);
}
for (var i = 0; i < 15; i++) {
s79[i].split(re32);
s80[i].split(re32);
str21.split(re32);
str22.split(re32);
'uggc://ohyyrgvaf.zlfcnpr.pbz/vaqrk.psz'.replace(re12, '');
s81[i].replace(re33, '');
str23.replace(re33, '');
'yv'.replace(re37, '');
'yv'.replace(re18, '');
re8.exec('144631658.0.10.1231367822');
@ -560,9 +438,9 @@ function RegExpBenchmark() {
re8.exec('__hgzn=144631658.4127520630321984500.1231367822.1231367822.1231367822.1');
re8.exec('__hgzo=144631658.0.10.1231367822');
re8.exec('__hgzm=144631658.1231367822.1.1.hgzpfe=(qverpg)|hgzppa=(qverpg)|hgzpzq=(abar)');
re34.exec(s79[i]);
re34.exec(s80[i]);
/\.([\w-]+)|\[(\w+)(?:([!*^$~|]?=)["']?(.*?)["']?)?\]|:([\w-]+)(?:\(["']?(.*?)?["']?\)|$)/g.exec(s82[i]);
re34.exec(str21);
re34.exec(str22);
/\.([\w-]+)|\[(\w+)(?:([!*^$~|]?=)["']?(.*?)["']?)?\]|:([\w-]+)(?:\(["']?(.*?)?["']?\)|$)/g.exec(str26);
re13.exec('uggc://ohyyrgvaf.zlfcnpr.pbz/vaqrk.psz');
re38.exec('yv');
}
@ -624,8 +502,8 @@ function RegExpBenchmark() {
'fhozvg'.replace(re14, '');
'fhozvg'.replace(re15, '');
re50.exec('');
/NccyrJroXvg\/([^\s]*)/.exec(s15[i]);
/XUGZY/.exec(s15[i]);
/NccyrJroXvg\/([^\s]*)/.exec(str0);
/XUGZY/.exec(str0);
}
for (var i = 0; i < 12; i++) {
'${cebg}://${ubfg}${cngu}/${dz}'.replace(/(\$\{cebg\})|(\$cebg\b)/g, '');
@ -640,7 +518,7 @@ function RegExpBenchmark() {
'9.0 e115'.replace(/^.*e(.*)$/, '');
'<!-- ${nqiHey} -->'.replace(re55, '');
'<fpevcg glcr="grkg/wninfpevcg" fep="${nqiHey}"></fpevcg>'.replace(re55, '');
s21[i].replace(/^.*\s+(\S+\s+\S+$)/, '');
str1.replace(/^.*\s+(\S+\s+\S+$)/, '');
'tzk%2Subzrcntr%2Sfgneg%2Sqr%2S'.replace(re30, '');
'tzk'.replace(re30, '');
'uggc://${ubfg}${cngu}/${dz}'.replace(/(\$\{ubfg\})|(\$ubfg\b)/g, '');
@ -671,70 +549,61 @@ function RegExpBenchmark() {
var re62 = /^[^<]*(<(.|\s)+>)[^>]*$|^#(\w+)$/;
var str34 = '${1}://${2}${3}${4}${5}';
var str35 = ' O=6gnyg0g4znrrn&o=3&f=gc; Q=_lyu=K3bQZGSxnT4lZzD3OS9GNmV3ZGLkAQxRpTyxNmRlZmRmAmNkAQLRqTImqNZjOUEgpTjQnJ5xMKtgoN--; SCF=qy';
var s83 = computeInputVariants(str27, 11);
var s84 = computeInputVariants(str28, 11);
var s85 = computeInputVariants(str29, 11);
var s86 = computeInputVariants(str30, 11);
var s87 = computeInputVariants(str31, 11);
var s88 = computeInputVariants(str32, 11);
var s89 = computeInputVariants(str33, 11);
var s90 = computeInputVariants(str34, 11);

function runBlock6() {
for (var i = 0; i < 11; i++) {
s83[i].replace(/##yv0##/gi, '');
s83[i].replace(re57, '');
s84[i].replace(re58, '');
s85[i].replace(re59, '');
s86[i].replace(/##\/o##/gi, '');
s86[i].replace(/##\/v##/gi, '');
s86[i].replace(/##\/h##/gi, '');
s86[i].replace(/##o##/gi, '');
s86[i].replace(/##oe##/gi, '');
s86[i].replace(/##v##/gi, '');
s86[i].replace(/##h##/gi, '');
s87[i].replace(/##n##/gi, '');
s88[i].replace(/##\/n##/gi, '');
s89[i].replace(/#~#argjbexybtb#~#/g, '');
/ Zbovyr\//.exec(s15[i]);
/##yv1##/gi.exec(s83[i]);
/##yv10##/gi.exec(s84[i]);
/##yv11##/gi.exec(s84[i]);
/##yv12##/gi.exec(s84[i]);
/##yv13##/gi.exec(s84[i]);
/##yv14##/gi.exec(s84[i]);
/##yv15##/gi.exec(s84[i]);
re58.exec(s84[i]);
/##yv17##/gi.exec(s85[i]);
/##yv18##/gi.exec(s85[i]);
re59.exec(s85[i]);
/##yv2##/gi.exec(s83[i]);
/##yv20##/gi.exec(s86[i]);
/##yv21##/gi.exec(s86[i]);
/##yv22##/gi.exec(s86[i]);
/##yv23##/gi.exec(s86[i]);
/##yv3##/gi.exec(s83[i]);
re57.exec(s83[i]);
/##yv5##/gi.exec(s84[i]);
/##yv6##/gi.exec(s84[i]);
/##yv7##/gi.exec(s84[i]);
/##yv8##/gi.exec(s84[i]);
/##yv9##/gi.exec(s84[i]);
str27.replace(/##yv0##/gi, '');
str27.replace(re57, '');
str28.replace(re58, '');
str29.replace(re59, '');
str30.replace(/##\/o##/gi, '');
str30.replace(/##\/v##/gi, '');
str30.replace(/##\/h##/gi, '');
str30.replace(/##o##/gi, '');
str30.replace(/##oe##/gi, '');
str30.replace(/##v##/gi, '');
str30.replace(/##h##/gi, '');
str31.replace(/##n##/gi, '');
str32.replace(/##\/n##/gi, '');
str33.replace(/#~#argjbexybtb#~#/g, '');
/ Zbovyr\//.exec(str0);
/##yv1##/gi.exec(str27);
/##yv10##/gi.exec(str28);
/##yv11##/gi.exec(str28);
/##yv12##/gi.exec(str28);
/##yv13##/gi.exec(str28);
/##yv14##/gi.exec(str28);
/##yv15##/gi.exec(str28);
re58.exec(str28);
/##yv17##/gi.exec(str29);
/##yv18##/gi.exec(str29);
re59.exec(str29);
/##yv2##/gi.exec(str27);
/##yv20##/gi.exec(str30);
/##yv21##/gi.exec(str30);
/##yv22##/gi.exec(str30);
/##yv23##/gi.exec(str30);
/##yv3##/gi.exec(str27);
re57.exec(str27);
/##yv5##/gi.exec(str28);
/##yv6##/gi.exec(str28);
/##yv7##/gi.exec(str28);
/##yv8##/gi.exec(str28);
/##yv9##/gi.exec(str28);
re8.exec('473qq1rs0n2r70q9qo1pq48n021s9468ron90nps048p4p29');
re8.exec('SbeprqRkcvengvba=633669325184628362');
re8.exec('FrffvbaQQS2=473qq1rs0n2r70q9qo1pq48n021s9468ron90nps048p4p29');
/AbxvnA[^\/]*/.exec(s15[i]);
/AbxvnA[^\/]*/.exec(str0);
}
for (var i = 0; i < 10; i++) {
' bss'.replace(/(?:^|\s+)bss(?:\s+|$)/g, '');
s90[i].replace(/(\$\{0\})|(\$0\b)/g, '');
s90[i].replace(/(\$\{1\})|(\$1\b)/g, '');
s90[i].replace(/(\$\{pbzcyrgr\})|(\$pbzcyrgr\b)/g, '');
s90[i].replace(/(\$\{sentzrag\})|(\$sentzrag\b)/g, '');
s90[i].replace(/(\$\{ubfgcbeg\})|(\$ubfgcbeg\b)/g, '');
s90[i].replace(re56, '');
s90[i].replace(/(\$\{cebgbpby\})|(\$cebgbpby\b)/g, '');
s90[i].replace(/(\$\{dhrel\})|(\$dhrel\b)/g, '');
str34.replace(/(\$\{0\})|(\$0\b)/g, '');
str34.replace(/(\$\{1\})|(\$1\b)/g, '');
str34.replace(/(\$\{pbzcyrgr\})|(\$pbzcyrgr\b)/g, '');
str34.replace(/(\$\{sentzrag\})|(\$sentzrag\b)/g, '');
str34.replace(/(\$\{ubfgcbeg\})|(\$ubfgcbeg\b)/g, '');
str34.replace(re56, '');
str34.replace(/(\$\{cebgbpby\})|(\$cebgbpby\b)/g, '');
str34.replace(/(\$\{dhrel\})|(\$dhrel\b)/g, '');
'nqfvmr'.replace(re29, '');
'nqfvmr'.replace(re30, '');
'uggc://${2}${3}${4}${5}'.replace(/(\$\{2\})|(\$2\b)/g, '');
@ -760,7 +629,7 @@ function RegExpBenchmark() {
re9.exec('zrqvgobk');
re9.exec('hsgy');
re9.exec('lhv-h');
/Fnsnev|Xbadhrebe|XUGZY/gi.exec(s15[i]);
/Fnsnev|Xbadhrebe|XUGZY/gi.exec(str0);
re61.exec('uggc://wf.hv-cbegny.qr/tzk/ubzr/wf/20080602/onfr.wf');
re62.exec('#Ybtva_rznvy');
}
@ -771,9 +640,6 @@ function RegExpBenchmark() {
var str38 = 'uggc://tbbtyrnqf.t.qbhoyrpyvpx.arg/cntrnq/nqf?pyvrag=pn-svz_zlfcnpr_zlfcnpr-ubzrcntr_wf&qg=1231364057761&uy=ra&nqfnsr=uvtu&br=hgs8&ahz_nqf=4&bhgchg=wf&nqgrfg=bss&pbeeryngbe=1231364057761&punaary=svz_zlfcnpr_ubzrcntr_abgybttrqva%2Psvz_zlfcnpr_aba_HTP%2Psvz_zlfcnpr_havgrq-fgngrf&hey=uggc%3N%2S%2Ssevraqf.zlfcnpr.pbz%2Svaqrk.psz&nq_glcr=grkg&rvq=6083027&rn=0&sez=0&tn_ivq=1667363813.1231364061&tn_fvq=1231364061&tn_uvq=1917563877&synfu=9.0.115&h_u=768&h_j=1024&h_nu=738&h_nj=1024&h_pq=24&h_gm=-480&h_uvf=2&h_wnin=gehr&h_acyht=7&h_azvzr=22';
var str39 = 'ZFPhygher=VC=74.125.75.20&VCPhygher=ra-HF&CersreerqPhygher=ra-HF&Pbhagel=IIZ%3Q&SbeprqRkcvengvba=633669321699093060&gvzrMbar=-8&HFEYBP=DKWyLHAiMTH9AwHjWxAcqUx9GJ91oaEunJ4tIzyyqlMQo3IhqUW5D29xMG1IHlMQo3IhqUW5GzSgMG1Iozy0MJDtH3EuqTImWxEgLHAiMTH9BQN3WxkuqTy0qJEyCGZ3YwDkBGVzGT9hM2y0qJEyCF0kZwVhZQH3APMDo3A0LJkQo2EyCGx0ZQDmWyWyM2yiox5uoJH9D0R%3Q';
var str40 = 'ZFPhygher=VC=74.125.75.20&VCPhygher=ra-HF&CersreerqPhygher=ra-HF&CersreerqPhygherCraqvat=&Pbhagel=IIZ=&SbeprqRkcvengvba=633669321699093060&gvzrMbar=0&HFEYBP=DKWyLHAiMTH9AwHjWxAcqUx9GJ91oaEunJ4tIzyyqlMQo3IhqUW5D29xMG1IHlMQo3IhqUW5GzSgMG1Iozy0MJDtH3EuqTImWxEgLHAiMTH9BQN3WxkuqTy0qJEyCGZ3YwDkBGVzGT9hM2y0qJEyCF0kZwVhZQH3APMDo3A0LJkQo2EyCGx0ZQDmWyWyM2yiox5uoJH9D0R=';
var s91 = computeInputVariants(str36, 9);
var s92 = computeInputVariants(str37, 9);
var s93 = computeInputVariants(str38, 9);
function runBlock7() {
for (var i = 0; i < 9; i++) {
'0'.replace(re40, '');
@ -794,15 +660,15 @@ function RegExpBenchmark() {
for (var i = 0; i < 8; i++) {
'Pybfr {0}'.replace(re63, '');
'Bcra {0}'.replace(re63, '');
s91[i].split(re32);
s92[i].split(re32);
str36.split(re32);
str37.split(re32);
'puvyq p1 svefg gnournqref'.replace(re14, '');
'puvyq p1 svefg gnournqref'.replace(re15, '');
'uqy_fcb'.replace(re14, '');
'uqy_fcb'.replace(re15, '');
'uvag'.replace(re14, '');
'uvag'.replace(re15, '');
s93[i].replace(re33, '');
str38.replace(re33, '');
'yvfg'.replace(re14, '');
'yvfg'.replace(re15, '');
'at_bhgre'.replace(re30, '');
@ -831,8 +697,8 @@ function RegExpBenchmark() {
re8.exec('__hgzo=144631658.0.10.1231364074');
re8.exec('__hgzm=144631658.1231364074.1.1.hgzpfe=(qverpg)|hgzppa=(qverpg)|hgzpzq=(abar)');
re8.exec('p98s8o9q42nr21or1r61pqorn1n002nsss569635984s6qp7');
re34.exec(s91[i]);
re34.exec(s92[i]);
re34.exec(str36);
re34.exec(str37);
}
}
var re64 = /\b[a-z]/g;
@ -841,7 +707,7 @@ function RegExpBenchmark() {
var str41 = 'uggc://cebsvyr.zlfcnpr.pbz/Zbqhyrf/Nccyvpngvbaf/Cntrf/Pnainf.nfck';
function runBlock8() {
for (var i = 0; i < 7; i++) {
s21[i].match(/\d+/g);
str1.match(/\d+/g);
'nsgre'.replace(re64, '');
'orsber'.replace(re64, '');
'obggbz'.replace(re64, '');
@ -875,9 +741,9 @@ function RegExpBenchmark() {
re19.exec('gno6');
re19.exec('gno7');
re19.exec('gno8');
/NqborNVE\/([^\s]*)/.exec(s15[i]);
/NccyrJroXvg\/([^ ]*)/.exec(s15[i]);
/XUGZY/gi.exec(s15[i]);
/NqborNVE\/([^\s]*)/.exec(str0);
/NccyrJroXvg\/([^ ]*)/.exec(str0);
/XUGZY/gi.exec(str0);
/^(?:obql|ugzy)$/i.exec('YV');
re38.exec('ohggba');
re38.exec('vachg');
@ -908,14 +774,14 @@ function RegExpBenchmark() {
'freivpr'.replace(re46, '');
'freivpr'.replace(re47, '');
'freivpr'.replace(re48, '');
/((ZFVR\s+([6-9]|\d\d)\.))/.exec(s15[i]);
/((ZFVR\s+([6-9]|\d\d)\.))/.exec(str0);
re66.exec('');
re50.exec('fryrpgrq');
re8.exec('8sqq78r9n442851q565599o401385sp3s04r92rnn7o19ssn');
re8.exec('SbeprqRkcvengvba=633669340386893867');
re8.exec('VC=74.125.75.17');
re8.exec('FrffvbaQQS2=8sqq78r9n442851q565599o401385sp3s04r92rnn7o19ssn');
/Xbadhrebe|Fnsnev|XUGZY/.exec(s15[i]);
/Xbadhrebe|Fnsnev|XUGZY/.exec(str0);
re13.exec(str41);
re49.exec('unfsbphf');
}
@ -960,23 +826,12 @@ function RegExpBenchmark() {
var str61 = 'uggc://gx2.fgp.f-zfa.pbz/oe/uc/11/ra-hf/pff/v/g.tvs#uggc://gx2.fgo.f-zfa.pbz/v/29/4RQP4969777N048NPS4RRR3PO2S7S.wct';
var str62 = 'uggc://gx2.fgp.f-zfa.pbz/oe/uc/11/ra-hf/pff/v/g.tvs#uggc://gx2.fgo.f-zfa.pbz/v/OQ/63NP9O94NS5OQP1249Q9S1ROP7NS3.wct';
var str63 = 'zbmvyyn/5.0 (jvaqbjf; h; jvaqbjf ag 5.1; ra-hf) nccyrjroxvg/528.9 (xugzy, yvxr trpxb) puebzr/2.0.157.0 fnsnev/528.9';
var s94 = computeInputVariants(str42, 5);
var s95 = computeInputVariants(str43, 5);
var s96 = computeInputVariants(str44, 5);
var s97 = computeInputVariants(str47, 5);
var s98 = computeInputVariants(str48, 5);
var s99 = computeInputVariants(str49, 5);
var s100 = computeInputVariants(str50, 5);
var s101 = computeInputVariants(str51, 5);
var s102 = computeInputVariants(str52, 5);
var s103 = computeInputVariants(str53, 5);

function runBlock9() {
for (var i = 0; i < 5; i++) {
s94[i].split(re32);
s95[i].split(re32);
str42.split(re32);
str43.split(re32);
'svz_zlfcnpr_hfre-ivrj-pbzzragf,svz_zlfcnpr_havgrq-fgngrf'.split(re20);
s96[i].replace(re33, '');
str44.replace(re33, '');
'zrah_arj zrah_arj_gbttyr zrah_gbttyr'.replace(re67, '');
'zrah_byq zrah_byq_gbttyr zrah_gbttyr'.replace(re67, '');
re8.exec('102n9o0o9pq60132qn0337rr867p75953502q2s27s2s5r98');
@ -1000,12 +855,12 @@ function RegExpBenchmark() {
' yvfg2'.replace(re15, '');
' frneputebhc1'.replace(re14, '');
' frneputebhc1'.replace(re15, '');
s97[i].replace(re68, '');
s97[i].replace(re18, '');
str47.replace(re68, '');
str47.replace(re18, '');
''.replace(/&/g, '');
''.replace(re35, '');
'(..-{0})(\|(\d+)|)'.replace(re63, '');
s98[i].replace(re18, '');
str48.replace(re18, '');
'//vzt.jro.qr/vij/FC/${cngu}/${anzr}/${inyhr}?gf=${abj}'.replace(re56, '');
'//vzt.jro.qr/vij/FC/tzk_uc/${anzr}/${inyhr}?gf=${abj}'.replace(/(\$\{anzr\})|(\$anzr\b)/g, '');
'<fcna pynff="urnq"><o>Jvaqbjf Yvir Ubgznvy</o></fcna><fcna pynff="zft">{1}</fcna>'.replace(re69, '');
@ -1017,8 +872,8 @@ function RegExpBenchmark() {
'Zncf'.replace(re15, '');
'Zbq-Vasb-Vasb-WninFpevcgUvag'.replace(re39, '');
'Arjf'.replace(re15, '');
s99[i].split(re32);
s100[i].split(re32);
str49.split(re32);
str50.split(re32);
'Ivqrb'.replace(re15, '');
'Jro'.replace(re15, '');
'n'.replace(re39, '');
@ -1052,17 +907,17 @@ function RegExpBenchmark() {
'uc_fubccvatobk'.replace(re30, '');
'ugzy%2Rvq'.replace(re29, '');
'ugzy%2Rvq'.replace(re30, '');
s101[i].replace(re33, '');
str51.replace(re33, '');
'uggc://wf.hv-cbegny.qr/tzk/ubzr/wf/20080602/cebgbglcr.wf${4}${5}'.replace(re71, '');
'uggc://wf.hv-cbegny.qr/tzk/ubzr/wf/20080602/cebgbglcr.wf${5}'.replace(re72, '');
s102[i].replace(re73, '');
str52.replace(re73, '');
'uggc://zfacbegny.112.2b7.arg/o/ff/zfacbegnyubzr/1/U.7-cqi-2/f55332979829981?[NDO]&{1}&{2}&[NDR]'.replace(re69, '');
'vztZFSG'.replace(re14, '');
'vztZFSG'.replace(re15, '');
'zfasbbg1 ps'.replace(re14, '');
'zfasbbg1 ps'.replace(re15, '');
s103[i].replace(re14, '');
s103[i].replace(re15, '');
str53.replace(re14, '');
str53.replace(re15, '');
'cnerag puebzr6 fvatyr1 gno fryrpgrq ovaq'.replace(re14, '');
'cnerag puebzr6 fvatyr1 gno fryrpgrq ovaq'.replace(re15, '');
'cevznel'.replace(re14, '');
@ -1090,11 +945,11 @@ function RegExpBenchmark() {
re8.exec('__hgzn=144631658.2770915348920628700.1231367708.1231367708.1231367708.1');
re8.exec('__hgzo=144631658.0.10.1231367708');
re8.exec('__hgzm=144631658.1231367708.1.1.hgzpfe=(qverpg)|hgzppa=(qverpg)|hgzpzq=(abar)');
re34.exec(s99[i]);
re34.exec(s100[i]);
/ZFVR\s+5[.]01/.exec(s15[i]);
re34.exec(str49);
re34.exec(str50);
/ZFVR\s+5[.]01/.exec(str0);
/HF(?=;)/i.exec(str56);
re74.exec(s97[i]);
re74.exec(str47);
re28.exec('svefg npgvir svefgNpgvir');
re28.exec('ynfg');
/\bp:(..)/i.exec('m:94043|yn:37.4154|yb:-122.0585|p:HF');
@ -1112,15 +967,15 @@ function RegExpBenchmark() {
re79.exec(str60);
re79.exec(str59);
/\|p:([a-z]{2})/i.exec('m:94043|yn:37.4154|yb:-122.0585|p:HF|ue:1');
re80.exec(s97[i]);
re80.exec(str47);
re61.exec('cebgbglcr.wf');
re68.exec(s97[i]);
re81.exec(s97[i]);
re82.exec(s97[i]);
/^Fubpxjnir Synfu (\d)/.exec(s21[i]);
/^Fubpxjnir Synfu (\d+)/.exec(s21[i]);
re68.exec(str47);
re81.exec(str47);
re82.exec(str47);
/^Fubpxjnir Synfu (\d)/.exec(str1);
/^Fubpxjnir Synfu (\d+)/.exec(str1);
re83.exec('[bowrpg tybony]');
re62.exec(s97[i]);
|
||||
re62.exec(str47);
|
||||
re84.exec(str61);
|
||||
re84.exec(str62);
|
||||
/jroxvg/.exec(str63);
|
||||
@ -1742,8 +1597,6 @@ function RegExpBenchmark() {
|
||||
/jvaqbjf/.exec(str63);
|
||||
}
|
||||
}
|
||||
|
||||
function run() {
|
||||
for (var i = 0; i < 5; i++) {
|
||||
runBlock0();
|
||||
runBlock1();
|
||||
@ -1759,6 +1612,3 @@ function RegExpBenchmark() {
|
||||
runBlock11();
|
||||
}
|
||||
}
|
||||
|
||||
this.run = run;
|
||||
}
|
||||
|
4
deps/v8/benchmarks/revisions.html
vendored
@@ -26,9 +26,7 @@ the benchmark suite.
typos in the DeltaBlue implementation. Changed the Splay benchmark to
avoid converting the same numeric key to a string over and over again
and to avoid inserting and removing the same element repeatedly thus
increasing pressure on the memory subsystem. Changed the RegExp
benchmark to exercise the regular expression engine on different input
strings.</p>
increasing pressure on the memory subsystem.</p>

<p>Furthermore, the benchmark runner was changed to run the benchmarks
for at least a few times to stabilize the reported numbers on slower
2
deps/v8/benchmarks/run.html
vendored
@@ -114,7 +114,7 @@ higher scores means better performance: <em>Bigger is better!</em>
<li><b>RayTrace</b><br>Ray tracer benchmark based on code by <a href="http://flog.co.nz/">Adam Burmister</a> (<i>904 lines</i>).</li>
<li><b>EarleyBoyer</b><br>Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch's Scheme2Js compiler (<i>4684 lines</i>).</li>
<li><b>RegExp</b><br>Regular expression benchmark generated by extracting regular expression operations from 50 of the most popular web pages
(<i>1761 lines</i>).
(<i>1614 lines</i>).
</li>
<li><b>Splay</b><br>Data manipulation benchmark that deals with splay trees and exercises the automatic memory management subsystem (<i>394 lines</i>).</li>
</ul>
2
deps/v8/src/api.cc
vendored
@@ -4433,7 +4433,7 @@ double CpuProfileNode::GetSelfSamplesCount() const {

unsigned CpuProfileNode::GetCallUid() const {
IsDeadCheck("v8::CpuProfileNode::GetCallUid");
return reinterpret_cast<const i::ProfileNode*>(this)->entry()->GetCallUid();
return reinterpret_cast<const i::ProfileNode*>(this)->entry()->call_uid();
}
13
deps/v8/src/arm/frames-arm.cc
vendored
@@ -37,8 +37,17 @@ namespace v8 {
namespace internal {


Address ExitFrame::ComputeStackPointer(Address fp) {
return fp + ExitFrameConstants::kSPOffset;
StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) {
if (fp == 0) return NONE;
// Compute frame type and stack pointer.
Address sp = fp + ExitFrameConstants::kSPOffset;

// Fill in the state.
state->sp = sp;
state->fp = fp;
state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
ASSERT(*state->pc_address != NULL);
return EXIT;
}
55
deps/v8/src/arm/full-codegen-arm.cc
vendored
@@ -620,7 +620,7 @@ void FullCodeGenerator::EmitDeclaration(Variable* variable,
__ pop(r2);  // Receiver.

Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
// Value in r0 is ignored (declarations are statements).
}
}
@@ -956,7 +956,7 @@ void FullCodeGenerator::EmitDynamicLoadFromSlotFastCase(
slow));
__ mov(r0, Operand(key_literal->handle()));
Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
__ jmp(done);
}
}
@@ -1022,7 +1022,7 @@ void FullCodeGenerator::EmitLoadGlobalSlotCheckExtensions(
? RelocInfo::CODE_TARGET
: RelocInfo::CODE_TARGET_CONTEXT;
Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
EmitCallIC(ic, mode);
__ Call(ic, mode);
}


@@ -1041,7 +1041,7 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,
__ ldr(r0, CodeGenerator::GlobalObject());
__ mov(r2, Operand(var->name()));
Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET_CONTEXT);
__ Call(ic, RelocInfo::CODE_TARGET_CONTEXT);
Apply(context, r0);

} else if (slot != NULL && slot->type() == Slot::LOOKUP) {
@@ -1100,7 +1100,7 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,

// Call keyed load IC. It has arguments key and receiver in r0 and r1.
Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
Apply(context, r0);
}
}
@@ -1189,7 +1189,7 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
__ mov(r2, Operand(key->handle()));
__ ldr(r1, MemOperand(sp));
Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
break;
}
// Fall through.
@@ -1409,7 +1409,7 @@ void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) {
__ mov(r2, Operand(key->handle()));
// Call load IC. It has arguments receiver and property name r0 and r2.
Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
}


@@ -1417,7 +1417,7 @@ void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) {
SetSourcePosition(prop->position());
// Call keyed load IC. It has arguments key and receiver in r0 and r1.
Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
}


@@ -1475,7 +1475,7 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
__ pop(r0);  // Restore value.
__ mov(r2, Operand(prop->key()->AsLiteral()->handle()));
Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
break;
}
case KEYED_PROPERTY: {
@@ -1486,7 +1486,7 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
__ pop(r2);
__ pop(r0);  // Restore value.
Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
break;
}
}
@@ -1509,7 +1509,7 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var,
__ mov(r2, Operand(var->name()));
__ ldr(r1, CodeGenerator::GlobalObject());
Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);

} else if (var->mode() != Variable::CONST || op == Token::INIT_CONST) {
// Perform the assignment for non-const variables and for initialization
@@ -1598,7 +1598,7 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) {
}

Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);

// If the assignment ends an initialization block, revert to fast case.
if (expr->ends_initialization_block()) {
@@ -1642,7 +1642,7 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) {
}

Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);

// If the assignment ends an initialization block, revert to fast case.
if (expr->ends_initialization_block()) {
@@ -1691,7 +1691,7 @@ void FullCodeGenerator::EmitCallWithIC(Call* expr,
// Call the IC initialization code.
InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count, in_loop);
EmitCallIC(ic, mode);
__ Call(ic, mode);
// Restore context register.
__ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
Apply(context_, r0);
@@ -1715,7 +1715,7 @@ void FullCodeGenerator::EmitKeyedCallWithIC(Call* expr,
InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
Handle<Code> ic = CodeGenerator::ComputeKeyedCallInitialize(arg_count,
in_loop);
EmitCallIC(ic, mode);
__ Call(ic, mode);
// Restore context register.
__ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
Apply(context_, r0);
@@ -1854,7 +1854,7 @@ void FullCodeGenerator::VisitCall(Call* expr) {
__ pop(r1);  // We do not need to keep the receiver.

Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
__ ldr(r1, CodeGenerator::GlobalObject());
__ ldr(r1, FieldMemOperand(r1, GlobalObject::kGlobalReceiverOffset));
__ Push(r0, r1);  // Function, receiver.
@@ -2769,7 +2769,7 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) {
__ mov(r2, Operand(expr->name()));
Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count,
NOT_IN_LOOP);
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
// Restore context register.
__ ldr(cp, MemOperand(fp, StandardFrameConstants::kContextOffset));
} else {
@@ -3065,7 +3065,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
__ mov(r2, Operand(prop->key()->AsLiteral()->handle()));
__ pop(r1);
Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
if (expr->is_postfix()) {
if (context_ != Expression::kEffect) {
ApplyTOS(context_);
@@ -3079,7 +3079,7 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
__ pop(r1);  // Key.
__ pop(r2);  // Receiver.
Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
if (expr->is_postfix()) {
if (context_ != Expression::kEffect) {
ApplyTOS(context_);
@@ -3102,7 +3102,7 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr, Location where) {
Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
// Use a regular load, not a contextual load, to avoid a reference
// error.
EmitCallIC(ic, RelocInfo::CODE_TARGET);
__ Call(ic, RelocInfo::CODE_TARGET);
if (where == kStack) __ push(r0);
} else if (proxy != NULL &&
proxy->var()->slot() != NULL &&
@@ -3365,21 +3365,10 @@ void FullCodeGenerator::VisitThisFunction(ThisFunction* expr) {
}


Register FullCodeGenerator::result_register() {
return r0;
}
Register FullCodeGenerator::result_register() { return r0; }


Register FullCodeGenerator::context_register() {
return cp;
}


void FullCodeGenerator::EmitCallIC(Handle<Code> ic, RelocInfo::Mode mode) {
ASSERT(mode == RelocInfo::CODE_TARGET ||
mode == RelocInfo::CODE_TARGET_CONTEXT);
__ Call(ic, mode);
}
Register FullCodeGenerator::context_register() { return cp; }


void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) {
15
deps/v8/src/arm/ic-arm.cc
vendored
@@ -967,14 +967,6 @@ bool LoadIC::PatchInlinedLoad(Address address, Object* map, int offset) {
}


bool LoadIC::PatchInlinedContextualLoad(Address address,
Object* map,
Object* cell) {
// TODO(<bug#>): implement this.
return false;
}


bool StoreIC::PatchInlinedStore(Address address, Object* map, int offset) {
// Find the end of the inlined code for the store if there is an
// inlined version of the store.
@@ -1244,6 +1236,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
//  -- r1    : receiver
// -----------------------------------
Label miss;
Label index_out_of_range;

Register receiver = r1;
Register index = r0;
@@ -1258,7 +1251,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
result,
&miss,  // When not a string.
&miss,  // When not a number.
&miss,  // When index out of range.
&index_out_of_range,
STRING_INDEX_IS_ARRAY_INDEX);
char_at_generator.GenerateFast(masm);
__ Ret();
@@ -1266,6 +1259,10 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
ICRuntimeCallHelper call_helper;
char_at_generator.GenerateSlow(masm, call_helper);

__ bind(&index_out_of_range);
__ LoadRoot(r0, Heap::kUndefinedValueRootIndex);
__ Ret();

__ bind(&miss);
GenerateMiss(masm);
}
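The `index_out_of_range` path restored in the hunk above makes an out-of-range keyed load on a string return the undefined value instead of falling through to the generic miss handler. A quick sketch of the observable JavaScript behavior this fast path implements (plain JS, not the stub itself):

```javascript
// Keyed (indexed) loads on strings: in-range indices yield one-character
// strings, out-of-range indices yield undefined; charAt clamps to ''.
var s = 'abc';
console.log(s[1]);          // 'b'
console.log(s[10]);         // undefined
console.log(s.charAt(10));  // ''
```

The IC only fast-paths the bracket form; `charAt` goes through the separate char-at stub shown in the same hunk.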
33
deps/v8/src/arm/stub-cache-arm.cc
vendored
@@ -266,12 +266,7 @@ void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm,


void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype(
MacroAssembler* masm, int index, Register prototype, Label* miss) {
// Check we're still in the same context.
__ ldr(prototype, MemOperand(cp, Context::SlotOffset(Context::GLOBAL_INDEX)));
__ Move(ip, Top::global());
__ cmp(prototype, ip);
__ b(ne, miss);
MacroAssembler* masm, int index, Register prototype) {
// Get the global function with the given index.
JSFunction* function = JSFunction::cast(Top::global_context()->get(index));
// Load its initial map. The global functions all have initial maps.
@@ -1439,8 +1434,7 @@ Object* CallStubCompiler::CompileStringCharCodeAtCall(
// Check that the maps starting from the prototype haven't changed.
GenerateDirectLoadGlobalFunctionPrototype(masm(),
Context::STRING_FUNCTION_INDEX,
r0,
&miss);
r0);
ASSERT(object != holder);
CheckPrototypes(JSObject::cast(object->GetPrototype()), r0, holder,
r1, r3, r4, name, &miss);
@@ -1511,8 +1505,7 @@ Object* CallStubCompiler::CompileStringCharAtCall(Object* object,
// Check that the maps starting from the prototype haven't changed.
GenerateDirectLoadGlobalFunctionPrototype(masm(),
Context::STRING_FUNCTION_INDEX,
r0,
&miss);
r0);
ASSERT(object != holder);
CheckPrototypes(JSObject::cast(object->GetPrototype()), r0, holder,
r1, r3, r4, name, &miss);
@@ -1633,16 +1626,6 @@ Object* CallStubCompiler::CompileStringFromCharCodeCall(
}


Object* CallStubCompiler::CompileMathFloorCall(Object* object,
JSObject* holder,
JSGlobalPropertyCell* cell,
JSFunction* function,
String* name) {
// TODO(872): implement this.
return Heap::undefined_value();
}


Object* CallStubCompiler::CompileCallConstant(Object* object,
JSObject* holder,
JSFunction* function,
@@ -1722,7 +1705,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
__ b(hs, &miss);
// Check that the maps starting from the prototype haven't changed.
GenerateDirectLoadGlobalFunctionPrototype(
masm(), Context::STRING_FUNCTION_INDEX, r0, &miss);
masm(), Context::STRING_FUNCTION_INDEX, r0);
CheckPrototypes(JSObject::cast(object->GetPrototype()), r0, holder, r3,
r1, r4, name, &miss);
}
@@ -1742,7 +1725,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
__ bind(&fast);
// Check that the maps starting from the prototype haven't changed.
GenerateDirectLoadGlobalFunctionPrototype(
masm(), Context::NUMBER_FUNCTION_INDEX, r0, &miss);
masm(), Context::NUMBER_FUNCTION_INDEX, r0);
CheckPrototypes(JSObject::cast(object->GetPrototype()), r0, holder, r3,
r1, r4, name, &miss);
}
@@ -1765,7 +1748,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
__ bind(&fast);
// Check that the maps starting from the prototype haven't changed.
GenerateDirectLoadGlobalFunctionPrototype(
masm(), Context::BOOLEAN_FUNCTION_INDEX, r0, &miss);
masm(), Context::BOOLEAN_FUNCTION_INDEX, r0);
CheckPrototypes(JSObject::cast(object->GetPrototype()), r0, holder, r3,
r1, r4, name, &miss);
}
@@ -2229,11 +2212,11 @@ Object* LoadStubCompiler::CompileLoadGlobal(JSObject* object,
}

__ mov(r0, r4);
__ IncrementCounter(&Counters::named_load_global_stub, 1, r1, r3);
__ IncrementCounter(&Counters::named_load_global_inline, 1, r1, r3);
__ Ret();

__ bind(&miss);
__ IncrementCounter(&Counters::named_load_global_stub_miss, 1, r1, r3);
__ IncrementCounter(&Counters::named_load_global_inline_miss, 1, r1, r3);
GenerateLoadMiss(masm(), Code::LOAD_IC);

// Return the generated code.
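`GenerateDirectLoadGlobalFunctionPrototype` in the hunks above loads the initial prototype of a global constructor (String, Number, Boolean) so that method calls on primitives can be dispatched against it. A hypothetical plain-JS illustration of the semantics the stub shortcuts (not the stub code itself):

```javascript
// A method call on a primitive resolves through the prototype of the
// corresponding global constructor function.
var boxed = Object('abc');  // boxing 'abc' yields a String wrapper object
console.log(Object.getPrototypeOf(boxed) === String.prototype);  // true
console.log('abc'.charCodeAt(1));  // 98, found on String.prototype
```

The reverted version loads that prototype directly from the global context rather than re-checking the current context first, which is the behavioral difference the removed `&miss` argument reflects.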
40
deps/v8/src/bootstrapper.cc
vendored
@@ -1344,41 +1344,33 @@ bool Genesis::InstallNatives() {
}


static Handle<JSObject> ResolveCustomCallGeneratorHolder(
Handle<Context> global_context,
const char* holder_expr) {
Handle<GlobalObject> global(global_context->global());
const char* period_pos = strchr(holder_expr, '.');
if (period_pos == NULL) {
return Handle<JSObject>::cast(
GetProperty(global, Factory::LookupAsciiSymbol(holder_expr)));
}
ASSERT_EQ(".prototype", period_pos);
Vector<const char> property(holder_expr,
static_cast<int>(period_pos - holder_expr));
Handle<JSFunction> function = Handle<JSFunction>::cast(
GetProperty(global, Factory::LookupSymbol(property)));
return Handle<JSObject>(JSObject::cast(function->prototype()));
}


static void InstallCustomCallGenerator(Handle<JSObject> holder,
static void InstallCustomCallGenerator(
Handle<JSFunction> holder_function,
CallStubCompiler::CustomGeneratorOwner owner_flag,
const char* function_name,
int id) {
Handle<JSObject> owner;
if (owner_flag == CallStubCompiler::FUNCTION) {
owner = Handle<JSObject>::cast(holder_function);
} else {
ASSERT(owner_flag == CallStubCompiler::INSTANCE_PROTOTYPE);
owner = Handle<JSObject>(
JSObject::cast(holder_function->instance_prototype()));
}
Handle<String> name = Factory::LookupAsciiSymbol(function_name);
Handle<JSFunction> function(JSFunction::cast(holder->GetProperty(*name)));
Handle<JSFunction> function(JSFunction::cast(owner->GetProperty(*name)));
function->shared()->set_function_data(Smi::FromInt(id));
}


void Genesis::InstallCustomCallGenerators() {
HandleScope scope;
#define INSTALL_CALL_GENERATOR(holder_expr, fun_name, name)          \
#define INSTALL_CALL_GENERATOR(holder_fun, owner_flag, fun_name, name) \
{ \
Handle<JSObject> holder = ResolveCustomCallGeneratorHolder( \
global_context(), #holder_expr); \
Handle<JSFunction> holder(global_context()->holder_fun##_function()); \
const int id = CallStubCompiler::k##name##CallGenerator; \
InstallCustomCallGenerator(holder, #fun_name, id); \
InstallCustomCallGenerator(holder, CallStubCompiler::owner_flag, \
#fun_name, id); \
}
CUSTOM_CALL_IC_GENERATORS(INSTALL_CALL_GENERATOR)
#undef INSTALL_CALL_GENERATOR
62
deps/v8/src/conversions.cc
vendored
@@ -956,9 +956,8 @@ static char* CreateExponentialRepresentation(char* decimal_rep,


char* DoubleToExponentialCString(double value, int f) {
const int kMaxDigitsAfterPoint = 20;
// f might be -1 to signal that f was undefined in JavaScript.
ASSERT(f >= -1 && f <= kMaxDigitsAfterPoint);
ASSERT(f >= -1 && f <= 20);

bool negative = false;
if (value < 0) {
@@ -970,60 +969,29 @@ char* DoubleToExponentialCString(double value, int f) {
int decimal_point;
int sign;
char* decimal_rep = NULL;
bool used_gay_dtoa = false;
// f corresponds to the digits after the point. There is always one digit
// before the point. The number of requested_digits equals hence f + 1.
// And we have to add one character for the null-terminator.
const int kV8DtoaBufferCapacity = kMaxDigitsAfterPoint + 1 + 1;
// Make sure that the buffer is big enough, even if we fall back to the
// shortest representation (which happens when f equals -1).
ASSERT(kBase10MaximalLength <= kMaxDigitsAfterPoint + 1);
char v8_dtoa_buffer[kV8DtoaBufferCapacity];
int decimal_rep_length;

if (f == -1) {
if (DoubleToAscii(value, DTOA_SHORTEST, 0,
Vector<char>(v8_dtoa_buffer, kV8DtoaBufferCapacity),
&sign, &decimal_rep_length, &decimal_point)) {
f = decimal_rep_length - 1;
decimal_rep = v8_dtoa_buffer;
} else {
decimal_rep = dtoa(value, 0, 0, &decimal_point, &sign, NULL);
decimal_rep_length = StrLength(decimal_rep);
f = decimal_rep_length - 1;
used_gay_dtoa = true;
}
} else {
if (DoubleToAscii(value, DTOA_PRECISION, f + 1,
Vector<char>(v8_dtoa_buffer, kV8DtoaBufferCapacity),
&sign, &decimal_rep_length, &decimal_point)) {
decimal_rep = v8_dtoa_buffer;
f = StrLength(decimal_rep) - 1;
} else {
decimal_rep = dtoa(value, 2, f + 1, &decimal_point, &sign, NULL);
decimal_rep_length = StrLength(decimal_rep);
used_gay_dtoa = true;
}
}
int decimal_rep_length = StrLength(decimal_rep);
ASSERT(decimal_rep_length > 0);
ASSERT(decimal_rep_length <= f + 1);
USE(decimal_rep_length);

int exponent = decimal_point - 1;
char* result =
CreateExponentialRepresentation(decimal_rep, exponent, negative, f+1);

if (used_gay_dtoa) {
freedtoa(decimal_rep);
}

return result;
}


char* DoubleToPrecisionCString(double value, int p) {
const int kMinimalDigits = 1;
const int kMaximalDigits = 21;
ASSERT(p >= kMinimalDigits && p <= kMaximalDigits);
USE(kMinimalDigits);
ASSERT(p >= 1 && p <= 21);

bool negative = false;
if (value < 0) {
@@ -1034,22 +1002,8 @@ char* DoubleToPrecisionCString(double value, int p) {
// Find a sufficiently precise decimal representation of n.
int decimal_point;
int sign;
char* decimal_rep = NULL;
bool used_gay_dtoa = false;
// Add one for the terminating null character.
const int kV8DtoaBufferCapacity = kMaximalDigits + 1;
char v8_dtoa_buffer[kV8DtoaBufferCapacity];
int decimal_rep_length;

if (DoubleToAscii(value, DTOA_PRECISION, p,
Vector<char>(v8_dtoa_buffer, kV8DtoaBufferCapacity),
&sign, &decimal_rep_length, &decimal_point)) {
decimal_rep = v8_dtoa_buffer;
} else {
decimal_rep = dtoa(value, 2, p, &decimal_point, &sign, NULL);
decimal_rep_length = StrLength(decimal_rep);
used_gay_dtoa = true;
}
char* decimal_rep = dtoa(value, 2, p, &decimal_point, &sign, NULL);
int decimal_rep_length = StrLength(decimal_rep);
ASSERT(decimal_rep_length <= p);

int exponent = decimal_point - 1;
@@ -1093,9 +1047,7 @@ char* DoubleToPrecisionCString(double value, int p) {
result = builder.Finalize();
}

if (used_gay_dtoa) {
freedtoa(decimal_rep);
}
return result;
}
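`DoubleToExponentialCString` and `DoubleToPrecisionCString` above implement the formatting behind JavaScript's `Number.prototype.toExponential` and `toPrecision`; the revert swaps the internal dtoa path back to Gay's dtoa but must preserve the observable semantics. A quick sketch of those semantics in plain JavaScript:

```javascript
// toExponential(f): one digit before the point, f digits after it.
// toPrecision(p): p significant digits, fixed or exponential notation
// depending on the decimal exponent.
var e = (123.456).toExponential(2);  // '1.23e+2'
var p = (123.456).toPrecision(4);    // '123.5'
var q = (0.000123).toPrecision(2);   // '0.00012'
console.log(e, p, q);
```

The `f == -1` branch in the C++ corresponds to `toExponential()` called with no argument, which falls back to the shortest round-trip representation.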
5
deps/v8/src/cpu-profiler-inl.h
vendored
@@ -82,11 +82,14 @@ TickSample* ProfilerEventsProcessor::TickSampleEvent() {

bool ProfilerEventsProcessor::FilterOutCodeCreateEvent(
Logger::LogEventsAndTags tag) {
// In browser mode, leave only callbacks and non-native JS entries.
// We filter out regular expressions as currently we can't tell
// whether they origin from native scripts, so let's not confise people by
// showing them weird regexes they didn't wrote.
return FLAG_prof_browser_mode
&& (tag != Logger::CALLBACK_TAG
&& tag != Logger::FUNCTION_TAG
&& tag != Logger::LAZY_COMPILE_TAG
&& tag != Logger::REG_EXP_TAG
&& tag != Logger::SCRIPT_TAG);
}
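The predicate above drops code-creation events in browser-mode profiling unless the tag is in a small kept set (which, after the revert, includes regexp entries again). A hypothetical JavaScript sketch of the same filter; the tag names mirror the `Logger` constants, while `keptTags` and `filterOutCodeCreateEvent` are illustrative names, not V8 API:

```javascript
// Tags that survive browser-mode filtering, per the C++ expression above.
var keptTags = ['CALLBACK_TAG', 'FUNCTION_TAG', 'LAZY_COMPILE_TAG',
                'REG_EXP_TAG', 'SCRIPT_TAG'];

// Returns true when an event should be dropped: only in browser mode,
// and only for tags outside the kept set.
function filterOutCodeCreateEvent(profBrowserMode, tag) {
  return profBrowserMode && keptTags.indexOf(tag) === -1;
}

console.log(filterOutCodeCreateEvent(true, 'BUILTIN_TAG'));  // true
console.log(filterOutCodeCreateEvent(true, 'SCRIPT_TAG'));   // false
console.log(filterOutCodeCreateEvent(false, 'BUILTIN_TAG')); // false
```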
34
deps/v8/src/debug-debugger.js
vendored
@@ -45,7 +45,7 @@ Debug.DebugEvent = { Break: 1,
ScriptCollected: 6 };

// Types of exceptions that can be broken upon.
Debug.ExceptionBreak = { Caught : 0,
Debug.ExceptionBreak = { All : 0,
Uncaught: 1 };

// The different types of steps.
@@ -87,27 +87,7 @@ var debugger_flags = {
this.value = !!value;
%SetDisableBreak(!this.value);
}
},
breakOnCaughtException: {
getValue: function() { return Debug.isBreakOnException(); },
setValue: function(value) {
if (value) {
Debug.setBreakOnException();
} else {
Debug.clearBreakOnException();
}
}
},
breakOnUncaughtException: {
getValue: function() { return Debug.isBreakOnUncaughtException(); },
setValue: function(value) {
if (value) {
Debug.setBreakOnUncaughtException();
} else {
Debug.clearBreakOnUncaughtException();
}
}
},
};


@@ -801,15 +781,11 @@ Debug.clearStepping = function() {
}

Debug.setBreakOnException = function() {
return %ChangeBreakOnException(Debug.ExceptionBreak.Caught, true);
return %ChangeBreakOnException(Debug.ExceptionBreak.All, true);
};

Debug.clearBreakOnException = function() {
return %ChangeBreakOnException(Debug.ExceptionBreak.Caught, false);
};

Debug.isBreakOnException = function() {
return !!%IsBreakOnException(Debug.ExceptionBreak.Caught);
return %ChangeBreakOnException(Debug.ExceptionBreak.All, false);
};

Debug.setBreakOnUncaughtException = function() {
@@ -820,10 +796,6 @@ Debug.clearBreakOnUncaughtException = function() {
return %ChangeBreakOnException(Debug.ExceptionBreak.Uncaught, false);
};

Debug.isBreakOnUncaughtException = function() {
return !!%IsBreakOnException(Debug.ExceptionBreak.Uncaught);
};

Debug.showBreakPoints = function(f, full) {
if (!IS_FUNCTION(f)) throw new Error('Parameters have wrong types.');
var source = full ? this.scriptSource(f) : this.source(f);
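The `debugger_flags` entries removed above all follow a small getValue/setValue accessor pattern around a backing value. A stand-alone sketch of that pattern (illustrative only; the real flags delegate to V8 runtime natives such as `%ChangeBreakOnException` rather than a plain object):

```javascript
// Minimal flag object in the style of debugger_flags: each flag wraps a
// backing value behind getValue/setValue, coercing assignments to boolean.
var state = { breakOnException: false };
var debuggerFlags = {
  breakOnException: {
    getValue: function() { return state.breakOnException; },
    setValue: function(value) { state.breakOnException = !!value; }
  }
};

debuggerFlags.breakOnException.setValue(1);
console.log(debuggerFlags.breakOnException.getValue());  // true
```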
9
deps/v8/src/debug.cc
vendored
@@ -1200,15 +1200,6 @@ void Debug::ChangeBreakOnException(ExceptionBreakType type, bool enable) {
}


bool Debug::IsBreakOnException(ExceptionBreakType type) {
if (type == BreakUncaughtException) {
return break_on_uncaught_exception_;
} else {
return break_on_exception_;
}
}


void Debug::PrepareStep(StepAction step_action, int step_count) {
HandleScope scope;
ASSERT(Debug::InDebugger());
1
deps/v8/src/debug.h
vendored
@@ -236,7 +236,6 @@ class Debug {
static void FloodWithOneShot(Handle<SharedFunctionInfo> shared);
static void FloodHandlerWithOneShot();
static void ChangeBreakOnException(ExceptionBreakType type, bool enable);
static bool IsBreakOnException(ExceptionBreakType type);
static void PrepareStep(StepAction step_action, int step_count);
static void ClearStepping();
static bool StepNextContinue(BreakLocationIterator* break_location_iterator,
7
deps/v8/src/dtoa.cc
vendored
@@ -65,12 +65,11 @@ bool DoubleToAscii(double v, DtoaMode mode, int requested_digits,

switch (mode) {
case DTOA_SHORTEST:
return FastDtoa(v, FAST_DTOA_SHORTEST, 0, buffer, length, point);
return FastDtoa(v, buffer, length, point);
case DTOA_FIXED:
return FastFixedDtoa(v, requested_digits, buffer, length, point);
case DTOA_PRECISION:
return FastDtoa(v, FAST_DTOA_PRECISION, requested_digits,
buffer, length, point);
default:
break;
}
return false;
}
323
deps/v8/src/fast-dtoa.cc
vendored
@@ -42,8 +42,8 @@ namespace internal {
//
// A different range might be chosen on a different platform, to optimize digit
// generation, but a smaller range requires more powers of ten to be cached.
static const int kMinimalTargetExponent = -60;
static const int kMaximalTargetExponent = -32;
static const int minimal_target_exponent = -60;
static const int maximal_target_exponent = -32;


// Adjusts the last digit of the generated number, and screens out generated
@@ -61,7 +61,7 @@ static const int kMaximalTargetExponent = -32;
// Output: returns true if the buffer is guaranteed to contain the closest
// representable number to the input.
// Modifies the generated digits in the buffer to approach (round towards) w.
static bool RoundWeed(Vector<char> buffer,
bool RoundWeed(Vector<char> buffer,
               int length,
               uint64_t distance_too_high_w,
               uint64_t unsafe_interval,
@@ -75,7 +75,7 @@ static bool RoundWeed(Vector<char> buffer,
// Note: w_low < w < w_high
//
// The real w (* unit) must lie somewhere inside the interval
// ]w_low; w_high[ (often written as "(w_low; w_high)")
// ]w_low; w_low[ (often written as "(w_low; w_low)")

// Basically the buffer currently contains a number in the unsafe interval
// ]too_low; too_high[ with too_low < w < too_high
@@ -122,10 +122,10 @@ static bool RoundWeed(Vector<char> buffer,
// inside the safe interval then we simply do not know and bail out (returning
// false).
//
// Similarly we have to take into account the imprecision of 'w' when finding
// the closest representation of 'w'. If we have two potential
// representations, and one is closer to both w_low and w_high, then we know
// it is closer to the actual value v.
// Similarly we have to take into account the imprecision of 'w' when rounding
// the buffer. If we have two potential representations we need to make sure
// that the chosen one is closer to w_low and w_high since v can be anywhere
// between them.
//
// By generating the digits of too_high we got the largest (closest to
// too_high) buffer that is still in the unsafe interval. In the case where
@@ -139,9 +139,6 @@ static bool RoundWeed(Vector<char> buffer,
  // (buffer{-1} < w_high) && w_high - buffer{-1} > buffer - w_high
  // Instead of using the buffer directly we use its distance to too_high.
  // Conceptually rest ~= too_high - buffer
  // We need to do the following tests in this order to avoid over- and
  // underflows.
  ASSERT(rest <= unsafe_interval);
  while (rest < small_distance &&  // Negated condition 1
         unsafe_interval - rest >= ten_kappa &&  // Negated condition 2
         (rest + ten_kappa < small_distance ||  // buffer{-1} > w_high
@@ -169,62 +166,6 @@ static bool RoundWeed(Vector<char> buffer,
}


// Rounds the buffer upwards if the result is closer to v by possibly adding
// 1 to the buffer. If the precision of the calculation is not sufficient to
// round correctly, return false.
// The rounding might shift the whole buffer in which case the kappa is
// adjusted. For example "99", kappa = 3 might become "10", kappa = 4.
//
// If 2*rest > ten_kappa then the buffer needs to be round up.
// rest can have an error of +/- 1 unit. This function accounts for the
// imprecision and returns false, if the rounding direction cannot be
// unambiguously determined.
//
// Precondition: rest < ten_kappa.
static bool RoundWeedCounted(Vector<char> buffer,
                             int length,
                             uint64_t rest,
                             uint64_t ten_kappa,
                             uint64_t unit,
                             int* kappa) {
  ASSERT(rest < ten_kappa);
  // The following tests are done in a specific order to avoid overflows. They
  // will work correctly with any uint64 values of rest < ten_kappa and unit.
  //
  // If the unit is too big, then we don't know which way to round. For example
  // a unit of 50 means that the real number lies within rest +/- 50. If
  // 10^kappa == 40 then there is no way to tell which way to round.
  if (unit >= ten_kappa) return false;
  // Even if unit is just half the size of 10^kappa we are already completely
  // lost. (And after the previous test we know that the expression will not
  // over/underflow.)
  if (ten_kappa - unit <= unit) return false;
  // If 2 * (rest + unit) <= 10^kappa we can safely round down.
  if ((ten_kappa - rest > rest) && (ten_kappa - 2 * rest >= 2 * unit)) {
    return true;
  }
  // If 2 * (rest - unit) >= 10^kappa, then we can safely round up.
  if ((rest > unit) && (ten_kappa - (rest - unit) <= (rest - unit))) {
    // Increment the last digit recursively until we find a non '9' digit.
    buffer[length - 1]++;
    for (int i = length - 1; i > 0; --i) {
      if (buffer[i] != '0' + 10) break;
      buffer[i] = '0';
      buffer[i - 1]++;
    }
    // If the first digit is now '0'+ 10 we had a buffer with all '9's. With the
    // exception of the first digit all digits are now '0'. Simply switch the
    // first digit to '1' and adjust the kappa. Example: "99" becomes "10" and
    // the power (the kappa) is increased.
    if (buffer[0] == '0' + 10) {
      buffer[0] = '1';
      (*kappa) += 1;
    }
    return true;
  }
  return false;
}


static const uint32_t kTen4 = 10000;
static const uint32_t kTen5 = 100000;
@@ -237,7 +178,7 @@ static const uint32_t kTen9 = 1000000000;
// number. We furthermore receive the maximum number of bits 'number' has.
// If number_bits == 0 then 0^-1 is returned
// The number of bits must be <= 32.
// Precondition: number < (1 << (number_bits + 1)).
// Precondition: (1 << number_bits) <= number < (1 << (number_bits + 1)).
static void BiggestPowerTen(uint32_t number,
                            int number_bits,
                            uint32_t* power,
@@ -340,18 +281,18 @@ static void BiggestPowerTen(uint32_t number,

// Generates the digits of input number w.
// w is a floating-point number (DiyFp), consisting of a significand and an
// exponent. Its exponent is bounded by kMinimalTargetExponent and
// kMaximalTargetExponent.
// exponent. Its exponent is bounded by minimal_target_exponent and
// maximal_target_exponent.
// Hence -60 <= w.e() <= -32.
//
// Returns false if it fails, in which case the generated digits in the buffer
// should not be used.
// Preconditions:
//  * low, w and high are correct up to 1 ulp (unit in the last place). That
//    is, their error must be less than a unit of their last digits.
//    is, their error must be less that a unit of their last digits.
//  * low.e() == w.e() == high.e()
//  * low < w < high, and taking into account their error: low~ <= high~
//  * kMinimalTargetExponent <= w.e() <= kMaximalTargetExponent
//  * minimal_target_exponent <= w.e() <= maximal_target_exponent
// Postconditions: returns false if procedure fails.
//   otherwise:
//     * buffer is not null-terminated, but len contains the number of digits.
@@ -380,7 +321,7 @@ static void BiggestPowerTen(uint32_t number,
// represent 'w' we can stop. Everything inside the interval low - high
// represents w. However we have to pay attention to low, high and w's
// imprecision.
static bool DigitGen(DiyFp low,
bool DigitGen(DiyFp low,
              DiyFp w,
              DiyFp high,
              Vector<char> buffer,
@@ -388,7 +329,7 @@ static bool DigitGen(DiyFp low,
              int* kappa) {
  ASSERT(low.e() == w.e() && w.e() == high.e());
  ASSERT(low.f() + 1 <= high.f() - 1);
  ASSERT(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent);
  ASSERT(minimal_target_exponent <= w.e() && w.e() <= maximal_target_exponent);
  // low, w and high are imprecise, but by less than one ulp (unit in the last
  // place).
  // If we remove (resp. add) 1 ulp from low (resp. high) we are certain that
@@ -418,23 +359,23 @@ static bool DigitGen(DiyFp low,
  uint32_t integrals = static_cast<uint32_t>(too_high.f() >> -one.e());
  // Modulo by one is an and.
  uint64_t fractionals = too_high.f() & (one.f() - 1);
  uint32_t divisor;
  int divisor_exponent;
  uint32_t divider;
  int divider_exponent;
  BiggestPowerTen(integrals, DiyFp::kSignificandSize - (-one.e()),
                  &divisor, &divisor_exponent);
  *kappa = divisor_exponent + 1;
                  &divider, &divider_exponent);
  *kappa = divider_exponent + 1;
  *length = 0;
  // Loop invariant: buffer = too_high / 10^kappa (integer division)
  // The invariant holds for the first iteration: kappa has been initialized
  // with the divisor exponent + 1. And the divisor is the biggest power of ten
  // with the divider exponent + 1. And the divider is the biggest power of ten
  // that is smaller than integrals.
  while (*kappa > 0) {
    int digit = integrals / divisor;
    int digit = integrals / divider;
    buffer[*length] = '0' + digit;
    (*length)++;
    integrals %= divisor;
    integrals %= divider;
    (*kappa)--;
    // Note that kappa now equals the exponent of the divisor and that the
    // Note that kappa now equals the exponent of the divider and that the
    // invariant thus holds again.
    uint64_t rest =
        (static_cast<uint64_t>(integrals) << -one.e()) + fractionals;
@@ -445,24 +386,32 @@ static bool DigitGen(DiyFp low,
      // that lies within the unsafe interval.
      return RoundWeed(buffer, *length, DiyFp::Minus(too_high, w).f(),
                       unsafe_interval.f(), rest,
                       static_cast<uint64_t>(divisor) << -one.e(), unit);
                       static_cast<uint64_t>(divider) << -one.e(), unit);
    }
    divisor /= 10;
    divider /= 10;
  }

  // The integrals have been generated. We are at the point of the decimal
  // separator. In the following loop we simply multiply the remaining digits by
  // 10 and divide by one. We just need to pay attention to multiply associated
  // data (like the interval or 'unit'), too.
  // Note that the multiplication by 10 does not overflow, because w.e >= -60
  // and thus one.e >= -60.
  ASSERT(one.e() >= -60);
  ASSERT(fractionals < one.f());
  ASSERT(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f());
  // Instead of multiplying by 10 we multiply by 5 (cheaper operation) and
  // increase its (imaginary) exponent. At the same time we decrease the
  // divider's (one's) exponent and shift its significand.
  // Basically, if fractionals was a DiyFp (with fractionals.e == one.e):
  //   fractionals.f *= 10;
  //   fractionals.f >>= 1; fractionals.e++; // value remains unchanged.
  //   one.f >>= 1; one.e++;                 // value remains unchanged.
  // and we have again fractionals.e == one.e which allows us to divide
  //   fractionals.f() by one.f()
  // We simply combine the *= 10 and the >>= 1.
  while (true) {
    fractionals *= 10;
    unit *= 10;
    unsafe_interval.set_f(unsafe_interval.f() * 10);
    fractionals *= 5;
    unit *= 5;
    unsafe_interval.set_f(unsafe_interval.f() * 5);
    unsafe_interval.set_e(unsafe_interval.e() + 1);  // Will be optimized out.
    one.set_f(one.f() >> 1);
    one.set_e(one.e() + 1);
    // Integer division by one.
    int digit = static_cast<int>(fractionals >> -one.e());
    buffer[*length] = '0' + digit;
@@ -477,113 +426,6 @@ static bool DigitGen(DiyFp low,
}


// Generates (at most) requested_digits of input number w.
// w is a floating-point number (DiyFp), consisting of a significand and an
// exponent. Its exponent is bounded by kMinimalTargetExponent and
// kMaximalTargetExponent.
// Hence -60 <= w.e() <= -32.
//
// Returns false if it fails, in which case the generated digits in the buffer
// should not be used.
// Preconditions:
//  * w is correct up to 1 ulp (unit in the last place). That
//    is, its error must be strictly less than a unit of its last digit.
//  * kMinimalTargetExponent <= w.e() <= kMaximalTargetExponent
//
// Postconditions: returns false if procedure fails.
//   otherwise:
//     * buffer is not null-terminated, but length contains the number of
//       digits.
//     * the representation in buffer is the most precise representation of
//       requested_digits digits.
//     * buffer contains at most requested_digits digits of w. If there are less
//       than requested_digits digits then some trailing '0's have been removed.
//     * kappa is such that
//       w = buffer * 10^kappa + eps with |eps| < 10^kappa / 2.
//
// Remark: This procedure takes into account the imprecision of its input
//   numbers. If the precision is not enough to guarantee all the postconditions
//   then false is returned. This usually happens rarely, but the failure-rate
//   increases with higher requested_digits.
static bool DigitGenCounted(DiyFp w,
                            int requested_digits,
                            Vector<char> buffer,
                            int* length,
                            int* kappa) {
  ASSERT(kMinimalTargetExponent <= w.e() && w.e() <= kMaximalTargetExponent);
  ASSERT(kMinimalTargetExponent >= -60);
  ASSERT(kMaximalTargetExponent <= -32);
  // w is assumed to have an error less than 1 unit. Whenever w is scaled we
  // also scale its error.
  uint64_t w_error = 1;
  // We cut the input number into two parts: the integral digits and the
  // fractional digits. We don't emit any decimal separator, but adapt kappa
  // instead. Example: instead of writing "1.2" we put "12" into the buffer and
  // increase kappa by 1.
  DiyFp one = DiyFp(static_cast<uint64_t>(1) << -w.e(), w.e());
  // Division by one is a shift.
  uint32_t integrals = static_cast<uint32_t>(w.f() >> -one.e());
  // Modulo by one is an and.
  uint64_t fractionals = w.f() & (one.f() - 1);
  uint32_t divisor;
  int divisor_exponent;
  BiggestPowerTen(integrals, DiyFp::kSignificandSize - (-one.e()),
                  &divisor, &divisor_exponent);
  *kappa = divisor_exponent + 1;
  *length = 0;

  // Loop invariant: buffer = w / 10^kappa (integer division)
  // The invariant holds for the first iteration: kappa has been initialized
  // with the divisor exponent + 1. And the divisor is the biggest power of ten
  // that is smaller than 'integrals'.
  while (*kappa > 0) {
    int digit = integrals / divisor;
    buffer[*length] = '0' + digit;
    (*length)++;
    requested_digits--;
    integrals %= divisor;
    (*kappa)--;
    // Note that kappa now equals the exponent of the divisor and that the
    // invariant thus holds again.
    if (requested_digits == 0) break;
    divisor /= 10;
  }

  if (requested_digits == 0) {
    uint64_t rest =
        (static_cast<uint64_t>(integrals) << -one.e()) + fractionals;
    return RoundWeedCounted(buffer, *length, rest,
                            static_cast<uint64_t>(divisor) << -one.e(), w_error,
                            kappa);
  }

  // The integrals have been generated. We are at the point of the decimal
  // separator. In the following loop we simply multiply the remaining digits by
  // 10 and divide by one. We just need to pay attention to multiply associated
  // data (the 'unit'), too.
  // Note that the multiplication by 10 does not overflow, because w.e >= -60
  // and thus one.e >= -60.
  ASSERT(one.e() >= -60);
  ASSERT(fractionals < one.f());
  ASSERT(V8_2PART_UINT64_C(0xFFFFFFFF, FFFFFFFF) / 10 >= one.f());
  while (requested_digits > 0 && fractionals > w_error) {
    fractionals *= 10;
    w_error *= 10;
    // Integer division by one.
    int digit = static_cast<int>(fractionals >> -one.e());
    buffer[*length] = '0' + digit;
    (*length)++;
    requested_digits--;
    fractionals &= one.f() - 1;  // Modulo by one.
    (*kappa)--;
  }
  if (requested_digits != 0) return false;
  return RoundWeedCounted(buffer, *length, fractionals, one.f(), w_error,
                          kappa);
}


// Provides a decimal representation of v.
// Returns true if it succeeds, otherwise the result cannot be trusted.
// There will be *length digits inside the buffer (not null-terminated).
@@ -595,10 +437,7 @@ static bool DigitGenCounted(DiyFp w,
// The last digit will be closest to the actual v. That is, even if several
// digits might correctly yield 'v' when read again, the closest will be
// computed.
static bool Grisu3(double v,
                   Vector<char> buffer,
                   int* length,
                   int* decimal_exponent) {
bool grisu3(double v, Vector<char> buffer, int* length, int* decimal_exponent) {
  DiyFp w = Double(v).AsNormalizedDiyFp();
  // boundary_minus and boundary_plus are the boundaries between v and its
  // closest floating-point neighbors. Any number strictly between
@@ -609,12 +448,12 @@ static bool Grisu3(double v,
  ASSERT(boundary_plus.e() == w.e());
  DiyFp ten_mk;  // Cached power of ten: 10^-k
  int mk;        // -k
  GetCachedPower(w.e() + DiyFp::kSignificandSize, kMinimalTargetExponent,
                 kMaximalTargetExponent, &mk, &ten_mk);
  ASSERT((kMinimalTargetExponent <= w.e() + ten_mk.e() +
          DiyFp::kSignificandSize) &&
         (kMaximalTargetExponent >= w.e() + ten_mk.e() +
          DiyFp::kSignificandSize));
  GetCachedPower(w.e() + DiyFp::kSignificandSize, minimal_target_exponent,
                 maximal_target_exponent, &mk, &ten_mk);
  ASSERT(minimal_target_exponent <= w.e() + ten_mk.e() +
         DiyFp::kSignificandSize &&
         maximal_target_exponent >= w.e() + ten_mk.e() +
         DiyFp::kSignificandSize);
  // Note that ten_mk is only an approximation of 10^-k. A DiyFp only contains a
  // 64 bit significand and ten_mk is thus only precise up to 64 bits.
@@ -649,75 +488,17 @@ static bool Grisu3(double v,
}


// The "counted" version of grisu3 (see above) only generates requested_digits
// number of digits. This version does not generate the shortest representation,
// and with enough requested digits 0.1 will at some point print as 0.9999999...
// Grisu3 is too imprecise for real halfway cases (1.5 will not work) and
// therefore the rounding strategy for halfway cases is irrelevant.
static bool Grisu3Counted(double v,
                          int requested_digits,
                          Vector<char> buffer,
                          int* length,
                          int* decimal_exponent) {
  DiyFp w = Double(v).AsNormalizedDiyFp();
  DiyFp ten_mk;  // Cached power of ten: 10^-k
  int mk;        // -k
  GetCachedPower(w.e() + DiyFp::kSignificandSize, kMinimalTargetExponent,
                 kMaximalTargetExponent, &mk, &ten_mk);
  ASSERT((kMinimalTargetExponent <= w.e() + ten_mk.e() +
          DiyFp::kSignificandSize) &&
         (kMaximalTargetExponent >= w.e() + ten_mk.e() +
          DiyFp::kSignificandSize));
  // Note that ten_mk is only an approximation of 10^-k. A DiyFp only contains a
  // 64 bit significand and ten_mk is thus only precise up to 64 bits.

  // The DiyFp::Times procedure rounds its result, and ten_mk is approximated
  // too. The variable scaled_w (as well as scaled_boundary_minus/plus) are now
  // off by a small amount.
  // In fact: scaled_w - w*10^k < 1ulp (unit in the last place) of scaled_w.
  // In other words: let f = scaled_w.f() and e = scaled_w.e(), then
  //   (f-1) * 2^e < w*10^k < (f+1) * 2^e
  DiyFp scaled_w = DiyFp::Times(w, ten_mk);

  // We now have (double) (scaled_w * 10^-mk).
  // DigitGen will generate the first requested_digits digits of scaled_w and
  // return together with a kappa such that scaled_w ~= buffer * 10^kappa. (It
  // will not always be exactly the same since DigitGenCounted only produces a
  // limited number of digits.)
  int kappa;
  bool result = DigitGenCounted(scaled_w, requested_digits,
                                buffer, length, &kappa);
  *decimal_exponent = -mk + kappa;
  return result;
}


bool FastDtoa(double v,
              FastDtoaMode mode,
              int requested_digits,
              Vector<char> buffer,
              int* length,
              int* decimal_point) {
              int* point) {
  ASSERT(v > 0);
  ASSERT(!Double(v).IsSpecial());

  bool result = false;
  int decimal_exponent = 0;
  switch (mode) {
    case FAST_DTOA_SHORTEST:
      result = Grisu3(v, buffer, length, &decimal_exponent);
      break;
    case FAST_DTOA_PRECISION:
      result = Grisu3Counted(v, requested_digits,
                             buffer, length, &decimal_exponent);
      break;
    default:
      UNREACHABLE();
  }
  if (result) {
    *decimal_point = *length + decimal_exponent;
  int decimal_exponent;
  bool result = grisu3(v, buffer, length, &decimal_exponent);
  *point = *length + decimal_exponent;
    buffer[*length] = '\0';
  }
  return result;
}
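The carry-propagating round-up that RoundWeedCounted performs on the digit buffer (see the block removed above) can be exercised on its own. The following is a minimal standalone sketch, using std::string instead of V8's Vector<char> and a hypothetical helper name:

```cpp
#include <cassert>
#include <string>

// Sketch of the carry-propagating round-up used by RoundWeedCounted:
// increment the last digit; a '9' overflows to '0' and carries left. If the
// first digit overflows, the buffer held all '9's, so it becomes "10...0"
// conceptually: the first digit is set to '1' and kappa (the decimal
// exponent) is bumped by one, mirroring the "99" -> "10", kappa + 1 example.
static void RoundUpDigits(std::string* buffer, int* kappa) {
  int length = static_cast<int>(buffer->size());
  (*buffer)[length - 1]++;
  for (int i = length - 1; i > 0; --i) {
    if ((*buffer)[i] != '0' + 10) break;  // '0' + 10 marks an overflowed '9'.
    (*buffer)[i] = '0';
    (*buffer)[i - 1]++;
  }
  if ((*buffer)[0] == '0' + 10) {
    (*buffer)[0] = '1';
    (*kappa) += 1;
  }
}
```

Rounding "129" gives "130" with kappa unchanged, while rounding "99" carries all the way through and yields "10" with kappa incremented.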
43
deps/v8/src/fast-dtoa.h
vendored
@@ -31,52 +31,27 @@
namespace v8 {
namespace internal {

enum FastDtoaMode {
  // Computes the shortest representation of the given input. The returned
  // result will be the most accurate number of this length. Longer
  // representations might be more accurate.
  FAST_DTOA_SHORTEST,
  // Computes a representation where the precision (number of digits) is
  // given as input. The precision is independent of the decimal point.
  FAST_DTOA_PRECISION
};

// FastDtoa will produce at most kFastDtoaMaximalLength digits. This does not
// include the terminating '\0' character.
static const int kFastDtoaMaximalLength = 17;

// Provides a decimal representation of v.
// The result should be interpreted as buffer * 10^(point - length).
//
// Precondition:
//  * v must be a strictly positive finite double.
//
// v must be a strictly positive finite double.
// Returns true if it succeeds, otherwise the result can not be trusted.
// There will be *length digits inside the buffer followed by a null terminator.
// If the function returns true and mode equals
//  - FAST_DTOA_SHORTEST, then
//    the parameter requested_digits is ignored.
//    The result satisfies
// If the function returns true then
//      v == (double) (buffer * 10^(point - length)).
//    The digits in the buffer are the shortest representation possible. E.g.
//    if 0.099999999999 and 0.1 represent the same double then "1" is returned
//    with point = 0.
// The digits in the buffer are the shortest representation possible: no
// 0.099999999999 instead of 0.1.
// The last digit will be closest to the actual v. That is, even if several
//    digits might correctly yield 'v' when read again, the buffer will contain
//    the one closest to v.
//  - FAST_DTOA_PRECISION, then
//    the buffer contains requested_digits digits.
//    the difference v - (buffer * 10^(point-length)) is closest to zero for
//    all possible representations of requested_digits digits.
//    If there are two values that are equally close, then FastDtoa returns
//    false.
// For both modes the buffer must be large enough to hold the result.
// digits might correctly yield 'v' when read again, the buffer will contain the
// one closest to v.
// The variable 'sign' will be '0' if the given number is positive, and '1'
// otherwise.
bool FastDtoa(double d,
              FastDtoaMode mode,
              int requested_digits,
              Vector<char> buffer,
              int* length,
              int* decimal_point);
              int* point);

} }  // namespace v8::internal
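The contract documented in this header — the output is to be read as buffer * 10^(point - length) — can be illustrated by reconstructing a value from a digit buffer and a decimal point position. This is a standalone sketch of that interpretation, not part of the V8 API:

```cpp
#include <cassert>
#include <cmath>
#include <cstring>

// Reconstructs the value encoded by FastDtoa's output convention:
// value = digits(buffer) * 10^(point - length), where buffer holds the
// decimal digits as a null-terminated string.
static double FromDigits(const char* buffer, int point) {
  int length = static_cast<int>(std::strlen(buffer));
  double significand = 0.0;
  for (int i = 0; i < length; i++) {
    significand = significand * 10.0 + (buffer[i] - '0');
  }
  return significand * std::pow(10.0, point - length);
}
```

For example, digits "12345" with point = 1 decode to roughly 1.2345, and digits "1" with point = 0 decode to roughly 0.1 — matching the header's "no 0.099999999999 instead of 0.1" example.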
52
deps/v8/src/frames.cc
vendored
@@ -143,8 +143,8 @@ void StackFrameIterator::Reset() {
    state.pc_address =
        reinterpret_cast<Address*>(StandardFrame::ComputePCAddress(fp_));
    type = StackFrame::ComputeType(&state);
  }
  if (SingletonFor(type) == NULL) return;
  }
  frame_ = SingletonFor(type, &state);
}

@@ -203,24 +203,13 @@ bool StackTraceFrameIterator::IsValidFrame() {
// -------------------------------------------------------------------------


bool SafeStackFrameIterator::ExitFrameValidator::IsValidFP(Address fp) {
  if (!validator_.IsValid(fp)) return false;
  Address sp = ExitFrame::ComputeStackPointer(fp);
  if (!validator_.IsValid(sp)) return false;
  StackFrame::State state;
  ExitFrame::FillState(fp, sp, &state);
  if (!validator_.IsValid(reinterpret_cast<Address>(state.pc_address))) {
    return false;
  }
  return *state.pc_address != NULL;
}


SafeStackFrameIterator::SafeStackFrameIterator(
    Address fp, Address sp, Address low_bound, Address high_bound) :
    maintainer_(),
    stack_validator_(low_bound, high_bound),
    is_valid_top_(IsValidTop(low_bound, high_bound)),
    maintainer_(), low_bound_(low_bound), high_bound_(high_bound),
    is_valid_top_(
        IsWithinBounds(low_bound, high_bound,
                       Top::c_entry_fp(Top::GetCurrentThread())) &&
        Top::handler(Top::GetCurrentThread()) != NULL),
    is_valid_fp_(IsWithinBounds(low_bound, high_bound, fp)),
    is_working_iterator_(is_valid_top_ || is_valid_fp_),
    iteration_done_(!is_working_iterator_),

@@ -228,14 +217,6 @@ SafeStackFrameIterator::SafeStackFrameIterator(
}


bool SafeStackFrameIterator::IsValidTop(Address low_bound, Address high_bound) {
  Address fp = Top::c_entry_fp(Top::GetCurrentThread());
  ExitFrameValidator validator(low_bound, high_bound);
  if (!validator.IsValidFP(fp)) return false;
  return Top::handler(Top::GetCurrentThread()) != NULL;
}


void SafeStackFrameIterator::Advance() {
  ASSERT(is_working_iterator_);
  ASSERT(!done());

@@ -277,8 +258,9 @@ bool SafeStackFrameIterator::IsValidCaller(StackFrame* frame) {
    // sure that caller FP address is valid.
    Address caller_fp = Memory::Address_at(
        frame->fp() + EntryFrameConstants::kCallerFPOffset);
    ExitFrameValidator validator(stack_validator_);
    if (!validator.IsValidFP(caller_fp)) return false;
    if (!IsValidStackAddress(caller_fp)) {
      return false;
    }
  } else if (frame->is_arguments_adaptor()) {
    // See ArgumentsAdaptorFrame::GetCallerStackPointer. It assumes that
    // the number of arguments is stored on stack as Smi. We need to check

@@ -433,22 +415,6 @@ Address ExitFrame::GetCallerStackPointer() const {
}


StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) {
  if (fp == 0) return NONE;
  Address sp = ComputeStackPointer(fp);
  FillState(fp, sp, state);
  ASSERT(*state->pc_address != NULL);
  return EXIT;
}


void ExitFrame::FillState(Address fp, Address sp, State* state) {
  state->sp = sp;
  state->fp = fp;
  state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
}


Address StandardFrame::GetExpressionAddress(int n) const {
  const int offset = StandardFrameConstants::kExpressionsOffset;
  return fp() + offset - n * kPointerSize;
47
deps/v8/src/frames.h
vendored
@@ -67,7 +67,7 @@ class PcToCodeCache : AllStatic {
  static PcToCodeCacheEntry* GetCacheEntry(Address pc);

 private:
  static const int kPcToCodeCacheSize = 1024;
  static const int kPcToCodeCacheSize = 256;
  static PcToCodeCacheEntry cache_[kPcToCodeCacheSize];
};

@@ -141,13 +141,6 @@ class StackFrame BASE_EMBEDDED {
    NO_ID = 0
  };

  struct State {
    State() : sp(NULL), fp(NULL), pc_address(NULL) { }
    Address sp;
    Address fp;
    Address* pc_address;
  };

  // Copy constructor; it breaks the connection to host iterator.
  StackFrame(const StackFrame& original) {
    this->state_ = original.state_;

@@ -208,6 +201,12 @@ class StackFrame BASE_EMBEDDED {
                             int index) const { }

 protected:
  struct State {
    Address sp;
    Address fp;
    Address* pc_address;
  };

  explicit StackFrame(StackFrameIterator* iterator) : iterator_(iterator) { }
  virtual ~StackFrame() { }

@@ -319,8 +318,6 @@ class ExitFrame: public StackFrame {
  // pointer. Used when constructing the first stack frame seen by an
  // iterator and the frames following entry frames.
  static Type GetStateForFramePointer(Address fp, State* state);
  static Address ComputeStackPointer(Address fp);
  static void FillState(Address fp, Address sp, State* state);

 protected:
  explicit ExitFrame(StackFrameIterator* iterator) : StackFrame(iterator) { }

@@ -446,7 +443,6 @@ class JavaScriptFrame: public StandardFrame {
  inline Object* function_slot_object() const;

  friend class StackFrameIterator;
  friend class StackTracer;
};


@@ -658,36 +654,12 @@ class SafeStackFrameIterator BASE_EMBEDDED {
  }

 private:
  class StackAddressValidator {
   public:
    StackAddressValidator(Address low_bound, Address high_bound)
        : low_bound_(low_bound), high_bound_(high_bound) { }
    bool IsValid(Address addr) const {
      return IsWithinBounds(low_bound_, high_bound_, addr);
    }
   private:
    Address low_bound_;
    Address high_bound_;
  };

  class ExitFrameValidator {
   public:
    explicit ExitFrameValidator(const StackAddressValidator& validator)
        : validator_(validator) { }
    ExitFrameValidator(Address low_bound, Address high_bound)
        : validator_(low_bound, high_bound) { }
    bool IsValidFP(Address fp);
   private:
    StackAddressValidator validator_;
  };

  bool IsValidStackAddress(Address addr) const {
    return stack_validator_.IsValid(addr);
    return IsWithinBounds(low_bound_, high_bound_, addr);
  }
  bool CanIterateHandles(StackFrame* frame, StackHandler* handler);
  bool IsValidFrame(StackFrame* frame) const;
  bool IsValidCaller(StackFrame* frame);
  static bool IsValidTop(Address low_bound, Address high_bound);

  // This is a nasty hack to make sure the active count is incremented
  // before the constructor for the embedded iterator is invoked. This

@@ -702,7 +674,8 @@ class SafeStackFrameIterator BASE_EMBEDDED {

  ActiveCountMaintainer maintainer_;
  static int active_count_;
  StackAddressValidator stack_validator_;
  Address low_bound_;
  Address high_bound_;
  const bool is_valid_top_;
  const bool is_valid_fp_;
  const bool is_working_iterator_;
3
deps/v8/src/full-codegen.h
vendored
@@ -509,9 +509,6 @@ class FullCodeGenerator: public AstVisitor {
  static Register result_register();
  static Register context_register();

  // Helper for calling an IC stub.
  void EmitCallIC(Handle<Code> ic, RelocInfo::Mode mode);

  // Set fields in the stack frame.  Offsets are the frame pointer relative
  // offsets defined in, e.g., StandardFrameConstants.
  void StoreToFrameField(int frame_offset, Register value);
31
deps/v8/src/heap.cc
vendored
@@ -2650,20 +2650,6 @@ Object* Heap::AllocateArgumentsObject(Object* callee, int length) {
}


static bool HasDuplicates(DescriptorArray* descriptors) {
  int count = descriptors->number_of_descriptors();
  if (count > 1) {
    String* prev_key = descriptors->GetKey(0);
    for (int i = 1; i != count; i++) {
      String* current_key = descriptors->GetKey(i);
      if (prev_key == current_key) return true;
      prev_key = current_key;
    }
  }
  return false;
}


Object* Heap::AllocateInitialMap(JSFunction* fun) {
  ASSERT(!fun->has_initial_map());

@@ -2697,9 +2683,8 @@ Object* Heap::AllocateInitialMap(JSFunction* fun) {
  if (fun->shared()->CanGenerateInlineConstructor(prototype)) {
    int count = fun->shared()->this_property_assignments_count();
    if (count > in_object_properties) {
      // Inline constructor can only handle inobject properties.
      fun->shared()->ForbidInlineConstructor();
    } else {
      count = in_object_properties;
    }
    Object* descriptors_obj = DescriptorArray::Allocate(count);
    if (descriptors_obj->IsFailure()) return descriptors_obj;
    DescriptorArray* descriptors = DescriptorArray::cast(descriptors_obj);
@@ -2711,21 +2696,11 @@ Object* Heap::AllocateInitialMap(JSFunction* fun) {
      descriptors->Set(i, &field);
    }
    descriptors->SetNextEnumerationIndex(count);
    descriptors->SortUnchecked();

    // The descriptors may contain duplicates because the compiler does not
    // guarantee the uniqueness of property names (it would have required
    // quadratic time).  Once the descriptors are sorted we can check for
    // duplicates in linear time.
    if (HasDuplicates(descriptors)) {
      fun->shared()->ForbidInlineConstructor();
    } else {
      descriptors->Sort();
      map->set_instance_descriptors(descriptors);
      map->set_pre_allocated_property_fields(count);
      map->set_unused_property_fields(in_object_properties - count);
    }
  }
}
return map;
}

45
deps/v8/src/ia32/assembler-ia32.cc
vendored
@@ -2179,16 +2179,6 @@ void Assembler::sqrtsd(XMMRegister dst, XMMRegister src) {
}


void Assembler::andpd(XMMRegister dst, XMMRegister src) {
  EnsureSpace ensure_space(this);
  last_pc_ = pc_;
  EMIT(0x66);
  EMIT(0x0F);
  EMIT(0x54);
  emit_sse_operand(dst, src);
}


void Assembler::ucomisd(XMMRegister dst, XMMRegister src) {
  ASSERT(CpuFeatures::IsEnabled(SSE2));
  EnsureSpace ensure_space(this);
@@ -2211,28 +2201,6 @@ void Assembler::movmskpd(Register dst, XMMRegister src) {
}


void Assembler::cmpltsd(XMMRegister dst, XMMRegister src) {
  ASSERT(CpuFeatures::IsEnabled(SSE2));
  EnsureSpace ensure_space(this);
  last_pc_ = pc_;
  EMIT(0xF2);
  EMIT(0x0F);
  EMIT(0xC2);
  emit_sse_operand(dst, src);
  EMIT(1);  // LT == 1
}


void Assembler::movaps(XMMRegister dst, XMMRegister src) {
  ASSERT(CpuFeatures::IsEnabled(SSE2));
  EnsureSpace ensure_space(this);
  last_pc_ = pc_;
  EMIT(0x0F);
  EMIT(0x28);
  emit_sse_operand(dst, src);
}


void Assembler::movdqa(const Operand& dst, XMMRegister src ) {
  ASSERT(CpuFeatures::IsEnabled(SSE2));
  EnsureSpace ensure_space(this);
@@ -2390,19 +2358,6 @@ void Assembler::ptest(XMMRegister dst, XMMRegister src) {
  emit_sse_operand(dst, src);
}


void Assembler::psllq(XMMRegister reg, int8_t imm8) {
  ASSERT(CpuFeatures::IsEnabled(SSE2));
  EnsureSpace ensure_space(this);
  last_pc_ = pc_;
  EMIT(0x66);
  EMIT(0x0F);
  EMIT(0x73);
  emit_sse_operand(esi, reg);  // esi == 6
  EMIT(imm8);
}


void Assembler::emit_sse_operand(XMMRegister reg, const Operand& adr) {
  Register ireg = { reg.code() };
  emit_operand(ireg, adr);
8
deps/v8/src/ia32/assembler-ia32.h
vendored
@@ -788,15 +788,9 @@ class Assembler : public Malloced {
  void xorpd(XMMRegister dst, XMMRegister src);
  void sqrtsd(XMMRegister dst, XMMRegister src);

  void andpd(XMMRegister dst, XMMRegister src);

  void ucomisd(XMMRegister dst, XMMRegister src);
  void movmskpd(Register dst, XMMRegister src);

  void cmpltsd(XMMRegister dst, XMMRegister src);

  void movaps(XMMRegister dst, XMMRegister src);

  void movdqa(XMMRegister dst, const Operand& src);
  void movdqa(const Operand& dst, XMMRegister src);
  void movdqu(XMMRegister dst, const Operand& src);
@@ -812,8 +806,6 @@ class Assembler : public Malloced {
  void pxor(XMMRegister dst, XMMRegister src);
  void ptest(XMMRegister dst, XMMRegister src);

  void psllq(XMMRegister reg, int8_t imm8);

  // Parallel XMM operations.
  void movntdqa(XMMRegister src, const Operand& dst);
  void movntdq(const Operand& dst, XMMRegister src);
80
deps/v8/src/ia32/codegen-ia32.cc
vendored
@@ -9144,15 +9144,9 @@ class DeferredReferenceGetNamedValue: public DeferredCode {
 public:
  DeferredReferenceGetNamedValue(Register dst,
                                 Register receiver,
                                 Handle<String> name,
                                 bool is_contextual)
      : dst_(dst),
        receiver_(receiver),
        name_(name),
        is_contextual_(is_contextual) {
    set_comment(is_contextual
                ? "[ DeferredReferenceGetNamedValue (contextual)"
                : "[ DeferredReferenceGetNamedValue");
                                 Handle<String> name)
      : dst_(dst), receiver_(receiver), name_(name) {
    set_comment("[ DeferredReferenceGetNamedValue");
  }

  virtual void Generate();
@@ -9164,7 +9158,6 @@ class DeferredReferenceGetNamedValue: public DeferredCode {
  Register dst_;
  Register receiver_;
  Handle<String> name_;
  bool is_contextual_;
};


@@ -9174,15 +9167,9 @@ void DeferredReferenceGetNamedValue::Generate() {
  }
  __ Set(ecx, Immediate(name_));
  Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
  RelocInfo::Mode mode = is_contextual_
      ? RelocInfo::CODE_TARGET_CONTEXT
      : RelocInfo::CODE_TARGET;
  __ call(ic, mode);
  // The call must be followed by:
  // - a test eax instruction to indicate that the inobject property
  //   case was inlined.
  // - a mov ecx instruction to indicate that the contextual property
  //   load was inlined.
  __ call(ic, RelocInfo::CODE_TARGET);
  // The call must be followed by a test eax instruction to indicate
  // that the inobject property case was inlined.
  //
  // Store the delta to the map check instruction here in the test
  // instruction.  Use masm_-> instead of the __ macro since the
@@ -9190,13 +9177,8 @@ void DeferredReferenceGetNamedValue::Generate() {
  int delta_to_patch_site = masm_->SizeOfCodeGeneratedSince(patch_site());
  // Here we use masm_-> instead of the __ macro because this is the
  // instruction that gets patched and coverage code gets in the way.
  if (is_contextual_) {
    masm_->mov(ecx, -delta_to_patch_site);
    __ IncrementCounter(&Counters::named_load_global_inline_miss, 1);
  } else {
    masm_->test(eax, Immediate(-delta_to_patch_site));
    __ IncrementCounter(&Counters::named_load_inline_miss, 1);
  }

  if (!dst_.is(eax)) __ mov(dst_, eax);
}
@@ -9367,17 +9349,12 @@ Result CodeGenerator::EmitNamedLoad(Handle<String> name, bool is_contextual) {
#ifdef DEBUG
  int original_height = frame()->height();
#endif

  bool contextual_load_in_builtin =
      is_contextual &&
      (Bootstrapper::IsActive() ||
       (!info_->closure().is_null() && info_->closure()->IsBuiltin()));

  Result result;
  // Do not inline in the global code or when not in loop.
  if (scope()->is_global_scope() ||
      loop_nesting() == 0 ||
      contextual_load_in_builtin) {
  // Do not inline the inobject property case for loads from the global
  // object.  Also do not inline for unoptimized code.  This saves time in
  // the code generator.  Unoptimized code is toplevel code or code that is
  // not in a loop.
  if (is_contextual || scope()->is_global_scope() || loop_nesting() == 0) {
    Comment cmnt(masm(), "[ Load from named Property");
    frame()->Push(name);

@@ -9390,26 +9367,19 @@ Result CodeGenerator::EmitNamedLoad(Handle<String> name, bool is_contextual) {
    // instruction here.
    __ nop();
  } else {
    // Inline the property load.
    Comment cmnt(masm(), is_contextual
                 ? "[ Inlined contextual property load"
                 : "[ Inlined named property load");
    // Inline the inobject property case.
    Comment cmnt(masm(), "[ Inlined named property load");
    Result receiver = frame()->Pop();
    receiver.ToRegister();

    result = allocator()->Allocate();
    ASSERT(result.is_valid());
    DeferredReferenceGetNamedValue* deferred =
        new DeferredReferenceGetNamedValue(result.reg(),
                                           receiver.reg(),
                                           name,
                                           is_contextual);
        new DeferredReferenceGetNamedValue(result.reg(), receiver.reg(), name);

    if (!is_contextual) {
      // Check that the receiver is a heap object.
      __ test(receiver.reg(), Immediate(kSmiTagMask));
      deferred->Branch(zero);
    }

    __ bind(deferred->patch_site());
    // This is the map check instruction that will be patched (so we can't
@@ -9421,33 +9391,17 @@ Result CodeGenerator::EmitNamedLoad(Handle<String> name, bool is_contextual) {
    // which allows the assert below to succeed and patching to work.
    deferred->Branch(not_equal);

    // The delta from the patch label to the actual load must be
    // statically known.
    // The delta from the patch label to the load offset must be statically
    // known.
    ASSERT(masm()->SizeOfCodeGeneratedSince(deferred->patch_site()) ==
           LoadIC::kOffsetToLoadInstruction);

    if (is_contextual) {
      // Load the (initialy invalid) cell and get its value.
      masm()->mov(result.reg(), Factory::null_value());
      if (FLAG_debug_code) {
        __ cmp(FieldOperand(result.reg(), HeapObject::kMapOffset),
               Factory::global_property_cell_map());
        __ Assert(equal, "Uninitialized inlined contextual load");
      }
      __ mov(result.reg(),
             FieldOperand(result.reg(), JSGlobalPropertyCell::kValueOffset));
      __ cmp(result.reg(), Factory::the_hole_value());
      deferred->Branch(equal);
      __ IncrementCounter(&Counters::named_load_global_inline, 1);
    } else {
      // The initial (invalid) offset has to be large enough to force a 32-bit
      // instruction encoding to allow patching with an arbitrary offset.  Use
      // kMaxInt (minus kHeapObjectTag).
      int offset = kMaxInt;
      masm()->mov(result.reg(), FieldOperand(receiver.reg(), offset));
      __ IncrementCounter(&Counters::named_load_inline, 1);
    }

    __ IncrementCounter(&Counters::named_load_inline, 1);
    deferred->BindExit();
  }
  ASSERT(frame()->height() == original_height - 1);
47
deps/v8/src/ia32/disasm-ia32.cc
vendored
@@ -685,8 +685,7 @@ int DisassemblerIA32::MemoryFPUInstruction(int escape_opcode,

    case 0xDD: switch (regop) {
        case 0: mnem = "fld_d"; break;
        case 1: mnem = "fisttp_d"; break;
        case 2: mnem = "fst_d"; break;
        case 2: mnem = "fstp"; break;
        case 3: mnem = "fstp_d"; break;
        default: UnimplementedInstruction();
      }
@@ -958,14 +957,6 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer,
    } else if (f0byte == 0xA2 || f0byte == 0x31) {
      AppendToBuffer("%s", f0mnem);
      data += 2;
    } else if (f0byte == 0x28) {
      data += 2;
      int mod, regop, rm;
      get_modrm(*data, &mod, &regop, &rm);
      AppendToBuffer("movaps %s,%s",
                     NameOfXMMRegister(regop),
                     NameOfXMMRegister(rm));
      data++;
    } else if ((f0byte & 0xF0) == 0x80) {
      data += JumpConditional(data, branch_hint);
    } else if (f0byte == 0xBE || f0byte == 0xBF || f0byte == 0xB6 ||
@@ -1165,23 +1156,6 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer,
                       NameOfXMMRegister(regop),
                       NameOfXMMRegister(rm));
        data++;
      } else if (*data == 0x73) {
        data++;
        int mod, regop, rm;
        get_modrm(*data, &mod, &regop, &rm);
        int8_t imm8 = static_cast<int8_t>(data[1]);
        AppendToBuffer("psllq %s,%d",
                       NameOfXMMRegister(rm),
                       static_cast<int>(imm8));
        data += 2;
      } else if (*data == 0x54) {
        data++;
        int mod, regop, rm;
        get_modrm(*data, &mod, &regop, &rm);
        AppendToBuffer("andpd %s,%s",
                       NameOfXMMRegister(regop),
                       NameOfXMMRegister(rm));
        data++;
      } else {
        UnimplementedInstruction();
      }
@@ -1300,23 +1274,6 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer,
                         NameOfXMMRegister(rm));
          data++;
        }
      } else if (b2 == 0xC2) {
        // Intel manual 2A, Table 3-18.
        const char* const pseudo_op[] = {
          "cmpeqsd",
          "cmpltsd",
          "cmplesd",
          "cmpunordsd",
          "cmpneqsd",
          "cmpnltsd",
          "cmpnlesd",
          "cmpordsd"
        };
        AppendToBuffer("%s %s,%s",
                       pseudo_op[data[1]],
                       NameOfXMMRegister(regop),
                       NameOfXMMRegister(rm));
        data += 2;
      } else {
        if (mod != 0x3) {
          AppendToBuffer("%s %s,", mnem, NameOfXMMRegister(regop));
@@ -1410,7 +1367,7 @@ int DisassemblerIA32::InstructionDecode(v8::internal::Vector<char> out_buffer,
                                  " %s",
                                  tmp_buffer_.start());
  return instr_len;
}  // NOLINT (function is too long)
}


//------------------------------------------------------------------------------
12
deps/v8/src/ia32/frames-ia32.cc
vendored
@@ -35,8 +35,16 @@ namespace v8 {
namespace internal {


Address ExitFrame::ComputeStackPointer(Address fp) {
  return Memory::Address_at(fp + ExitFrameConstants::kSPOffset);
StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) {
  if (fp == 0) return NONE;
  // Compute the stack pointer.
  Address sp = Memory::Address_at(fp + ExitFrameConstants::kSPOffset);
  // Fill in the state.
  state->fp = fp;
  state->sp = sp;
  state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
  ASSERT(*state->pc_address != NULL);
  return EXIT;
}

106
deps/v8/src/ia32/full-codegen-ia32.cc
vendored
@@ -631,7 +631,10 @@ void FullCodeGenerator::EmitDeclaration(Variable* variable,
      __ pop(edx);

      Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // Absence of a test eax instruction following the call
      // indicates that none of the load was inlined.
      __ nop();
    }
  }
}
@@ -988,7 +991,8 @@ void FullCodeGenerator::EmitLoadGlobalSlotCheckExtensions(
  RelocInfo::Mode mode = (typeof_state == INSIDE_TYPEOF)
      ? RelocInfo::CODE_TARGET
      : RelocInfo::CODE_TARGET_CONTEXT;
  EmitCallIC(ic, mode);
  __ call(ic, mode);
  __ nop();  // Signal no inlined code.
}


@@ -1065,7 +1069,7 @@ void FullCodeGenerator::EmitDynamicLoadFromSlotFastCase(
                                             slow));
      __ mov(eax, Immediate(key_literal->handle()));
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      __ jmp(done);
    }
  }
@@ -1089,7 +1093,12 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,
    __ mov(eax, CodeGenerator::GlobalObject());
    __ mov(ecx, var->name());
    Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET_CONTEXT);
    __ call(ic, RelocInfo::CODE_TARGET_CONTEXT);
    // By emitting a nop we make sure that we do not have a test eax
    // instruction after the call it is treated specially by the LoadIC code
    // Remember that the assembler may choose to do peephole optimization
    // (eg, push/pop elimination).
    __ nop();
    Apply(context, eax);

  } else if (slot != NULL && slot->type() == Slot::LOOKUP) {
@@ -1152,8 +1161,10 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,

    // Do a keyed property load.
    Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);

    __ call(ic, RelocInfo::CODE_TARGET);
    // Notice: We must not have a "test eax, ..." instruction after the
    // call. It is treated specially by the LoadIC code.
    __ nop();
    // Drop key and object left on the stack by IC.
    Apply(context, eax);
  }
@@ -1251,7 +1262,8 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
          __ mov(ecx, Immediate(key->handle()));
          __ mov(edx, Operand(esp, 0));
          Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
          EmitCallIC(ic, RelocInfo::CODE_TARGET);
          __ call(ic, RelocInfo::CODE_TARGET);
          __ nop();
          break;
        }
        // Fall through.
@@ -1464,14 +1476,16 @@ void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) {
  Literal* key = prop->key()->AsLiteral();
  __ mov(ecx, Immediate(key->handle()));
  Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ call(ic, RelocInfo::CODE_TARGET);
  __ nop();
}


void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) {
  SetSourcePosition(prop->position());
  Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ call(ic, RelocInfo::CODE_TARGET);
  __ nop();
}


@@ -1830,7 +1844,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
      __ pop(eax);  // Restore value.
      __ mov(ecx, prop->key()->AsLiteral()->handle());
      Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      __ nop();  // Signal no inlined code.
      break;
    }
    case KEYED_PROPERTY: {
@@ -1841,7 +1856,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
      __ pop(edx);
      __ pop(eax);  // Restore value.
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      __ nop();  // Signal no inlined code.
      break;
    }
  }
@@ -1864,7 +1880,8 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var,
    __ mov(ecx, var->name());
    __ mov(edx, CodeGenerator::GlobalObject());
    Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    __ nop();

  } else if (var->mode() != Variable::CONST || op == Token::INIT_CONST) {
    // Perform the assignment for non-const variables and for initialization
@@ -1948,7 +1965,8 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) {
    __ pop(edx);
  }
  Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ call(ic, RelocInfo::CODE_TARGET);
  __ nop();

  // If the assignment ends an initialization block, revert to fast case.
  if (expr->ends_initialization_block()) {
@@ -1986,7 +2004,10 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) {
  // Record source code position before IC call.
  SetSourcePosition(expr->position());
  Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ call(ic, RelocInfo::CODE_TARGET);
  // This nop signals to the IC that there is no inlined code at the call
  // site for it to patch.
  __ nop();

  // If the assignment ends an initialization block, revert to fast case.
  if (expr->ends_initialization_block()) {
@@ -2033,7 +2054,7 @@ void FullCodeGenerator::EmitCallWithIC(Call* expr,
  SetSourcePosition(expr->position());
  InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
  Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count, in_loop);
  EmitCallIC(ic, mode);
  __ call(ic, mode);
  // Restore context register.
  __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset));
  Apply(context_, eax);
@@ -2056,7 +2077,7 @@ void FullCodeGenerator::EmitKeyedCallWithIC(Call* expr,
  InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
  Handle<Code> ic = CodeGenerator::ComputeKeyedCallInitialize(
      arg_count, in_loop);
  EmitCallIC(ic, mode);
  __ call(ic, mode);
  // Restore context register.
  __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset));
  Apply(context_, eax);
@@ -2180,7 +2201,7 @@ void FullCodeGenerator::VisitCall(Call* expr) {
    } else {
      // Call to a keyed property.
      // For a synthetic property use keyed load IC followed by function call,
      // for a regular property use keyed EmitCallIC.
      // for a regular property use keyed CallIC.
      VisitForValue(prop->obj(), kStack);
      if (prop->is_synthetic()) {
        VisitForValue(prop->key(), kAccumulator);
@@ -2189,7 +2210,11 @@ void FullCodeGenerator::VisitCall(Call* expr) {
        __ pop(edx);  // We do not need to keep the receiver.

        Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
        EmitCallIC(ic, RelocInfo::CODE_TARGET);
        __ call(ic, RelocInfo::CODE_TARGET);
        // By emitting a nop we make sure that we do not have a "test eax,..."
        // instruction after the call as it is treated specially
        // by the LoadIC code.
        __ nop();
        // Push result (function).
        __ push(eax);
        // Push Global receiver.
@@ -3117,7 +3142,7 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) {
    __ Set(ecx, Immediate(expr->name()));
    InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
    Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count, in_loop);
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    // Restore context register.
    __ mov(esi, Operand(ebp, StandardFrameConstants::kContextOffset));
  } else {
@@ -3422,7 +3447,10 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
      __ mov(ecx, prop->key()->AsLiteral()->handle());
      __ pop(edx);
      Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // This nop signals to the IC that there is no inlined code at the call
      // site for it to patch.
      __ nop();
      if (expr->is_postfix()) {
        if (context_ != Expression::kEffect) {
          ApplyTOS(context_);
@@ -3436,7 +3464,10 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
      __ pop(ecx);
      __ pop(edx);
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // This nop signals to the IC that there is no inlined code at the call
      // site for it to patch.
      __ nop();
      if (expr->is_postfix()) {
        // Result is on the stack
        if (context_ != Expression::kEffect) {
@@ -3460,7 +3491,8 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr, Location where) {
    Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
    // Use a regular load, not a contextual load, to avoid a reference
    // error.
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    __ nop();  // Signal no inlined code.
    if (where == kStack) __ push(eax);
  } else if (proxy != NULL &&
             proxy->var()->slot() != NULL &&
@@ -3712,36 +3744,10 @@ void FullCodeGenerator::VisitThisFunction(ThisFunction* expr) {
}


Register FullCodeGenerator::result_register() {
  return eax;
}
Register FullCodeGenerator::result_register() { return eax; }


Register FullCodeGenerator::context_register() {
  return esi;
}


void FullCodeGenerator::EmitCallIC(Handle<Code> ic, RelocInfo::Mode mode) {
  ASSERT(mode == RelocInfo::CODE_TARGET ||
         mode == RelocInfo::CODE_TARGET_CONTEXT);
  __ call(ic, mode);

  // If we're calling a (keyed) load or store stub, we have to mark
  // the call as containing no inlined code so we will not attempt to
  // patch it.
  switch (ic->kind()) {
    case Code::LOAD_IC:
    case Code::KEYED_LOAD_IC:
    case Code::STORE_IC:
    case Code::KEYED_STORE_IC:
      __ nop();  // Signals no inlined code.
      break;
    default:
      // Do nothing.
      break;
  }
}
Register FullCodeGenerator::context_register() { return esi; }


void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) {
39
deps/v8/src/ia32/ic-ia32.cc
vendored
@@ -692,6 +692,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
  //  -- esp[0] : return address
  // -----------------------------------
  Label miss;
  Label index_out_of_range;

  Register receiver = edx;
  Register index = eax;
@@ -706,7 +707,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
                                          result,
                                          &miss,  // When not a string.
                                          &miss,  // When not a number.
                                          &miss,  // When index out of range.
                                          &index_out_of_range,
                                          STRING_INDEX_IS_ARRAY_INDEX);
  char_at_generator.GenerateFast(masm);
  __ ret(0);
@@ -714,6 +715,10 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
  ICRuntimeCallHelper call_helper;
  char_at_generator.GenerateSlow(masm, call_helper);

  __ bind(&index_out_of_range);
  __ Set(eax, Immediate(Factory::undefined_value()));
  __ ret(0);

  __ bind(&miss);
  GenerateMiss(masm);
}
@@ -1661,38 +1666,6 @@ bool LoadIC::PatchInlinedLoad(Address address, Object* map, int offset) {
}


// One byte opcode for mov ecx,0xXXXXXXXX.
static const byte kMovEcxByte = 0xB9;

bool LoadIC::PatchInlinedContextualLoad(Address address,
                                        Object* map,
                                        Object* cell) {
  // The address of the instruction following the call.
  Address mov_instruction_address =
      address + Assembler::kCallTargetAddressOffset;
  // If the instruction following the call is not a cmp eax, nothing
  // was inlined.
  if (*mov_instruction_address != kMovEcxByte) return false;

  Address delta_address = mov_instruction_address + 1;
  // The delta to the start of the map check instruction.
  int delta = *reinterpret_cast<int*>(delta_address);

  // The map address is the last 4 bytes of the 7-byte
  // operand-immediate compare instruction, so we add 3 to get the
  // offset to the last 4 bytes.
  Address map_address = mov_instruction_address + delta + 3;
  *(reinterpret_cast<Object**>(map_address)) = map;

  // The cell is in the last 4 bytes of a five byte mov reg, imm32
  // instruction, so we add 1 to get the offset to the last 4 bytes.
  Address offset_address =
      mov_instruction_address + delta + kOffsetToLoadInstruction + 1;
  *reinterpret_cast<Object**>(offset_address) = cell;
  return true;
}


bool StoreIC::PatchInlinedStore(Address address, Object* map, int offset) {
  // The address of the instruction following the call.
  Address test_instruction_address =
11
deps/v8/src/ia32/macro-assembler-ia32.cc
vendored
@@ -1553,17 +1553,6 @@ void MacroAssembler::ConvertToInt32(Register dst,
}


void MacroAssembler::LoadPowerOf2(XMMRegister dst,
                                  Register scratch,
                                  int power) {
  ASSERT(is_uintn(power + HeapNumber::kExponentBias,
                  HeapNumber::kExponentBits));
  mov(scratch, Immediate(power + HeapNumber::kExponentBias));
  movd(dst, Operand(scratch));
  psllq(dst, HeapNumber::kMantissaBits);
}


void MacroAssembler::JumpIfInstanceTypeIsNotSequentialAscii(
    Register instance_type,
    Register scratch,
2
deps/v8/src/ia32/macro-assembler-ia32.h
vendored
@@ -258,8 +258,6 @@ class MacroAssembler: public Assembler {
                          TypeInfo info,
                          Label* on_not_int32);

  void LoadPowerOf2(XMMRegister dst, Register scratch, int power);

  // Abort execution if argument is not a number. Used in debug code.
  void AbortIfNotNumber(Register object);

147
deps/v8/src/ia32/stub-cache-ia32.cc
vendored
@@ -265,11 +265,7 @@ void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm,


void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype(
    MacroAssembler* masm, int index, Register prototype, Label* miss) {
  // Check we're still in the same context.
  __ cmp(Operand(esi, Context::SlotOffset(Context::GLOBAL_INDEX)),
         Top::global());
  __ j(not_equal, miss);
    MacroAssembler* masm, int index, Register prototype) {
  // Get the global function with the given index.
  JSFunction* function = JSFunction::cast(Top::global_context()->get(index));
  // Load its initial map. The global functions all have initial maps.
@@ -1630,8 +1626,7 @@ Object* CallStubCompiler::CompileStringCharCodeAtCall(
  // Check that the maps starting from the prototype haven't changed.
  GenerateDirectLoadGlobalFunctionPrototype(masm(),
                                            Context::STRING_FUNCTION_INDEX,
                                            eax,
                                            &miss);
                                            eax);
  ASSERT(object != holder);
  CheckPrototypes(JSObject::cast(object->GetPrototype()), eax, holder,
                  ebx, edx, edi, name, &miss);
@@ -1700,8 +1695,7 @@ Object* CallStubCompiler::CompileStringCharAtCall(Object* object,
  // Check that the maps starting from the prototype haven't changed.
  GenerateDirectLoadGlobalFunctionPrototype(masm(),
                                            Context::STRING_FUNCTION_INDEX,
                                            eax,
                                            &miss);
                                            eax);
  ASSERT(object != holder);
  CheckPrototypes(JSObject::cast(object->GetPrototype()), eax, holder,
                  ebx, edx, edi, name, &miss);
@@ -1819,131 +1813,6 @@ Object* CallStubCompiler::CompileStringFromCharCodeCall(
}


Object* CallStubCompiler::CompileMathFloorCall(Object* object,
                                               JSObject* holder,
                                               JSGlobalPropertyCell* cell,
                                               JSFunction* function,
                                               String* name) {
  // ----------- S t a t e -------------
  //  -- ecx                 : name
  //  -- esp[0]              : return address
  //  -- esp[(argc - n) * 4] : arg[n] (zero-based)
  //  -- ...
  //  -- esp[(argc + 1) * 4] : receiver
  // -----------------------------------

  if (!CpuFeatures::IsSupported(SSE2)) return Heap::undefined_value();
  CpuFeatures::Scope use_sse2(SSE2);

  const int argc = arguments().immediate();

  // If the object is not a JSObject or we got an unexpected number of
  // arguments, bail out to the regular call.
  if (!object->IsJSObject() || argc != 1) return Heap::undefined_value();

  Label miss;
  GenerateNameCheck(name, &miss);

  if (cell == NULL) {
    __ mov(edx, Operand(esp, 2 * kPointerSize));

    STATIC_ASSERT(kSmiTag == 0);
    __ test(edx, Immediate(kSmiTagMask));
    __ j(zero, &miss);

    CheckPrototypes(JSObject::cast(object), edx, holder, ebx, eax, edi, name,
                    &miss);
  } else {
    ASSERT(cell->value() == function);
    GenerateGlobalReceiverCheck(JSObject::cast(object), holder, name, &miss);
    GenerateLoadFunctionFromCell(cell, function, &miss);
  }

  // Load the (only) argument into eax.
  __ mov(eax, Operand(esp, 1 * kPointerSize));

  // Check if the argument is a smi.
  Label smi;
  STATIC_ASSERT(kSmiTag == 0);
  __ test(eax, Immediate(kSmiTagMask));
  __ j(zero, &smi);

  // Check if the argument is a heap number and load its value into xmm0.
  Label slow;
  __ CheckMap(eax, Factory::heap_number_map(), &slow, true);
  __ movdbl(xmm0, FieldOperand(eax, HeapNumber::kValueOffset));

  // Check if the argument is strictly positive. Note this also
  // discards NaN.
  __ xorpd(xmm1, xmm1);
  __ ucomisd(xmm0, xmm1);
  __ j(below_equal, &slow);

  // Do a truncating conversion.
  __ cvttsd2si(eax, Operand(xmm0));

  // Check if the result fits into a smi. Note this also checks for
  // 0x80000000 which signals a failed conversion.
  Label wont_fit_into_smi;
  __ test(eax, Immediate(0xc0000000));
  __ j(not_zero, &wont_fit_into_smi);

  // Smi tag and return.
  __ SmiTag(eax);
  __ bind(&smi);
  __ ret(2 * kPointerSize);

  // Check if the argument is < 2^kMantissaBits.
  Label already_round;
  __ bind(&wont_fit_into_smi);
  __ LoadPowerOf2(xmm1, ebx, HeapNumber::kMantissaBits);
  __ ucomisd(xmm0, xmm1);
  __ j(above_equal, &already_round);

  // Save a copy of the argument.
  __ movaps(xmm2, xmm0);

  // Compute (argument + 2^kMantissaBits) - 2^kMantissaBits.
  __ addsd(xmm0, xmm1);
  __ subsd(xmm0, xmm1);

  // Compare the argument and the tentative result to get the right mask:
  //   if xmm2 < xmm0:
  //     xmm2 = 1...1
  //   else:
  //     xmm2 = 0...0
  __ cmpltsd(xmm2, xmm0);

  // Subtract 1 if the argument was less than the tentative result.
  __ LoadPowerOf2(xmm1, ebx, 0);
  __ andpd(xmm1, xmm2);
  __ subsd(xmm0, xmm1);

  // Return a new heap number.
  __ AllocateHeapNumber(eax, ebx, edx, &slow);
  __ movdbl(FieldOperand(eax, HeapNumber::kValueOffset), xmm0);
  __ ret(2 * kPointerSize);

  // Return the argument (when it's an already round heap number).
  __ bind(&already_round);
  __ mov(eax, Operand(esp, 1 * kPointerSize));
  __ ret(2 * kPointerSize);

  // Tail call the full function. We do not have to patch the receiver
  // because the function makes no use of it.
  __ bind(&slow);
  __ InvokeFunction(function, arguments(), JUMP_FUNCTION);

  __ bind(&miss);
  // ecx: function name.
  Object* obj = GenerateMissBranch();
  if (obj->IsFailure()) return obj;

  // Return the generated code.
  return (cell == NULL) ? GetCode(function) : GetCode(NORMAL, name);
}

Object* CallStubCompiler::CompileCallConstant(Object* object,
                                              JSObject* holder,
                                              JSFunction* function,
@@ -2025,7 +1894,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ j(above_equal, &miss, not_taken);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::STRING_FUNCTION_INDEX, eax, &miss);
          masm(), Context::STRING_FUNCTION_INDEX, eax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), eax, holder,
                      ebx, edx, edi, name, &miss);
    }
@@ -2045,7 +1914,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ bind(&fast);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::NUMBER_FUNCTION_INDEX, eax, &miss);
          masm(), Context::NUMBER_FUNCTION_INDEX, eax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), eax, holder,
                      ebx, edx, edi, name, &miss);
    }
@@ -2066,7 +1935,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ bind(&fast);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::BOOLEAN_FUNCTION_INDEX, eax, &miss);
          masm(), Context::BOOLEAN_FUNCTION_INDEX, eax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), eax, holder,
                      ebx, edx, edi, name, &miss);
    }
@@ -2605,12 +2474,12 @@ Object* LoadStubCompiler::CompileLoadGlobal(JSObject* object,
    __ Check(not_equal, "DontDelete cells can't contain the hole");
  }

  __ IncrementCounter(&Counters::named_load_global_stub, 1);
  __ IncrementCounter(&Counters::named_load_global_inline, 1);
  __ mov(eax, ebx);
  __ ret(0);

  __ bind(&miss);
  __ IncrementCounter(&Counters::named_load_global_stub_miss, 1);
  __ IncrementCounter(&Counters::named_load_global_inline_miss, 1);
  GenerateLoadMiss(masm(), Code::LOAD_IC);

  // Return the generated code.

57
deps/v8/src/ic.cc
vendored
@@ -299,7 +299,6 @@ void LoadIC::ClearInlinedVersion(Address address) {
  // present) to guarantee failure by holding an invalid map (the null
  // value). The offset can be patched to anything.
  PatchInlinedLoad(address, Heap::null_value(), 0);
  PatchInlinedContextualLoad(address, Heap::null_value(), Heap::null_value());
}


@@ -721,14 +720,6 @@ Object* KeyedCallIC::LoadFunction(State state,
}


#ifdef DEBUG
#define TRACE_IC_NAMED(msg, name) \
  if (FLAG_trace_ic) PrintF(msg, *(name)->ToCString())
#else
#define TRACE_IC_NAMED(msg, name)
#endif


Object* LoadIC::Load(State state, Handle<Object> object, Handle<String> name) {
  // If the object is undefined or null it's illegal to try to get any
  // of its properties; throw a TypeError in that case.
@@ -806,24 +797,15 @@ Object* LoadIC::Load(State state, Handle<Object> object, Handle<String> name) {
      LOG(SuspectReadEvent(*name, *object));
    }

    bool can_be_inlined_precheck =
    bool can_be_inlined =
        FLAG_use_ic &&
        state == PREMONOMORPHIC &&
        lookup.IsProperty() &&
        lookup.IsCacheable() &&
        lookup.holder() == *object &&
        lookup.type() == FIELD &&
        !object->IsAccessCheckNeeded();

    bool can_be_inlined =
        can_be_inlined_precheck &&
        state == PREMONOMORPHIC &&
        lookup.type() == FIELD;

    bool can_be_inlined_contextual =
        can_be_inlined_precheck &&
        state == UNINITIALIZED &&
        lookup.holder()->IsGlobalObject() &&
        lookup.type() == NORMAL;

    if (can_be_inlined) {
      Map* map = lookup.holder()->map();
      // Property's index in the properties array. If negative we have
@@ -834,29 +816,32 @@ Object* LoadIC::Load(State state, Handle<Object> object, Handle<String> name) {
      int offset = map->instance_size() + (index * kPointerSize);
      if (PatchInlinedLoad(address(), map, offset)) {
        set_target(megamorphic_stub());
        TRACE_IC_NAMED("[LoadIC : inline patch %s]\n", name);
#ifdef DEBUG
        if (FLAG_trace_ic) {
          PrintF("[LoadIC : inline patch %s]\n", *name->ToCString());
        }
#endif
        return lookup.holder()->FastPropertyAt(lookup.GetFieldIndex());
#ifdef DEBUG
      } else {
        TRACE_IC_NAMED("[LoadIC : no inline patch %s (patching failed)]\n",
                       name);
        if (FLAG_trace_ic) {
          PrintF("[LoadIC : no inline patch %s (patching failed)]\n",
                 *name->ToCString());
        }
      }
    } else {
      TRACE_IC_NAMED("[LoadIC : no inline patch %s (not inobject)]\n", name);
      if (FLAG_trace_ic) {
        PrintF("[LoadIC : no inline patch %s (not inobject)]\n",
               *name->ToCString());
      }
    } else if (can_be_inlined_contextual) {
      Map* map = lookup.holder()->map();
      JSGlobalPropertyCell* cell = JSGlobalPropertyCell::cast(
          lookup.holder()->property_dictionary()->ValueAt(
              lookup.GetDictionaryEntry()));
      if (PatchInlinedContextualLoad(address(), map, cell)) {
        set_target(megamorphic_stub());
        TRACE_IC_NAMED("[LoadIC : inline contextual patch %s]\n", name);
        ASSERT(cell->value() != Heap::the_hole_value());
        return cell->value();
      }
    } else {
      if (FLAG_use_ic && state == PREMONOMORPHIC) {
        TRACE_IC_NAMED("[LoadIC : no inline patch %s (not inlinable)]\n", name);
        if (FLAG_trace_ic) {
          PrintF("[LoadIC : no inline patch %s (not inlinable)]\n",
                 *name->ToCString());
#endif
        }
      }
    }

4
deps/v8/src/ic.h
vendored
@@ -298,10 +298,6 @@ class LoadIC: public IC {

  static bool PatchInlinedLoad(Address address, Object* map, int index);

  static bool PatchInlinedContextualLoad(Address address,
                                         Object* map,
                                         Object* cell);

  friend class IC;
};

4
deps/v8/src/log.cc
vendored
@@ -171,9 +171,7 @@ void StackTracer::Trace(TickSample* sample) {
  SafeStackTraceFrameIterator it(sample->fp, sample->sp,
                                 sample->sp, js_entry_sp);
  while (!it.done() && i < TickSample::kMaxFramesCount) {
    sample->stack[i++] =
        reinterpret_cast<Address>(it.frame()->function_slot_object()) -
        kHeapObjectTag;
    sample->stack[i++] = reinterpret_cast<Address>(it.frame()->function());
    it.Advance();
  }
  sample->frames_count = i;

26
deps/v8/src/messages.js
vendored
@@ -684,11 +684,6 @@ CallSite.prototype.getEvalOrigin = function () {
  return FormatEvalOrigin(script);
};

CallSite.prototype.getScriptNameOrSourceURL = function () {
  var script = %FunctionGetScript(this.fun);
  return script ? script.nameOrSourceURL() : null;
};

CallSite.prototype.getFunction = function () {
  return this.fun;
};
@@ -780,11 +775,7 @@ CallSite.prototype.isConstructor = function () {
};

function FormatEvalOrigin(script) {
  var sourceURL = script.nameOrSourceURL();
  if (sourceURL)
    return sourceURL;

  var eval_origin = "eval at ";
  var eval_origin = "";
  if (script.eval_from_function_name) {
    eval_origin += script.eval_from_function_name;
  } else {
@@ -795,9 +786,9 @@ function FormatEvalOrigin(script) {
  if (eval_from_script) {
    if (eval_from_script.compilation_type == COMPILATION_TYPE_EVAL) {
      // eval script originated from another eval.
      eval_origin += " (" + FormatEvalOrigin(eval_from_script) + ")";
      eval_origin += " (eval at " + FormatEvalOrigin(eval_from_script) + ")";
    } else {
      // eval script originated from "real" source.
      // eval script originated from "real" scource.
      if (eval_from_script.name) {
        eval_origin += " (" + eval_from_script.name;
        var location = eval_from_script.locationFromPosition(script.eval_from_script_position, true);
@@ -816,18 +807,13 @@ function FormatEvalOrigin(script) {
};

function FormatSourcePosition(frame) {
  var fileName;
  var fileLocation = "";
  if (frame.isNative()) {
    fileLocation = "native";
  } else if (frame.isEval()) {
    fileName = frame.getScriptNameOrSourceURL();
    if (!fileName)
      fileLocation = frame.getEvalOrigin();
    fileLocation = "eval at " + frame.getEvalOrigin();
  } else {
    fileName = frame.getFileName();
  }

  var fileName = frame.getFileName();
  if (fileName) {
    fileLocation += fileName;
    var lineNumber = frame.getLineNumber();
@@ -839,7 +825,7 @@ function FormatSourcePosition(frame) {
      }
    }
  }

  }
  if (!fileLocation) {
    fileLocation = "unknown source";
  }

10
deps/v8/src/mips/frames-mips.cc
vendored
@@ -52,7 +52,9 @@ StackFrame::Type StackFrame::ComputeType(State* state) {
}


Address ExitFrame::ComputeStackPointer(Address fp) {
StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) {
  if (fp == 0) return NONE;
  // Compute frame type and stack pointer.
  Address sp = fp + ExitFrameConstants::kSPDisplacement;
  const int offset = ExitFrameConstants::kCodeOffset;
  Object* code = Memory::Object_at(fp + offset);
@@ -60,7 +62,11 @@ Address ExitFrame::ComputeStackPointer(Address fp) {
  if (is_debug_exit) {
    sp -= kNumJSCallerSaved * kPointerSize;
  }
  return sp;
  // Fill in the state.
  state->sp = sp;
  state->fp = fp;
  state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
  return EXIT;
}


13
deps/v8/src/objects.cc
vendored
@@ -3825,7 +3825,7 @@ Object* DescriptorArray::RemoveTransitions() {
}


void DescriptorArray::SortUnchecked() {
void DescriptorArray::Sort() {
  // In-place heap sort.
  int len = number_of_descriptors();

@@ -3875,11 +3875,7 @@ void DescriptorArray::SortUnchecked() {
      parent_index = child_index;
    }
  }
}


void DescriptorArray::Sort() {
  SortUnchecked();
  SLOW_ASSERT(IsSortedNoDuplicates());
}

@@ -5273,13 +5269,6 @@ bool SharedFunctionInfo::CanGenerateInlineConstructor(Object* prototype) {
}


void SharedFunctionInfo::ForbidInlineConstructor() {
  set_compiler_hints(BooleanBit::set(compiler_hints(),
                                     kHasOnlySimpleThisPropertyAssignments,
                                     false));
}


void SharedFunctionInfo::SetThisPropertyAssignmentsInfo(
    bool only_simple_this_property_assignments,
    FixedArray* assignments) {

9
deps/v8/src/objects.h
vendored
@@ -1892,11 +1892,6 @@ class DescriptorArray: public FixedArray {
  MUST_USE_RESULT Object* RemoveTransitions();

  // Sort the instance descriptors by the hash codes of their keys.
  // Does not check for duplicates.
  void SortUnchecked();

  // Sort the instance descriptors by the hash codes of their keys.
  // Checks the result for duplicates.
  void Sort();

  // Search the instance descriptors for given name.
@@ -3547,10 +3542,6 @@ class SharedFunctionInfo: public HeapObject {
  // prototype.
  bool CanGenerateInlineConstructor(Object* prototype);

  // Prevents further attempts to generate inline constructors.
  // To be called if generation failed for any reason.
  void ForbidInlineConstructor();

  // For functions which only contains this property assignments this provides
  // access to the names for the properties assigned.
  DECL_ACCESSORS(this_property_assignments, Object)

7
deps/v8/src/parser.cc
vendored
@@ -1001,7 +1001,7 @@ class CompleteParserRecorder: public PartialParserRecorder {
      Vector<Vector<const char> > symbol = symbol_entries_.AddBlock(1, literal);
      entry->key = &symbol[0];
    }
    WriteNumber(id - 1);
    symbol_store_.Add(id - 1);
  }

  virtual Vector<unsigned> ExtractData() {
@@ -1457,7 +1457,7 @@ Parser::Parser(Handle<Script> script,
               ParserLog* log,
               ScriptDataImpl* pre_data)
    : script_(script),
      scanner_(),
      scanner_(is_pre_parsing),
      top_scope_(NULL),
      with_nesting_level_(0),
      temp_scope_(NULL),
@@ -1503,7 +1503,6 @@ FunctionLiteral* Parser::ParseProgram(Handle<String> source,
  source->TryFlatten();
  scanner_.Initialize(source, JAVASCRIPT);
  ASSERT(target_stack_ == NULL);
  if (pre_data_ != NULL) pre_data_->Initialize();

  // Compute the parsing mode.
  mode_ = FLAG_lazy ? PARSE_LAZILY : PARSE_EAGERLY;
@@ -5493,9 +5492,7 @@ ScriptDataImpl* PartialPreParse(Handle<String> source,


void ScriptDataImpl::Initialize() {
  // Prepares state for use.
  if (store_.length() >= kHeaderSize) {
    function_index_ = kHeaderSize;
    int symbol_data_offset = kHeaderSize + store_[kFunctionsSizeOffset];
    if (store_.length() > symbol_data_offset) {
      symbol_data_ = reinterpret_cast<byte*>(&store_[symbol_data_offset]);

7
deps/v8/src/parser.h
vendored
@@ -101,7 +101,10 @@ class ScriptDataImpl : public ScriptData {
 public:
  explicit ScriptDataImpl(Vector<unsigned> store)
      : store_(store),
        owns_store_(true) { }
        function_index_(kHeaderSize),
        owns_store_(true) {
    Initialize();
  }

  // Create an empty ScriptDataImpl that is guaranteed to not satisfy
  // a SanityCheck.
@@ -187,8 +190,10 @@ class ScriptDataImpl : public ScriptData {
  ScriptDataImpl(const char* backing_store, int length)
      : store_(reinterpret_cast<unsigned*>(const_cast<char*>(backing_store)),
               length / sizeof(unsigned)),
        function_index_(kHeaderSize),
        owns_store_(false) {
    ASSERT_EQ(0, reinterpret_cast<intptr_t>(backing_store) % sizeof(unsigned));
    Initialize();
  }

  // Read strings written by ParserRecorder::WriteString.

6
deps/v8/src/profile-generator-inl.h
vendored
@@ -46,7 +46,8 @@ const char* StringsStorage::GetFunctionName(const char* name) {


CodeEntry::CodeEntry(int security_token_id)
    : tag_(Logger::FUNCTION_TAG),
    : call_uid_(0),
      tag_(Logger::FUNCTION_TAG),
      name_prefix_(kEmptyNamePrefix),
      name_(""),
      resource_name_(""),
@@ -61,7 +62,8 @@ CodeEntry::CodeEntry(Logger::LogEventsAndTags tag,
                     const char* resource_name,
                     int line_number,
                     int security_token_id)
    : tag_(tag),
    : call_uid_(next_call_uid_++),
      tag_(tag),
      name_prefix_(name_prefix),
      name_(name),
      resource_name_(resource_name),

22
deps/v8/src/profile-generator.cc
vendored
@@ -121,9 +121,11 @@ const char* StringsStorage::GetName(String* name) {


const char* CodeEntry::kEmptyNamePrefix = "";
unsigned CodeEntry::next_call_uid_ = 1;


void CodeEntry::CopyData(const CodeEntry& source) {
  call_uid_ = source.call_uid_;
  tag_ = source.tag_;
  name_prefix_ = source.name_prefix_;
  name_ = source.name_;
@@ -132,26 +134,6 @@ void CodeEntry::CopyData(const CodeEntry& source) {
}


uint32_t CodeEntry::GetCallUid() const {
  uint32_t hash = ComputeIntegerHash(tag_);
  hash ^= static_cast<int32_t>(reinterpret_cast<intptr_t>(name_prefix_));
  hash ^= static_cast<int32_t>(reinterpret_cast<intptr_t>(name_));
  hash ^= static_cast<int32_t>(reinterpret_cast<intptr_t>(resource_name_));
  hash ^= static_cast<int32_t>(line_number_);
  return hash;
}


bool CodeEntry::IsSameAs(CodeEntry* entry) const {
  return this == entry
      || (tag_ == entry->tag_
          && name_prefix_ == entry->name_prefix_
          && name_ == entry->name_
          && resource_name_ == entry->resource_name_
          && line_number_ == entry->line_number_);
}

ProfileNode* ProfileNode::FindChild(CodeEntry* entry) {
  HashMap::Entry* map_entry =
      children_.Lookup(entry, CodeEntryHash(entry), false);

11
deps/v8/src/profile-generator.h
vendored
@@ -100,17 +100,17 @@ class CodeEntry {
  INLINE(const char* name() const) { return name_; }
  INLINE(const char* resource_name() const) { return resource_name_; }
  INLINE(int line_number() const) { return line_number_; }
  INLINE(unsigned call_uid() const) { return call_uid_; }
  INLINE(int security_token_id() const) { return security_token_id_; }

  INLINE(static bool is_js_function_tag(Logger::LogEventsAndTags tag));

  void CopyData(const CodeEntry& source);
  uint32_t GetCallUid() const;
  bool IsSameAs(CodeEntry* entry) const;

  static const char* kEmptyNamePrefix;

 private:
  unsigned call_uid_;
  Logger::LogEventsAndTags tag_;
  const char* name_prefix_;
  const char* name_;
@@ -118,6 +118,8 @@ class CodeEntry {
  int line_number_;
  int security_token_id_;

  static unsigned next_call_uid_;

  DISALLOW_COPY_AND_ASSIGN(CodeEntry);
};

@@ -145,12 +147,11 @@ class ProfileNode {

 private:
  INLINE(static bool CodeEntriesMatch(void* entry1, void* entry2)) {
    return reinterpret_cast<CodeEntry*>(entry1)->IsSameAs(
        reinterpret_cast<CodeEntry*>(entry2));
    return entry1 == entry2;
  }

  INLINE(static uint32_t CodeEntryHash(CodeEntry* entry)) {
    return entry->GetCallUid();
    return static_cast<int32_t>(reinterpret_cast<intptr_t>(entry));
  }

  ProfileTree* tree_;

31
deps/v8/src/runtime.cc
vendored
@@ -946,7 +946,7 @@ static Object* Runtime_DeclareContextSlot(Arguments args) {
  Handle<String> name(String::cast(args[1]));
  PropertyAttributes mode =
      static_cast<PropertyAttributes>(Smi::cast(args[2])->value());
  RUNTIME_ASSERT(mode == READ_ONLY || mode == NONE);
  ASSERT(mode == READ_ONLY || mode == NONE);
  Handle<Object> initial_value(args[3]);

  // Declarations are always done in the function context.
@@ -8944,39 +8944,24 @@ static Object* Runtime_ClearBreakPoint(Arguments args) {
}


// Change the state of break on exceptions.
// args[0]: Enum value indicating whether to affect caught/uncaught exceptions.
// args[1]: Boolean indicating on/off.
// Change the state of break on exceptions
// args[0]: boolean indicating uncaught exceptions
// args[1]: boolean indicating on/off
static Object* Runtime_ChangeBreakOnException(Arguments args) {
  HandleScope scope;
  ASSERT(args.length() == 2);
  RUNTIME_ASSERT(args[0]->IsNumber());
  CONVERT_BOOLEAN_CHECKED(enable, args[1]);
  ASSERT(args[0]->IsNumber());
  ASSERT(args[1]->IsBoolean());

  // If the number doesn't match an enum value, the ChangeBreakOnException
  // function will default to affecting caught exceptions.
  // Update break point state
  ExceptionBreakType type =
      static_cast<ExceptionBreakType>(NumberToUint32(args[0]));
  // Update break point state.
  bool enable = args[1]->ToBoolean()->IsTrue();
  Debug::ChangeBreakOnException(type, enable);
  return Heap::undefined_value();
}


// Returns the state of break on exceptions
// args[0]: boolean indicating uncaught exceptions
static Object* Runtime_IsBreakOnException(Arguments args) {
  HandleScope scope;
  ASSERT(args.length() == 1);
  RUNTIME_ASSERT(args[0]->IsNumber());

  ExceptionBreakType type =
      static_cast<ExceptionBreakType>(NumberToUint32(args[0]));
  bool result = Debug::IsBreakOnException(type);
  return Smi::FromInt(result);
}


// Prepare for stepping
// args[0]: break id for checking execution state
// args[1]: step action from the enumeration StepAction

1
deps/v8/src/runtime.h
vendored
@@ -332,7 +332,6 @@ namespace internal {
  F(SetScriptBreakPoint, 3, 1) \
  F(ClearBreakPoint, 1, 1) \
  F(ChangeBreakOnException, 2, 1) \
  F(IsBreakOnException, 1, 1) \
  F(PrepareStep, 3, 1) \
  F(ClearStepping, 0, 1) \
  F(DebugEvaluate, 4, 1) \

9
deps/v8/src/scanner.cc
vendored
@@ -1,4 +1,4 @@
// Copyright 2010 the V8 project authors. All rights reserved.
// Copyright 2006-2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
@@ -342,11 +342,8 @@ void Scanner::LiteralScope::Complete() {
// ----------------------------------------------------------------------------
// Scanner

Scanner::Scanner()
    : has_line_terminator_before_next_(false),
      is_parsing_json_(false),
      source_(NULL),
      stack_overflow_(false) {}
Scanner::Scanner(ParserMode pre)
    : is_pre_parsing_(pre == PREPARSE), stack_overflow_(false) { }


void Scanner::Initialize(Handle<String> source,

6
deps/v8/src/scanner.h
vendored
@@ -1,4 +1,4 @@
// Copyright 2010 the V8 project authors. All rights reserved.
// Copyright 2006-2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
@@ -281,7 +281,8 @@ class Scanner {
    bool complete_;
  };

  Scanner();
  // Construction
  explicit Scanner(ParserMode parse_mode);

  // Initialize the Scanner to scan source.
  void Initialize(Handle<String> source,
@@ -487,6 +488,7 @@ class Scanner {
  TokenDesc current_;  // desc for current token (as returned by Next())
  TokenDesc next_;     // desc for next token (one token look-ahead)
  bool has_line_terminator_before_next_;
  bool is_pre_parsing_;
  bool is_parsing_json_;

  // Different UTF16 buffers used to pull characters from. Based on input one of

2
deps/v8/src/stub-cache.cc
vendored
@@ -1227,7 +1227,7 @@ Object* CallStubCompiler::CompileCustomCall(int generator_id,
                                            String* fname) {
  ASSERT(generator_id >= 0 && generator_id < kNumCallGenerators);
  switch (generator_id) {
#define CALL_GENERATOR_CASE(ignored1, ignored2, name) \
#define CALL_GENERATOR_CASE(ignored1, ignored2, ignored3, name) \
    case k##name##CallGenerator: \
      return CallStubCompiler::Compile##name##Call(object, \
                                                   holder, \

44
deps/v8/src/stub-cache.h
vendored
@ -370,15 +370,13 @@ class StubCompiler BASE_EMBEDDED {
|
||||
Register prototype);
|
||||
|
||||
// Generates prototype loading code that uses the objects from the
// context we were in when this function was called. If the context
// has changed, a jump to miss is performed. This ties the generated
// code to a particular context and so must not be used in cases
// where the generated code is not allowed to have references to
// objects from a context.
// context we were in when this function was called. This ties the
// generated code to a particular context and so must not be used in
// cases where the generated code is not allowed to have references
// to objects from a context.
static void GenerateDirectLoadGlobalFunctionPrototype(MacroAssembler* masm,
                                                      int index,
                                                      Register prototype,
                                                      Label* miss);
                                                      Register prototype);

static void GenerateFastPropertyLoad(MacroAssembler* masm,
                                     Register dst, Register src,
@ -614,25 +612,29 @@ class KeyedStoreStubCompiler: public StubCompiler {
// Installation of custom call generators for the selected builtins is
// handled by the bootstrapper.
//
// Each entry has a name of a global object property holding an object
// optionally followed by ".prototype" (this controls whether the
// generator is set on the object itself or, in case it's a function,
// on the its instance prototype), a name of a builtin function on the
// object (the one the generator is set for), and a name of the
// generator (used to build ids and generator function names).
// Each entry has a name of a global function (lowercased), a flag
// controlling whether the generator is set on the function itself or
// on its instance prototype, a name of a builtin function on the
// function or its instance prototype (the one the generator is set
// for), and a name of a generator itself (used to build ids and
// generator function names).
#define CUSTOM_CALL_IC_GENERATORS(V) \
  V(Array.prototype, push, ArrayPush) \
  V(Array.prototype, pop, ArrayPop) \
  V(String.prototype, charCodeAt, StringCharCodeAt) \
  V(String.prototype, charAt, StringCharAt) \
  V(String, fromCharCode, StringFromCharCode) \
  V(Math, floor, MathFloor)
  V(array, INSTANCE_PROTOTYPE, push, ArrayPush) \
  V(array, INSTANCE_PROTOTYPE, pop, ArrayPop) \
  V(string, INSTANCE_PROTOTYPE, charCodeAt, StringCharCodeAt) \
  V(string, INSTANCE_PROTOTYPE, charAt, StringCharAt) \
  V(string, FUNCTION, fromCharCode, StringFromCharCode)


class CallStubCompiler: public StubCompiler {
 public:
  enum CustomGeneratorOwner {
    FUNCTION,
    INSTANCE_PROTOTYPE
  };

  enum {
#define DECLARE_CALL_GENERATOR_ID(ignored1, ignore2, name) \
#define DECLARE_CALL_GENERATOR_ID(ignored1, ignore2, ignored3, name) \
    k##name##CallGenerator,
    CUSTOM_CALL_IC_GENERATORS(DECLARE_CALL_GENERATOR_ID)
#undef DECLARE_CALL_GENERATOR_ID
@ -671,7 +673,7 @@ class CallStubCompiler: public StubCompiler {
                          JSFunction* function,
                          String* name);

#define DECLARE_CALL_GENERATOR(ignored1, ignored2, name) \
#define DECLARE_CALL_GENERATOR(ignored1, ignored2, ignored3, name) \
  Object* Compile##name##Call(Object* object, \
                              JSObject* holder, \
                              JSGlobalPropertyCell* cell, \
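The `DECLARE_CALL_GENERATOR_ID` change above is the usual X-macro idiom: the same builtin list is expanded several times with different per-entry macros, so adding a generator to the list updates every derived table at once. A minimal sketch of that pattern (the two entries here are illustrative, not V8's full table):

```cpp
#include <cassert>
#include <cstring>

// Illustrative entry list in the four-argument shape the revert restores:
// (owner, FUNCTION or INSTANCE_PROTOTYPE, builtin name, generator id).
#define CUSTOM_CALL_IC_GENERATORS(V)                    \
  V(array, INSTANCE_PROTOTYPE, push, ArrayPush)         \
  V(string, FUNCTION, fromCharCode, StringFromCharCode)

// First expansion: one enum id per entry, as DECLARE_CALL_GENERATOR_ID does.
enum {
#define DECLARE_CALL_GENERATOR_ID(ignored1, ignored2, ignored3, name) \
  k##name##CallGenerator,
  CUSTOM_CALL_IC_GENERATORS(DECLARE_CALL_GENERATOR_ID)
#undef DECLARE_CALL_GENERATOR_ID
  kNumCallGenerators
};

// Second expansion over the same list: a parallel name table.
static const char* kGeneratorNames[] = {
#define DECLARE_CALL_GENERATOR_NAME(ignored1, ignored2, ignored3, name) \
  #name,
  CUSTOM_CALL_IC_GENERATORS(DECLARE_CALL_GENERATOR_NAME)
#undef DECLARE_CALL_GENERATOR_NAME
};
```

Because both expansions walk the same list, `k<Name>CallGenerator` is always a valid index into `kGeneratorNames`.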
2
deps/v8/src/v8-counters.h
vendored
@ -161,8 +161,6 @@ namespace internal {
  SC(named_load_inline_miss, V8.NamedLoadInlineMiss) \
  SC(named_load_global_inline, V8.NamedLoadGlobalInline) \
  SC(named_load_global_inline_miss, V8.NamedLoadGlobalInlineMiss) \
  SC(named_load_global_stub, V8.NamedLoadGlobalStub) \
  SC(named_load_global_stub_miss, V8.NamedLoadGlobalStubMiss) \
  SC(keyed_store_field, V8.KeyedStoreField) \
  SC(keyed_store_inline, V8.KeyedStoreInline) \
  SC(keyed_store_inline_miss, V8.KeyedStoreInlineMiss) \
2
deps/v8/src/version.cc
vendored
@ -34,7 +34,7 @@
// cannot be changed without changing the SCons build script.
#define MAJOR_VERSION 2
#define MINOR_VERSION 4
#define BUILD_NUMBER 5
#define BUILD_NUMBER 4
#define PATCH_LEVEL 0
#define CANDIDATE_VERSION false

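The version.cc hunk is the heart of the revert: `BUILD_NUMBER` goes from 5 back to 4, so the tree reports 2.4.4 again. Version components like these are typically assembled into a dotted string with a two-level stringification macro; a sketch of that mechanism (the `STR` helpers are mine, not V8's actual code):

```cpp
#include <cassert>
#include <cstring>

#define MAJOR_VERSION 2
#define MINOR_VERSION 4
#define BUILD_NUMBER 4
#define PATCH_LEVEL 0

// Two-level expansion so the numeric macros are substituted before
// stringification (standard preprocessor trick).
#define STR_(x) #x
#define STR(x) STR_(x)

static const char* kVersionString =
    STR(MAJOR_VERSION) "." STR(MINOR_VERSION) "." STR(BUILD_NUMBER);
```

Without the indirection, `STR_(MAJOR_VERSION)` would yield the literal text `"MAJOR_VERSION"` instead of `"2"`.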
11
deps/v8/src/x64/code-stubs-x64.cc
vendored
11
deps/v8/src/x64/code-stubs-x64.cc
vendored
@ -1989,7 +1989,7 @@ void RegExpExecStub::Generate(MacroAssembler* masm) {
|
||||
__ j(negative, &done);
|
||||
// Read the value from the static offsets vector buffer and make it a smi.
|
||||
__ movl(rdi, Operand(rcx, rdx, times_int_size, 0));
|
||||
__ Integer32ToSmi(rdi, rdi);
|
||||
__ Integer32ToSmi(rdi, rdi, &runtime);
|
||||
// Store the smi value in the last match info.
|
||||
__ movq(FieldOperand(rbx,
|
||||
rdx,
|
||||
@ -3343,7 +3343,7 @@ void StringAddStub::Generate(MacroAssembler* masm) {
|
||||
|
||||
// Look at the length of the result of adding the two strings.
|
||||
STATIC_ASSERT(String::kMaxLength <= Smi::kMaxValue / 2);
|
||||
__ SmiAdd(rbx, rbx, rcx);
|
||||
__ SmiAdd(rbx, rbx, rcx, NULL);
|
||||
// Use the runtime system when adding two one character strings, as it
|
||||
// contains optimizations for this specific case using the symbol table.
|
||||
__ SmiCompare(rbx, Smi::FromInt(2));
|
||||
@ -3803,7 +3803,7 @@ void SubStringStub::Generate(MacroAssembler* masm) {
|
||||
__ movq(rdx, Operand(rsp, kFromOffset));
|
||||
__ JumpIfNotBothPositiveSmi(rcx, rdx, &runtime);
|
||||
|
||||
__ SmiSub(rcx, rcx, rdx); // Overflow doesn't happen.
|
||||
__ SmiSub(rcx, rcx, rdx, NULL); // Overflow doesn't happen.
|
||||
__ cmpq(FieldOperand(rax, String::kLengthOffset), rcx);
|
||||
Label return_rax;
|
||||
__ j(equal, &return_rax);
|
||||
@ -3936,7 +3936,8 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm,
|
||||
__ movq(scratch4, scratch1);
|
||||
__ SmiSub(scratch4,
|
||||
scratch4,
|
||||
FieldOperand(right, String::kLengthOffset));
|
||||
FieldOperand(right, String::kLengthOffset),
|
||||
NULL);
|
||||
// Register scratch4 now holds left.length - right.length.
|
||||
const Register length_difference = scratch4;
|
||||
Label left_shorter;
|
||||
@ -3944,7 +3945,7 @@ void StringCompareStub::GenerateCompareFlatAsciiStrings(MacroAssembler* masm,
|
||||
// The right string isn't longer that the left one.
|
||||
// Get the right string's length by subtracting the (non-negative) difference
|
||||
// from the left string's length.
|
||||
__ SmiSub(scratch1, scratch1, length_difference);
|
||||
__ SmiSub(scratch1, scratch1, length_difference, NULL);
|
||||
__ bind(&left_shorter);
|
||||
// Register scratch1 now holds Min(left.length, right.length).
|
||||
const Register min_length = scratch1;
|
||||
|
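The last hunk above computes `length_difference = left.length - right.length` and then recovers `Min(left.length, right.length)` by subtracting the non-negative difference back out, avoiding a separate compare-and-branch on the lengths. The arithmetic can be sketched in plain integers (a sketch of the identity, not the stub itself):

```cpp
#include <cassert>

// When diff = left_len - right_len is non-negative (the right string is
// not longer), left_len - diff == right_len, i.e. the minimum; this is
// what SmiSub(scratch1, scratch1, length_difference, NULL) computes.
int MinLengthViaDifference(int left_len, int right_len) {
  int diff = left_len - right_len;
  if (diff >= 0) {
    return left_len - diff;  // == right_len, the shorter length.
  }
  return left_len;  // Left is shorter (the &left_shorter path).
}
```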
14
deps/v8/src/x64/frames-x64.cc
vendored
@ -35,8 +35,18 @@ namespace v8 {
namespace internal {


Address ExitFrame::ComputeStackPointer(Address fp) {
  return Memory::Address_at(fp + ExitFrameConstants::kSPOffset);


StackFrame::Type ExitFrame::GetStateForFramePointer(Address fp, State* state) {
  if (fp == 0) return NONE;
  // Compute the stack pointer.
  Address sp = Memory::Address_at(fp + ExitFrameConstants::kSPOffset);
  // Fill in the state.
  state->fp = fp;
  state->sp = sp;
  state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
  ASSERT(*state->pc_address != NULL);
  return EXIT;
}

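The restored `GetStateForFramePointer` reads the saved stack pointer out of the frame at `kSPOffset` and takes the return pc from the word just below that sp. A pointer-arithmetic sketch against a fake in-memory frame (the offsets and types here are illustrative, not V8's real frame constants):

```cpp
#include <cassert>
#include <cstdint>

using Address = uintptr_t;

struct State {
  Address fp;
  Address sp;
  Address* pc_address;
};

// Illustrative layout: the word at fp + kSPOffset holds the saved sp, and
// the word at sp - kPointerSize holds the return pc, as in the diff.
const int kPointerSize = sizeof(Address);
const int kSPOffset = 1 * kPointerSize;

bool GetStateForFramePointer(Address fp, State* state) {
  if (fp == 0) return false;  // NONE in the original.
  Address sp = *reinterpret_cast<Address*>(fp + kSPOffset);
  state->fp = fp;
  state->sp = sp;
  state->pc_address = reinterpret_cast<Address*>(sp - 1 * kPointerSize);
  return true;  // EXIT in the original.
}
```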
102
deps/v8/src/x64/full-codegen-x64.cc
vendored
@ -625,7 +625,10 @@ void FullCodeGenerator::EmitDeclaration(Variable* variable,
    __ pop(rdx);

    Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    // Absence of a test rax instruction following the call
    // indicates that none of the load was inlined.
    __ nop();
    }
  }
}
@ -938,7 +941,8 @@ void FullCodeGenerator::EmitLoadGlobalSlotCheckExtensions(
  RelocInfo::Mode mode = (typeof_state == INSIDE_TYPEOF)
      ? RelocInfo::CODE_TARGET
      : RelocInfo::CODE_TARGET_CONTEXT;
  EmitCallIC(ic, mode);
  __ call(ic, mode);
  __ nop();  // Signal no inlined code.
}


@ -1015,7 +1019,7 @@ void FullCodeGenerator::EmitDynamicLoadFromSlotFastCase(
                                      slow));
    __ Move(rax, key_literal->handle());
    Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    __ jmp(done);
  }
}
@ -1039,7 +1043,11 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,
    __ Move(rcx, var->name());
    __ movq(rax, CodeGenerator::GlobalObject());
    Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET_CONTEXT);
    __ Call(ic, RelocInfo::CODE_TARGET_CONTEXT);
    // A test rax instruction following the call is used by the IC to
    // indicate that the inobject property case was inlined. Ensure there
    // is no test rax instruction here.
    __ nop();
    Apply(context, rax);

  } else if (slot != NULL && slot->type() == Slot::LOOKUP) {
@ -1102,7 +1110,10 @@ void FullCodeGenerator::EmitVariableLoad(Variable* var,

    // Do a keyed property load.
    Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    // Notice: We must not have a "test rax, ..." instruction after the
    // call. It is treated specially by the LoadIC code.
    __ nop();
    Apply(context, rax);
  }
}
@ -1201,7 +1212,8 @@ void FullCodeGenerator::VisitObjectLiteral(ObjectLiteral* expr) {
          __ Move(rcx, key->handle());
          __ movq(rdx, Operand(rsp, 0));
          Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
          EmitCallIC(ic, RelocInfo::CODE_TARGET);
          __ call(ic, RelocInfo::CODE_TARGET);
          __ nop();
          break;
        }
        // Fall through.
@ -1413,14 +1425,16 @@ void FullCodeGenerator::EmitNamedPropertyLoad(Property* prop) {
  Literal* key = prop->key()->AsLiteral();
  __ Move(rcx, key->handle());
  Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ Call(ic, RelocInfo::CODE_TARGET);
  __ nop();
}


void FullCodeGenerator::EmitKeyedPropertyLoad(Property* prop) {
  SetSourcePosition(prop->position());
  Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ Call(ic, RelocInfo::CODE_TARGET);
  __ nop();
}


@ -1539,7 +1553,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
      __ pop(rax);  // Restore value.
      __ Move(rcx, prop->key()->AsLiteral()->handle());
      Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      __ nop();  // Signal no inlined code.
      break;
    }
    case KEYED_PROPERTY: {
@ -1550,7 +1565,8 @@ void FullCodeGenerator::EmitAssignment(Expression* expr) {
      __ pop(rdx);
      __ pop(rax);
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      __ nop();  // Signal no inlined code.
      break;
    }
  }
@ -1573,7 +1589,8 @@ void FullCodeGenerator::EmitVariableAssignment(Variable* var,
    __ Move(rcx, var->name());
    __ movq(rdx, CodeGenerator::GlobalObject());
    Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ Call(ic, RelocInfo::CODE_TARGET);
    __ nop();

  } else if (var->mode() != Variable::CONST || op == Token::INIT_CONST) {
    // Perform the assignment for non-const variables and for initialization
@ -1657,7 +1674,8 @@ void FullCodeGenerator::EmitNamedPropertyAssignment(Assignment* expr) {
    __ pop(rdx);
  }
  Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ Call(ic, RelocInfo::CODE_TARGET);
  __ nop();

  // If the assignment ends an initialization block, revert to fast case.
  if (expr->ends_initialization_block()) {
@ -1695,7 +1713,10 @@ void FullCodeGenerator::EmitKeyedPropertyAssignment(Assignment* expr) {
  // Record source code position before IC call.
  SetSourcePosition(expr->position());
  Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
  EmitCallIC(ic, RelocInfo::CODE_TARGET);
  __ Call(ic, RelocInfo::CODE_TARGET);
  // This nop signals to the IC that there is no inlined code at the call
  // site for it to patch.
  __ nop();

  // If the assignment ends an initialization block, revert to fast case.
  if (expr->ends_initialization_block()) {
@ -1744,7 +1765,7 @@ void FullCodeGenerator::EmitCallWithIC(Call* expr,
  InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
  Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count,
                                                         in_loop);
  EmitCallIC(ic, mode);
  __ Call(ic, mode);
  // Restore context register.
  __ movq(rsi, Operand(rbp, StandardFrameConstants::kContextOffset));
  Apply(context_, rax);
@ -1768,7 +1789,7 @@ void FullCodeGenerator::EmitKeyedCallWithIC(Call* expr,
  InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
  Handle<Code> ic = CodeGenerator::ComputeKeyedCallInitialize(arg_count,
                                                              in_loop);
  EmitCallIC(ic, mode);
  __ Call(ic, mode);
  // Restore context register.
  __ movq(rsi, Operand(rbp, StandardFrameConstants::kContextOffset));
  Apply(context_, rax);
@ -1903,7 +1924,11 @@ void FullCodeGenerator::VisitCall(Call* expr) {
      // Record source code position for IC call.
      SetSourcePosition(prop->position());
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedLoadIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // By emitting a nop we make sure that we do not have a "test rax,..."
      // instruction after the call as it is treated specially
      // by the LoadIC code.
      __ nop();
      // Pop receiver.
      __ pop(rbx);
      // Push result (function).
@ -2816,7 +2841,7 @@ void FullCodeGenerator::VisitCallRuntime(CallRuntime* expr) {
    __ Move(rcx, expr->name());
    InLoopFlag in_loop = (loop_depth() > 0) ? IN_LOOP : NOT_IN_LOOP;
    Handle<Code> ic = CodeGenerator::ComputeCallInitialize(arg_count, in_loop);
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ call(ic, RelocInfo::CODE_TARGET);
    // Restore context register.
    __ movq(rsi, Operand(rbp, StandardFrameConstants::kContextOffset));
  } else {
@ -3114,7 +3139,10 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
      __ Move(rcx, prop->key()->AsLiteral()->handle());
      __ pop(rdx);
      Handle<Code> ic(Builtins::builtin(Builtins::StoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // This nop signals to the IC that there is no inlined code at the call
      // site for it to patch.
      __ nop();
      if (expr->is_postfix()) {
        if (context_ != Expression::kEffect) {
          ApplyTOS(context_);
@ -3128,7 +3156,10 @@ void FullCodeGenerator::VisitCountOperation(CountOperation* expr) {
      __ pop(rcx);
      __ pop(rdx);
      Handle<Code> ic(Builtins::builtin(Builtins::KeyedStoreIC_Initialize));
      EmitCallIC(ic, RelocInfo::CODE_TARGET);
      __ call(ic, RelocInfo::CODE_TARGET);
      // This nop signals to the IC that there is no inlined code at the call
      // site for it to patch.
      __ nop();
      if (expr->is_postfix()) {
        if (context_ != Expression::kEffect) {
          ApplyTOS(context_);
@ -3151,7 +3182,8 @@ void FullCodeGenerator::VisitForTypeofValue(Expression* expr, Location where) {
    Handle<Code> ic(Builtins::builtin(Builtins::LoadIC_Initialize));
    // Use a regular load, not a contextual load, to avoid a reference
    // error.
    EmitCallIC(ic, RelocInfo::CODE_TARGET);
    __ Call(ic, RelocInfo::CODE_TARGET);
    __ nop();  // Signal no inlined code.
    if (where == kStack) __ push(rax);
  } else if (proxy != NULL &&
             proxy->var()->slot() != NULL &&
@ -3399,36 +3431,10 @@ void FullCodeGenerator::VisitThisFunction(ThisFunction* expr) {
}


Register FullCodeGenerator::result_register() {
  return rax;
}
Register FullCodeGenerator::result_register() { return rax; }


Register FullCodeGenerator::context_register() {
  return rsi;
}


void FullCodeGenerator::EmitCallIC(Handle<Code> ic, RelocInfo::Mode mode) {
  ASSERT(mode == RelocInfo::CODE_TARGET ||
         mode == RelocInfo::CODE_TARGET_CONTEXT);
  __ call(ic, mode);

  // If we're calling a (keyed) load or store stub, we have to mark
  // the call as containing no inlined code so we will not attempt to
  // patch it.
  switch (ic->kind()) {
    case Code::LOAD_IC:
    case Code::KEYED_LOAD_IC:
    case Code::STORE_IC:
    case Code::KEYED_STORE_IC:
      __ nop();  // Signals no inlined code.
      break;
    default:
      // Do nothing.
      break;
  }
}
Register FullCodeGenerator::context_register() { return rsi; }


void FullCodeGenerator::StoreToFrameField(int frame_offset, Register value) {
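The removed `EmitCallIC` helper centralized the "call, then maybe nop" pattern: after calling a (keyed) load or store IC it emitted a nop so the IC patcher sees there is no inlined fast path to patch, and for all other code kinds it emitted nothing. The revert inlines that decision at every call site. The dispatch rule its switch encoded can be sketched as a predicate (enum values are a subset mirroring the diff, not V8's full `Code::Kind`):

```cpp
#include <cassert>

// Code kinds relevant to the switch in the removed EmitCallIC.
enum CodeKind { LOAD_IC, KEYED_LOAD_IC, STORE_IC, KEYED_STORE_IC,
                CALL_IC, STUB };

// True when a trailing nop must be emitted so the IC will not try to
// patch a (nonexistent) inlined fast path at this call site.
bool NeedsNoInlinedCodeMarker(CodeKind kind) {
  switch (kind) {
    case LOAD_IC:
    case KEYED_LOAD_IC:
    case STORE_IC:
    case KEYED_STORE_IC:
      return true;
    default:
      return false;
  }
}
```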
28
deps/v8/src/x64/ic-x64.cc
vendored
@ -730,6 +730,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
  //  -- rsp[0] : return address
  // -----------------------------------
  Label miss;
  Label index_out_of_range;

  Register receiver = rdx;
  Register index = rax;
@ -744,7 +745,7 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
                                          result,
                                          &miss,  // When not a string.
                                          &miss,  // When not a number.
                                          &miss,  // When index out of range.
                                          &index_out_of_range,
                                          STRING_INDEX_IS_ARRAY_INDEX);
  char_at_generator.GenerateFast(masm);
  __ ret(0);
@ -752,6 +753,10 @@ void KeyedLoadIC::GenerateString(MacroAssembler* masm) {
  ICRuntimeCallHelper call_helper;
  char_at_generator.GenerateSlow(masm, call_helper);

  __ bind(&index_out_of_range);
  __ LoadRoot(rax, Heap::kUndefinedValueRootIndex);
  __ ret(0);

  __ bind(&miss);
  GenerateMiss(masm);
}
@ -842,7 +847,7 @@ void KeyedLoadIC::GenerateExternalArray(MacroAssembler* masm,
      // For the UnsignedInt array type, we need to see whether
      // the value can be represented in a Smi. If not, we need to convert
      // it to a HeapNumber.
      NearLabel box_int;
      Label box_int;

      __ JumpIfUIntNotValidSmiValue(rcx, &box_int);

@ -1027,7 +1032,7 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm) {
    // No more bailouts to slow case on this path, so key not needed.
    __ SmiToInteger32(rdi, rax);
    {  // Clamp the value to [0..255].
      NearLabel done;
      Label done;
      __ testl(rdi, Immediate(0xFFFFFF00));
      __ j(zero, &done);
      __ setcc(negative, rdi);  // 1 if negative, 0 if positive.
@ -1077,7 +1082,7 @@ void KeyedStoreIC::GenerateGeneric(MacroAssembler* masm) {
  // rax: value
  // rbx: receiver's elements array (a FixedArray)
  // rcx: index
  NearLabel non_smi_value;
  Label non_smi_value;
  __ movq(FieldOperand(rbx, rcx, times_pointer_size, FixedArray::kHeaderSize),
          rax);
  __ JumpIfNotSmi(rax, &non_smi_value);
@ -1099,7 +1104,7 @@ void KeyedStoreIC::GenerateExternalArray(MacroAssembler* masm,
  //  -- rdx     : receiver
  //  -- rsp[0]  : return address
  // -----------------------------------
  Label slow;
  Label slow, check_heap_number;

  // Check that the object isn't a smi.
  __ JumpIfSmi(rdx, &slow);
@ -1140,7 +1145,6 @@ void KeyedStoreIC::GenerateExternalArray(MacroAssembler* masm,
  // rdx: receiver (a JSObject)
  // rbx: elements array
  // rdi: untagged key
  NearLabel check_heap_number;
  __ JumpIfNotSmi(rax, &check_heap_number);
  // No more branches to slow case on this path. Key and receiver not needed.
  __ SmiToInteger32(rdx, rax);
@ -1484,7 +1488,7 @@ void KeyedCallIC::GenerateMegamorphic(MacroAssembler* masm, int argc) {
  // Get the receiver of the function from the stack; 1 ~ return address.
  __ movq(rdx, Operand(rsp, (argc + 1) * kPointerSize));

  Label do_call, slow_call, slow_load;
  Label do_call, slow_call, slow_load, slow_reload_receiver;
  Label check_number_dictionary, check_string, lookup_monomorphic_cache;
  Label index_smi, index_string;

@ -1726,14 +1730,6 @@ bool LoadIC::PatchInlinedLoad(Address address, Object* map, int offset) {
}


bool LoadIC::PatchInlinedContextualLoad(Address address,
                                        Object* map,
                                        Object* cell) {
  // TODO(<bug#>): implement this.
  return false;
}


// The offset from the inlined patch site to the start of the inlined
// store instruction.
const int StoreIC::kOffsetToStoreInstruction = 20;
@ -1884,7 +1880,7 @@ void StoreIC::GenerateNormal(MacroAssembler* masm) {
  //  -- rsp[0] : return address
  // -----------------------------------

  Label miss;
  Label miss, restore_miss;

  GenerateStringDictionaryReceiverCheck(masm, rdx, rbx, rdi, &miss);

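In `GenerateGeneric`'s pixel-array path above, `testl(rdi, Immediate(0xFFFFFF00))` detects any value outside [0..255] with a single test (any high bit set means out of range), and the sign then decides whether to clamp to 0 or 255. The equivalent scalar logic (a sketch of the semantics; the stub itself continues branchlessly with `setcc`):

```cpp
#include <cassert>
#include <cstdint>

// Clamp a 32-bit integer to the [0..255] range of a pixel element.
int ClampToByte(int32_t value) {
  // High 24 bits all zero <=> value already in [0..255]; this is the
  // testl(rdi, Immediate(0xFFFFFF00)) fast path.
  if ((static_cast<uint32_t>(value) & 0xFFFFFF00u) == 0) return value;
  // Out of range: negative values clamp to 0, large values to 255.
  return value < 0 ? 0 : 255;
}
```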
752
deps/v8/src/x64/macro-assembler-x64.cc
vendored
@ -1,4 +1,4 @@
// Copyright 2010 the V8 project authors. All rights reserved.
// Copyright 2009 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
@ -85,7 +85,7 @@ void MacroAssembler::RecordWriteHelper(Register object,
                                       Register scratch) {
  if (FLAG_debug_code) {
    // Check that the object is not in new space.
    NearLabel not_in_new_space;
    Label not_in_new_space;
    InNewSpace(object, scratch, not_equal, &not_in_new_space);
    Abort("new-space object passed to RecordWriteHelper");
    bind(&not_in_new_space);
@ -171,7 +171,7 @@ void MacroAssembler::RecordWriteNonSmi(Register object,
  Label done;

  if (FLAG_debug_code) {
    NearLabel okay;
    Label okay;
    JumpIfNotSmi(object, &okay);
    Abort("MacroAssembler::RecordWriteNonSmi cannot deal with smis");
    bind(&okay);
@ -221,6 +221,42 @@ void MacroAssembler::RecordWriteNonSmi(Register object,
  }
}


void MacroAssembler::InNewSpace(Register object,
                                Register scratch,
                                Condition cc,
                                Label* branch) {
  if (Serializer::enabled()) {
    // Can't do arithmetic on external references if it might get serialized.
    // The mask isn't really an address. We load it as an external reference in
    // case the size of the new space is different between the snapshot maker
    // and the running system.
    if (scratch.is(object)) {
      movq(kScratchRegister, ExternalReference::new_space_mask());
      and_(scratch, kScratchRegister);
    } else {
      movq(scratch, ExternalReference::new_space_mask());
      and_(scratch, object);
    }
    movq(kScratchRegister, ExternalReference::new_space_start());
    cmpq(scratch, kScratchRegister);
    j(cc, branch);
  } else {
    ASSERT(is_int32(static_cast<int64_t>(Heap::NewSpaceMask())));
    intptr_t new_space_start =
        reinterpret_cast<intptr_t>(Heap::NewSpaceStart());
    movq(kScratchRegister, -new_space_start, RelocInfo::NONE);
    if (scratch.is(object)) {
      addq(scratch, kScratchRegister);
    } else {
      lea(scratch, Operand(object, kScratchRegister, times_1, 0));
    }
    and_(scratch, Immediate(static_cast<int32_t>(Heap::NewSpaceMask())));
    j(cc, branch);
  }
}


void MacroAssembler::Assert(Condition cc, const char* msg) {
  if (FLAG_debug_code) Check(cc, msg);
}
@ -228,7 +264,7 @@ void MacroAssembler::Assert(Condition cc, const char* msg) {

void MacroAssembler::AssertFastElements(Register elements) {
  if (FLAG_debug_code) {
    NearLabel ok;
    Label ok;
    CompareRoot(FieldOperand(elements, HeapObject::kMapOffset),
                Heap::kFixedArrayMapRootIndex);
    j(equal, &ok);
@ -242,7 +278,7 @@ void MacroAssembler::AssertFastElements(Register elements) {


void MacroAssembler::Check(Condition cc, const char* msg) {
  NearLabel L;
  Label L;
  j(cc, &L);
  Abort(msg);
  // will not return here
@ -255,7 +291,7 @@ void MacroAssembler::CheckStackAlignment() {
  int frame_alignment_mask = frame_alignment - 1;
  if (frame_alignment > kPointerSize) {
    ASSERT(IsPowerOf2(frame_alignment));
    NearLabel alignment_as_expected;
    Label alignment_as_expected;
    testq(rsp, Immediate(frame_alignment_mask));
    j(zero, &alignment_as_expected);
    // Abort if stack is not aligned.
@ -268,7 +304,7 @@ void MacroAssembler::CheckStackAlignment() {
void MacroAssembler::NegativeZeroTest(Register result,
                                      Register op,
                                      Label* then_label) {
  NearLabel ok;
  Label ok;
  testl(result, result);
  j(not_zero, &ok);
  testl(op, op);
@ -606,6 +642,8 @@ void MacroAssembler::Set(const Operand& dst, int64_t x) {
// ----------------------------------------------------------------------------
// Smi tagging, untagging and tag detection.

static int kSmiShift = kSmiTagSize + kSmiShiftSize;
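The restored `kSmiShift = kSmiTagSize + kSmiShiftSize` and the two-argument `Integer32ToSmi` rely on x64's "long smi" encoding: the 32-bit payload, shifted into the upper half of a 64-bit word, always fits with a zero tag bit, which is why the comment says a 32-bit integer always fits and the `on_overflow` label is never taken. A sketch of the tag and untag arithmetic (constants illustrative of the x64 layout):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative x64 constants: 1-bit tag plus 31-bit shift puts the
// payload in the upper 32 bits of the word.
const int kSmiTagSize = 1;
const int kSmiShiftSize = 31;
const int kSmiShift = kSmiTagSize + kSmiShiftSize;  // 32

int64_t Integer32ToSmi(int32_t value) {
  // Shift via uint64_t to sidestep signed-shift pitfalls; every int32 is
  // representable, so no overflow check is needed.
  return static_cast<int64_t>(
      static_cast<uint64_t>(static_cast<int64_t>(value)) << kSmiShift);
}

int32_t SmiToInteger32(int64_t smi) {
  return static_cast<int32_t>(smi >> kSmiShift);  // Arithmetic shift back.
}
```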
Register MacroAssembler::GetSmiConstant(Smi* source) {
  int value = source->value();
  if (value == 0) {
@ -628,7 +666,7 @@ void MacroAssembler::LoadSmiConstant(Register dst, Smi* source) {
    if (allow_stub_calls()) {
      Assert(equal, "Uninitialized kSmiConstantRegister");
    } else {
      NearLabel ok;
      Label ok;
      j(equal, &ok);
      int3();
      bind(&ok);
@ -678,7 +716,6 @@ void MacroAssembler::LoadSmiConstant(Register dst, Smi* source) {
  }
}


void MacroAssembler::Integer32ToSmi(Register dst, Register src) {
  ASSERT_EQ(0, kSmiTag);
  if (!dst.is(src)) {
@ -688,10 +725,22 @@ void MacroAssembler::Integer32ToSmi(Register dst, Register src) {
}


void MacroAssembler::Integer32ToSmi(Register dst,
                                    Register src,
                                    Label* on_overflow) {
  ASSERT_EQ(0, kSmiTag);
  // 32-bit integer always fits in a long smi.
  if (!dst.is(src)) {
    movl(dst, src);
  }
  shl(dst, Immediate(kSmiShift));
}


void MacroAssembler::Integer32ToSmiField(const Operand& dst, Register src) {
  if (FLAG_debug_code) {
    testb(dst, Immediate(0x01));
    NearLabel ok;
    Label ok;
    j(zero, &ok);
    if (allow_stub_calls()) {
      Abort("Integer32ToSmiField writing to non-smi location");
@ -900,6 +949,180 @@ Condition MacroAssembler::CheckUInteger32ValidSmiValue(Register src) {
}


void MacroAssembler::SmiNeg(Register dst, Register src, Label* on_smi_result) {
  if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));
    movq(kScratchRegister, src);
    neg(dst);  // Low 32 bits are retained as zero by negation.
    // Test if result is zero or Smi::kMinValue.
    cmpq(dst, kScratchRegister);
    j(not_equal, on_smi_result);
    movq(src, kScratchRegister);
  } else {
    movq(dst, src);
    neg(dst);
    cmpq(dst, src);
    // If the result is zero or Smi::kMinValue, negation failed to create a smi.
    j(not_equal, on_smi_result);
  }
}
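`SmiNeg`'s compare-after-negate works because only two two's-complement values satisfy `x == -x`: zero and the minimum value, whose negation overflows back to itself. If the compare finds input and result unequal, the negation produced a valid smi. The same logic on plain 32-bit payloads (an assumed helper for illustration, not V8 API):

```cpp
#include <cassert>
#include <cstdint>

// Writes the negation and returns true when it is representable; mirrors
// the fall-through vs. j(not_equal, on_smi_result) structure of SmiNeg.
bool SmiNeg(int32_t value, int32_t* result) {
  // Negate via unsigned arithmetic so INT32_MIN wraps instead of being UB.
  int32_t negated = static_cast<int32_t>(0u - static_cast<uint32_t>(value));
  if (negated == value) {
    // Only 0 and INT32_MIN negate to themselves; both need the slow path
    // (JavaScript distinguishes -0, and -INT32_MIN overflows).
    return false;
  }
  *result = negated;
  return true;
}
```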


void MacroAssembler::SmiAdd(Register dst,
                            Register src1,
                            Register src2,
                            Label* on_not_smi_result) {
  ASSERT(!dst.is(src2));
  if (on_not_smi_result == NULL) {
    // No overflow checking. Use only when it's known that
    // overflowing is impossible.
    if (dst.is(src1)) {
      addq(dst, src2);
    } else {
      movq(dst, src1);
      addq(dst, src2);
    }
    Assert(no_overflow, "Smi addition overflow");
  } else if (dst.is(src1)) {
    movq(kScratchRegister, src1);
    addq(kScratchRegister, src2);
    j(overflow, on_not_smi_result);
    movq(dst, kScratchRegister);
  } else {
    movq(dst, src1);
    addq(dst, src2);
    j(overflow, on_not_smi_result);
  }
}


void MacroAssembler::SmiSub(Register dst,
                            Register src1,
                            Register src2,
                            Label* on_not_smi_result) {
  ASSERT(!dst.is(src2));
  if (on_not_smi_result == NULL) {
    // No overflow checking. Use only when it's known that
    // overflowing is impossible (e.g., subtracting two positive smis).
    if (dst.is(src1)) {
      subq(dst, src2);
    } else {
      movq(dst, src1);
      subq(dst, src2);
    }
    Assert(no_overflow, "Smi subtraction overflow");
  } else if (dst.is(src1)) {
    cmpq(dst, src2);
    j(overflow, on_not_smi_result);
    subq(dst, src2);
  } else {
    movq(dst, src1);
    subq(dst, src2);
    j(overflow, on_not_smi_result);
  }
}


void MacroAssembler::SmiSub(Register dst,
                            Register src1,
                            const Operand& src2,
                            Label* on_not_smi_result) {
  if (on_not_smi_result == NULL) {
    // No overflow checking. Use only when it's known that
    // overflowing is impossible (e.g., subtracting two positive smis).
    if (dst.is(src1)) {
      subq(dst, src2);
    } else {
      movq(dst, src1);
      subq(dst, src2);
    }
    Assert(no_overflow, "Smi subtraction overflow");
  } else if (dst.is(src1)) {
    movq(kScratchRegister, src2);
    cmpq(src1, kScratchRegister);
    j(overflow, on_not_smi_result);
    subq(src1, kScratchRegister);
  } else {
    movq(dst, src1);
    subq(dst, src2);
    j(overflow, on_not_smi_result);
  }
}

void MacroAssembler::SmiMul(Register dst,
                            Register src1,
                            Register src2,
                            Label* on_not_smi_result) {
  ASSERT(!dst.is(src2));
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));

  if (dst.is(src1)) {
    Label failure, zero_correct_result;
    movq(kScratchRegister, src1);  // Create backup for later testing.
    SmiToInteger64(dst, src1);
    imul(dst, src2);
    j(overflow, &failure);

    // Check for negative zero result. If product is zero, and one
    // argument is negative, go to slow case.
    Label correct_result;
    testq(dst, dst);
    j(not_zero, &correct_result);

    movq(dst, kScratchRegister);
    xor_(dst, src2);
    j(positive, &zero_correct_result);  // Result was positive zero.

    bind(&failure);  // Reused failure exit, restores src1.
    movq(src1, kScratchRegister);
    jmp(on_not_smi_result);

    bind(&zero_correct_result);
    xor_(dst, dst);

    bind(&correct_result);
  } else {
    SmiToInteger64(dst, src1);
    imul(dst, src2);
    j(overflow, on_not_smi_result);
    // Check for negative zero result. If product is zero, and one
    // argument is negative, go to slow case.
    Label correct_result;
    testq(dst, dst);
    j(not_zero, &correct_result);
    // One of src1 and src2 is zero, the check whether the other is
    // negative.
    movq(kScratchRegister, src1);
    xor_(kScratchRegister, src2);
    j(negative, on_not_smi_result);
    bind(&correct_result);
  }
}
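The negative-zero check in `SmiMul` exists because smis cannot represent JavaScript's -0: when the integer product is zero but exactly one operand was negative, the true result is -0 and must be produced as a heap number on the slow path. The predicate the stub computes with `testq` and the sign of `src1 ^ src2` can be sketched as:

```cpp
#include <cassert>
#include <cstdint>

// True when an integer multiply of two smi payloads must fall back to
// the slow path to produce -0 (sketch of SmiMul's zero/sign test).
bool MulNeedsNegativeZero(int32_t a, int32_t b) {
  int64_t product = static_cast<int64_t>(a) * b;  // Cannot overflow int64.
  if (product != 0) return false;
  // Product is zero: the sign bit of (a ^ b) is set exactly when one
  // operand was negative, i.e. when the JavaScript result is -0.
  return (a ^ b) < 0;
}
```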


void MacroAssembler::SmiTryAddConstant(Register dst,
                                       Register src,
                                       Smi* constant,
                                       Label* on_not_smi_result) {
  // Does not assume that src is a smi.
  ASSERT_EQ(static_cast<int>(1), static_cast<int>(kSmiTagMask));
  ASSERT_EQ(0, kSmiTag);
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src.is(kScratchRegister));

  JumpIfNotSmi(src, on_not_smi_result);
  Register tmp = (dst.is(src) ? kScratchRegister : dst);
  LoadSmiConstant(tmp, constant);
  addq(tmp, src);
  j(overflow, on_not_smi_result);
  if (dst.is(src)) {
    movq(dst, tmp);
  }
}


void MacroAssembler::SmiAddConstant(Register dst, Register src, Smi* constant) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
@ -956,6 +1179,29 @@ void MacroAssembler::SmiAddConstant(const Operand& dst, Smi* constant) {
}


void MacroAssembler::SmiAddConstant(Register dst,
                                    Register src,
                                    Smi* constant,
                                    Label* on_not_smi_result) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
      movq(dst, src);
    }
  } else if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));

    LoadSmiConstant(kScratchRegister, constant);
    addq(kScratchRegister, src);
    j(overflow, on_not_smi_result);
    movq(dst, kScratchRegister);
  } else {
    LoadSmiConstant(dst, constant);
    addq(dst, src);
    j(overflow, on_not_smi_result);
  }
}


void MacroAssembler::SmiSubConstant(Register dst, Register src, Smi* constant) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
@ -980,48 +1226,165 @@ void MacroAssembler::SmiSubConstant(Register dst, Register src, Smi* constant) {
}


void MacroAssembler::SmiAdd(Register dst,
void MacroAssembler::SmiSubConstant(Register dst,
                                    Register src,
                                    Smi* constant,
                                    Label* on_not_smi_result) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
      movq(dst, src);
    }
  } else if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));
    if (constant->value() == Smi::kMinValue) {
      // Subtracting min-value from any non-negative value will overflow.
      // We test the non-negativeness before doing the subtraction.
      testq(src, src);
      j(not_sign, on_not_smi_result);
      LoadSmiConstant(kScratchRegister, constant);
      subq(dst, kScratchRegister);
    } else {
      // Subtract by adding the negation.
      LoadSmiConstant(kScratchRegister, Smi::FromInt(-constant->value()));
|
||||
addq(kScratchRegister, dst);
|
||||
j(overflow, on_not_smi_result);
|
||||
movq(dst, kScratchRegister);
|
||||
}
|
||||
} else {
|
||||
if (constant->value() == Smi::kMinValue) {
|
||||
// Subtracting min-value from any non-negative value will overflow.
|
||||
// We test the non-negativeness before doing the subtraction.
|
||||
testq(src, src);
|
||||
j(not_sign, on_not_smi_result);
|
||||
LoadSmiConstant(dst, constant);
|
||||
// Adding and subtracting the min-value gives the same result, it only
|
||||
// differs on the overflow bit, which we don't check here.
|
||||
addq(dst, src);
|
||||
} else {
|
||||
// Subtract by adding the negation.
|
||||
LoadSmiConstant(dst, Smi::FromInt(-(constant->value())));
|
||||
addq(dst, src);
|
||||
j(overflow, on_not_smi_result);
|
||||
}
|
||||
}
|
||||
}
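The `Smi::kMinValue` special case above comes from two's complement: the minimum value has no representable negation, so "subtract by adding the negation" cannot work for it, and subtracting it from any value with the sign bit clear always overflows. A small sketch on a 32-bit payload (the helper names and the use of `INT32_MIN` as a stand-in for `Smi::kMinValue` are assumptions for illustration):

```cpp
#include <cassert>
#include <cstdint>

const int32_t kMin = INT32_MIN;  // stand-in for Smi::kMinValue (32-bit payload)

// -kMin wraps back to kMin's own bit pattern, so negation is unusable here.
uint32_t NegateMinUnsigned() {
  return uint32_t(0) - uint32_t(kMin);  // modular arithmetic, no UB
}

// x - kMin == x + 2^31 (mod 2^32): it overflows the signed range exactly
// when x is non-negative, which is why the macro tests the sign up front.
bool SubMinValueOverflows(int32_t x) {
  return x >= 0;
}
```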


void MacroAssembler::SmiDiv(Register dst,
                            Register src1,
                            Register src2) {
  // No overflow checking. Use only when it's known that
  // overflowing is impossible.
  ASSERT(!dst.is(src2));
  if (dst.is(src1)) {
    addq(dst, src2);
                            Register src2,
                            Label* on_not_smi_result) {
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src2.is(rax));
  ASSERT(!src2.is(rdx));
  ASSERT(!src1.is(rdx));

  // Check for 0 divisor (result is +/-Infinity).
  Label positive_divisor;
  testq(src2, src2);
  j(zero, on_not_smi_result);

  if (src1.is(rax)) {
    movq(kScratchRegister, src1);
  }
  SmiToInteger32(rax, src1);
  // We need to rule out dividing Smi::kMinValue by -1, since that would
  // overflow in idiv and raise an exception.
  // We combine this with negative zero test (negative zero only happens
  // when dividing zero by a negative number).

  // We overshoot a little and go to slow case if we divide min-value
  // by any negative value, not just -1.
  Label safe_div;
  testl(rax, Immediate(0x7fffffff));
  j(not_zero, &safe_div);
  testq(src2, src2);
  if (src1.is(rax)) {
    j(positive, &safe_div);
    movq(src1, kScratchRegister);
    jmp(on_not_smi_result);
  } else {
    movq(dst, src1);
    addq(dst, src2);
  }
  Assert(no_overflow, "Smi addition overflow");
    j(negative, on_not_smi_result);
  }
  bind(&safe_div);


void MacroAssembler::SmiSub(Register dst, Register src1, Register src2) {
  // No overflow checking. Use only when it's known that
  // overflowing is impossible (e.g., subtracting two positive smis).
  ASSERT(!dst.is(src2));
  if (dst.is(src1)) {
    subq(dst, src2);
  SmiToInteger32(src2, src2);
  // Sign extend src1 into edx:eax.
  cdq();
  idivl(src2);
  Integer32ToSmi(src2, src2);
  // Check that the remainder is zero.
  testl(rdx, rdx);
  if (src1.is(rax)) {
    Label smi_result;
    j(zero, &smi_result);
    movq(src1, kScratchRegister);
    jmp(on_not_smi_result);
    bind(&smi_result);
  } else {
    movq(dst, src1);
    subq(dst, src2);
    j(not_zero, on_not_smi_result);
  }
  Assert(no_overflow, "Smi subtraction overflow");
  if (!dst.is(src1) && src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  Integer32ToSmi(dst, rax);
}


void MacroAssembler::SmiSub(Register dst,
void MacroAssembler::SmiMod(Register dst,
                            Register src1,
                            const Operand& src2) {
  // No overflow checking. Use only when it's known that
  // overflowing is impossible (e.g., subtracting two positive smis).
  if (dst.is(src1)) {
    subq(dst, src2);
  } else {
    movq(dst, src1);
    subq(dst, src2);
                            Register src2,
                            Label* on_not_smi_result) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!src2.is(rax));
  ASSERT(!src2.is(rdx));
  ASSERT(!src1.is(rdx));
  ASSERT(!src1.is(src2));

  testq(src2, src2);
  j(zero, on_not_smi_result);

  if (src1.is(rax)) {
    movq(kScratchRegister, src1);
  }
  Assert(no_overflow, "Smi subtraction overflow");
  SmiToInteger32(rax, src1);
  SmiToInteger32(src2, src2);

  // Test for the edge case of dividing Smi::kMinValue by -1 (will overflow).
  Label safe_div;
  cmpl(rax, Immediate(Smi::kMinValue));
  j(not_equal, &safe_div);
  cmpl(src2, Immediate(-1));
  j(not_equal, &safe_div);
  // Retag inputs and go slow case.
  Integer32ToSmi(src2, src2);
  if (src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  jmp(on_not_smi_result);
  bind(&safe_div);

  // Sign extend eax into edx:eax.
  cdq();
  idivl(src2);
  // Restore smi tags on inputs.
  Integer32ToSmi(src2, src2);
  if (src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  // Check for a negative zero result. If the result is zero, and the
  // dividend is negative, go slow to return a floating point negative zero.
  Label smi_result;
  testl(rdx, rdx);
  j(not_zero, &smi_result);
  testq(src1, src1);
  j(negative, on_not_smi_result);
  bind(&smi_result);
  Integer32ToSmi(dst, rdx);
}
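Both division routines above guard the one quotient the hardware cannot produce: `idiv` faults when dividing the minimum value by -1, because the mathematically correct quotient (+2^31 for 32-bit operands) does not fit. A sketch of the guard on plain 32-bit integers (helper name is hypothetical; V8 bails to its slow case instead of returning an empty result):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Sketch of the pre-division checks in SmiDiv/SmiMod: a zero divisor means
// the JavaScript result is +/-Infinity or NaN, and INT32_MIN / -1 would
// overflow (and fault in idiv), so both go to the slow case.
std::optional<int32_t> SafeDiv32(int32_t dividend, int32_t divisor) {
  if (divisor == 0) return std::nullopt;  // +/-Infinity: slow case
  if (dividend == INT32_MIN && divisor == -1) return std::nullopt;  // overflow
  return dividend / divisor;  // truncates toward zero, like idiv
}
```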


@@ -1117,6 +1480,25 @@ void MacroAssembler::SmiShiftArithmeticRightConstant(Register dst,
}


void MacroAssembler::SmiShiftLogicalRightConstant(Register dst,
                                                  Register src,
                                                  int shift_value,
                                                  Label* on_not_smi_result) {
  // Logic right shift interprets its result as an *unsigned* number.
  if (dst.is(src)) {
    UNIMPLEMENTED();  // Not used.
  } else {
    movq(dst, src);
    if (shift_value == 0) {
      testq(dst, dst);
      j(negative, on_not_smi_result);
    }
    shr(dst, Immediate(shift_value + kSmiShift));
    shl(dst, Immediate(kSmiShift));
  }
}


void MacroAssembler::SmiShiftLeftConstant(Register dst,
                                          Register src,
                                          int shift_value) {
@@ -1133,7 +1515,7 @@ void MacroAssembler::SmiShiftLeft(Register dst,
                                  Register src1,
                                  Register src2) {
  ASSERT(!dst.is(rcx));
  NearLabel result_ok;
  Label result_ok;
  // Untag shift amount.
  if (!dst.is(src1)) {
    movq(dst, src1);
@@ -1145,6 +1527,42 @@ void MacroAssembler::SmiShiftLeft(Register dst,
}


void MacroAssembler::SmiShiftLogicalRight(Register dst,
                                          Register src1,
                                          Register src2,
                                          Label* on_not_smi_result) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(rcx));
  Label result_ok;
  if (src1.is(rcx) || src2.is(rcx)) {
    movq(kScratchRegister, rcx);
  }
  if (!dst.is(src1)) {
    movq(dst, src1);
  }
  SmiToInteger32(rcx, src2);
  orl(rcx, Immediate(kSmiShift));
  shr_cl(dst);  // Shift is rcx modulo 0x1f + 32.
  shl(dst, Immediate(kSmiShift));
  testq(dst, dst);
  if (src1.is(rcx) || src2.is(rcx)) {
    Label positive_result;
    j(positive, &positive_result);
    if (src1.is(rcx)) {
      movq(src1, kScratchRegister);
    } else {
      movq(src2, kScratchRegister);
    }
    jmp(on_not_smi_result);
    bind(&positive_result);
  } else {
    j(negative, on_not_smi_result);  // src2 was zero and src1 negative.
  }
}


void MacroAssembler::SmiShiftArithmeticRight(Register dst,
                                             Register src1,
                                             Register src2) {
@@ -1172,6 +1590,44 @@ void MacroAssembler::SmiShiftArithmeticRight(Register dst,
}


void MacroAssembler::SelectNonSmi(Register dst,
                                  Register src1,
                                  Register src2,
                                  Label* on_not_smis) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(src1));
  ASSERT(!dst.is(src2));
  // Both operands must not be smis.
#ifdef DEBUG
  if (allow_stub_calls()) {  // Check contains a stub call.
    Condition not_both_smis = NegateCondition(CheckBothSmi(src1, src2));
    Check(not_both_smis, "Both registers were smis in SelectNonSmi.");
  }
#endif
  ASSERT_EQ(0, kSmiTag);
  ASSERT_EQ(0, Smi::FromInt(0));
  movl(kScratchRegister, Immediate(kSmiTagMask));
  and_(kScratchRegister, src1);
  testl(kScratchRegister, src2);
  // If non-zero then both are non-smis.
  j(not_zero, on_not_smis);

  // Exactly one operand is a smi.
  ASSERT_EQ(1, static_cast<int>(kSmiTagMask));
  // kScratchRegister still holds src1 & kSmiTag, which is either zero or one.
  subq(kScratchRegister, Immediate(1));
  // If src1 is a smi, then the scratch register is all 1s, else it is all 0s.
  movq(dst, src1);
  xor_(dst, src2);
  and_(dst, kScratchRegister);
  // If src1 is a smi, dst holds src1 ^ src2, else it is zero.
  xor_(dst, src1);
  // If src1 is a smi, dst is src2, else it is src1, i.e., the non-smi.
}
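The branch-free select in SelectNonSmi can be replayed on plain integers. A sketch under the tagging convention the code above asserts (kSmiTag == 0, so a smi has its low bit clear and a heap pointer has it set; the function name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Given exactly one smi among src1 and src2, return the non-smi without
// branching: build an all-ones/all-zeros mask from src1's tag bit, then use
// the xor-swap identity (a ^ b) ^ a == b to pick the right operand.
uint64_t SelectNonSmiSketch(uint64_t src1, uint64_t src2) {
  uint64_t mask = (src1 & 1) - 1;       // all 1s if src1 is a smi, else all 0s
  uint64_t dst = (src1 ^ src2) & mask;  // src1 ^ src2 if src1 is a smi, else 0
  return dst ^ src1;                    // src2 if src1 is a smi, else src1
}
```

Here `0x40` plays the role of a smi (even) and `0x1001` a tagged heap pointer (odd).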


SmiIndex MacroAssembler::SmiToIndex(Register dst,
                                    Register src,
                                    int shift) {
@@ -1207,6 +1663,138 @@ SmiIndex MacroAssembler::SmiToNegativeIndex(Register dst,
}


void MacroAssembler::JumpIfSmi(Register src, Label* on_smi) {
  ASSERT_EQ(0, kSmiTag);
  Condition smi = CheckSmi(src);
  j(smi, on_smi);
}


void MacroAssembler::JumpIfNotSmi(Register src, Label* on_not_smi) {
  Condition smi = CheckSmi(src);
  j(NegateCondition(smi), on_not_smi);
}


void MacroAssembler::JumpIfNotPositiveSmi(Register src,
                                          Label* on_not_positive_smi) {
  Condition positive_smi = CheckPositiveSmi(src);
  j(NegateCondition(positive_smi), on_not_positive_smi);
}


void MacroAssembler::JumpIfSmiEqualsConstant(Register src,
                                             Smi* constant,
                                             Label* on_equals) {
  SmiCompare(src, constant);
  j(equal, on_equals);
}


void MacroAssembler::JumpIfNotValidSmiValue(Register src, Label* on_invalid) {
  Condition is_valid = CheckInteger32ValidSmiValue(src);
  j(NegateCondition(is_valid), on_invalid);
}


void MacroAssembler::JumpIfUIntNotValidSmiValue(Register src,
                                                Label* on_invalid) {
  Condition is_valid = CheckUInteger32ValidSmiValue(src);
  j(NegateCondition(is_valid), on_invalid);
}


void MacroAssembler::JumpIfNotBothSmi(Register src1, Register src2,
                                      Label* on_not_both_smi) {
  Condition both_smi = CheckBothSmi(src1, src2);
  j(NegateCondition(both_smi), on_not_both_smi);
}


void MacroAssembler::JumpIfNotBothPositiveSmi(Register src1, Register src2,
                                              Label* on_not_both_smi) {
  Condition both_smi = CheckBothPositiveSmi(src1, src2);
  j(NegateCondition(both_smi), on_not_both_smi);
}


void MacroAssembler::JumpIfNotBothSequentialAsciiStrings(Register first_object,
                                                         Register second_object,
                                                         Register scratch1,
                                                         Register scratch2,
                                                         Label* on_fail) {
  // Check that both objects are not smis.
  Condition either_smi = CheckEitherSmi(first_object, second_object);
  j(either_smi, on_fail);

  // Load instance type for both strings.
  movq(scratch1, FieldOperand(first_object, HeapObject::kMapOffset));
  movq(scratch2, FieldOperand(second_object, HeapObject::kMapOffset));
  movzxbl(scratch1, FieldOperand(scratch1, Map::kInstanceTypeOffset));
  movzxbl(scratch2, FieldOperand(scratch2, Map::kInstanceTypeOffset));

  // Check that both are flat ascii strings.
  ASSERT(kNotStringTag != 0);
  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
  const int kFlatAsciiStringTag = ASCII_STRING_TYPE;

  andl(scratch1, Immediate(kFlatAsciiStringMask));
  andl(scratch2, Immediate(kFlatAsciiStringMask));
  // Interleave the bits to check both scratch1 and scratch2 in one test.
  ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3));
  lea(scratch1, Operand(scratch1, scratch2, times_8, 0));
  cmpl(scratch1,
       Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3)));
  j(not_equal, on_fail);
}
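The `lea` trick above packs two masked instance types into disjoint bit ranges of one register (`scratch1 + scratch2 * 8`), which the `ASSERT_EQ(0, mask & (mask << 3))` makes safe, so a single compare covers both strings. A sketch with stand-in constants (the real `kFlatAsciiStringMask` and `ASCII_STRING_TYPE` values differ; these are illustrative only):

```cpp
#include <cassert>

const int kMask = 0x07;  // stand-in mask; satisfies kMask & (kMask << 3) == 0
const int kTag = 0x04;   // stand-in tag value within the mask

// Combine two masked values into one register and compare once, mirroring
// lea(scratch1, Operand(scratch1, scratch2, times_8, 0)) plus a single cmpl.
bool BothMatch(int type1, int type2) {
  int a = type1 & kMask;
  int b = type2 & kMask;
  int combined = a + b * 8;               // fields at bits 0..2 and 3..5
  return combined == kTag + (kTag << 3);  // one comparison checks both
}
```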


void MacroAssembler::JumpIfInstanceTypeIsNotSequentialAscii(
    Register instance_type,
    Register scratch,
    Label *failure) {
  if (!scratch.is(instance_type)) {
    movl(scratch, instance_type);
  }

  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;

  andl(scratch, Immediate(kFlatAsciiStringMask));
  cmpl(scratch, Immediate(kStringTag | kSeqStringTag | kAsciiStringTag));
  j(not_equal, failure);
}


void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii(
    Register first_object_instance_type,
    Register second_object_instance_type,
    Register scratch1,
    Register scratch2,
    Label* on_fail) {
  // Load instance type for both strings.
  movq(scratch1, first_object_instance_type);
  movq(scratch2, second_object_instance_type);

  // Check that both are flat ascii strings.
  ASSERT(kNotStringTag != 0);
  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
  const int kFlatAsciiStringTag = ASCII_STRING_TYPE;

  andl(scratch1, Immediate(kFlatAsciiStringMask));
  andl(scratch2, Immediate(kFlatAsciiStringMask));
  // Interleave the bits to check both scratch1 and scratch2 in one test.
  ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3));
  lea(scratch1, Operand(scratch1, scratch2, times_8, 0));
  cmpl(scratch1,
       Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3)));
  j(not_equal, on_fail);
}


void MacroAssembler::Move(Register dst, Handle<Object> source) {
  ASSERT(!source->IsFailure());
  if (source->IsSmi()) {
@@ -1315,6 +1903,7 @@ void MacroAssembler::Call(Address destination, RelocInfo::Mode rmode) {

void MacroAssembler::Call(Handle<Code> code_object, RelocInfo::Mode rmode) {
  ASSERT(RelocInfo::IsCodeTarget(rmode));
  WriteRecordedPositions();
  call(code_object, rmode);
}

@@ -1405,7 +1994,7 @@ void MacroAssembler::CheckMap(Register obj,


void MacroAssembler::AbortIfNotNumber(Register object) {
  NearLabel ok;
  Label ok;
  Condition is_smi = CheckSmi(object);
  j(is_smi, &ok);
  Cmp(FieldOperand(object, HeapObject::kMapOffset),
@@ -1416,14 +2005,14 @@ void MacroAssembler::AbortIfNotNumber(Register object) {


void MacroAssembler::AbortIfSmi(Register object) {
  NearLabel ok;
  Label ok;
  Condition is_smi = CheckSmi(object);
  Assert(NegateCondition(is_smi), "Operand is a smi");
}


void MacroAssembler::AbortIfNotSmi(Register object) {
  NearLabel ok;
  Label ok;
  Condition is_smi = CheckSmi(object);
  Assert(is_smi, "Operand is not a smi");
}
@@ -1463,7 +2052,7 @@ void MacroAssembler::TryGetFunctionPrototype(Register function,
  j(not_equal, miss);

  // Make sure that the function has an instance prototype.
  NearLabel non_instance;
  Label non_instance;
  testb(FieldOperand(result, Map::kBitFieldOffset),
        Immediate(1 << Map::kHasNonInstancePrototype));
  j(not_zero, &non_instance);
@@ -1479,7 +2068,7 @@ void MacroAssembler::TryGetFunctionPrototype(Register function,
  j(equal, miss);

  // If the function does not have an initial map, we're done.
  NearLabel done;
  Label done;
  CmpObjectType(result, MAP_TYPE, kScratchRegister);
  j(not_equal, &done);

@@ -1544,11 +2133,76 @@ void MacroAssembler::DebugBreak() {
#endif  // ENABLE_DEBUGGER_SUPPORT


void MacroAssembler::InvokePrologue(const ParameterCount& expected,
                                    const ParameterCount& actual,
                                    Handle<Code> code_constant,
                                    Register code_register,
                                    Label* done,
                                    InvokeFlag flag) {
  bool definitely_matches = false;
  Label invoke;
  if (expected.is_immediate()) {
    ASSERT(actual.is_immediate());
    if (expected.immediate() == actual.immediate()) {
      definitely_matches = true;
    } else {
      Set(rax, actual.immediate());
      if (expected.immediate() ==
          SharedFunctionInfo::kDontAdaptArgumentsSentinel) {
        // Don't worry about adapting arguments for built-ins that
        // don't want that done. Skip adaptation code by making it look
        // like we have a match between expected and actual number of
        // arguments.
        definitely_matches = true;
      } else {
        Set(rbx, expected.immediate());
      }
    }
  } else {
    if (actual.is_immediate()) {
      // Expected is in register, actual is immediate. This is the
      // case when we invoke function values without going through the
      // IC mechanism.
      cmpq(expected.reg(), Immediate(actual.immediate()));
      j(equal, &invoke);
      ASSERT(expected.reg().is(rbx));
      Set(rax, actual.immediate());
    } else if (!expected.reg().is(actual.reg())) {
      // Both expected and actual are in (different) registers. This
      // is the case when we invoke functions using call and apply.
      cmpq(expected.reg(), actual.reg());
      j(equal, &invoke);
      ASSERT(actual.reg().is(rax));
      ASSERT(expected.reg().is(rbx));
    }
  }

  if (!definitely_matches) {
    Handle<Code> adaptor =
        Handle<Code>(Builtins::builtin(Builtins::ArgumentsAdaptorTrampoline));
    if (!code_constant.is_null()) {
      movq(rdx, code_constant, RelocInfo::EMBEDDED_OBJECT);
      addq(rdx, Immediate(Code::kHeaderSize - kHeapObjectTag));
    } else if (!code_register.is(rdx)) {
      movq(rdx, code_register);
    }

    if (flag == CALL_FUNCTION) {
      Call(adaptor, RelocInfo::CODE_TARGET);
      jmp(done);
    } else {
      Jump(adaptor, RelocInfo::CODE_TARGET);
    }
    bind(&invoke);
  }
}
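For the immediate/immediate case, InvokePrologue reduces to a small decision: call straight through when the argument counts match or when the callee opted out of adaptation, otherwise route through the arguments adaptor. A sketch of that decision (the sentinel's stand-in value is an assumption; in V8 it is `SharedFunctionInfo::kDontAdaptArgumentsSentinel`):

```cpp
#include <cassert>

const int kDontAdaptSentinel = -1;  // assumed stand-in for the V8 sentinel

// true: emit a direct call; false: go through ArgumentsAdaptorTrampoline.
bool DefinitelyMatches(int expected, int actual) {
  if (expected == actual) return true;
  if (expected == kDontAdaptSentinel) return true;  // built-in: no adaptation
  return false;
}
```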


void MacroAssembler::InvokeCode(Register code,
                                const ParameterCount& expected,
                                const ParameterCount& actual,
                                InvokeFlag flag) {
  NearLabel done;
  Label done;
  InvokePrologue(expected, actual, Handle<Code>::null(), code, &done, flag);
  if (flag == CALL_FUNCTION) {
    call(code);
@@ -1565,7 +2219,7 @@ void MacroAssembler::InvokeCode(Handle<Code> code,
                                const ParameterCount& actual,
                                RelocInfo::Mode rmode,
                                InvokeFlag flag) {
  NearLabel done;
  Label done;
  Register dummy = rax;
  InvokePrologue(expected, actual, code, dummy, &done, flag);
  if (flag == CALL_FUNCTION) {
795
deps/v8/src/x64/macro-assembler-x64.h
vendored
@@ -91,11 +91,10 @@ class MacroAssembler: public Assembler {
  // Check if object is in new space. The condition cc can be equal or
  // not_equal. If it is equal a jump will be done if the object is on new
  // space. The register scratch can be object itself, but it will be clobbered.
  template <typename LabelType>
  void InNewSpace(Register object,
                  Register scratch,
                  Condition cc,
                  LabelType* branch);
                  Label* branch);

  // For page containing |object| mark region covering [object+offset]
  // dirty. |object| is the object being stored into, |value| is the
@@ -216,9 +215,14 @@ class MacroAssembler: public Assembler {

  // Tag an integer value. The result must be known to be a valid smi value.
  // Only uses the low 32 bits of the src register. Sets the N and Z flags
  // based on the value of the resulting smi.
  // based on the value of the resulting integer.
  void Integer32ToSmi(Register dst, Register src);

  // Tag an integer value if possible, or jump if the integer value cannot be
  // represented as a smi. Only uses the low 32 bits of the src register.
  // NOTICE: Destroys the dst register even if unsuccessful!
  void Integer32ToSmi(Register dst, Register src, Label* on_overflow);

  // Stores an integer32 value into a memory field that already holds a smi.
  void Integer32ToSmiField(const Operand& dst, Register src);

@@ -296,42 +300,30 @@ class MacroAssembler: public Assembler {
  // above with a conditional jump.

  // Jump if the value cannot be represented by a smi.
  template <typename LabelType>
  void JumpIfNotValidSmiValue(Register src, LabelType* on_invalid);
  void JumpIfNotValidSmiValue(Register src, Label* on_invalid);

  // Jump if the unsigned integer value cannot be represented by a smi.
  template <typename LabelType>
  void JumpIfUIntNotValidSmiValue(Register src, LabelType* on_invalid);
  void JumpIfUIntNotValidSmiValue(Register src, Label* on_invalid);

  // Jump to label if the value is a tagged smi.
  template <typename LabelType>
  void JumpIfSmi(Register src, LabelType* on_smi);
  void JumpIfSmi(Register src, Label* on_smi);

  // Jump to label if the value is not a tagged smi.
  template <typename LabelType>
  void JumpIfNotSmi(Register src, LabelType* on_not_smi);
  void JumpIfNotSmi(Register src, Label* on_not_smi);

  // Jump to label if the value is not a positive tagged smi.
  template <typename LabelType>
  void JumpIfNotPositiveSmi(Register src, LabelType* on_not_smi);
  void JumpIfNotPositiveSmi(Register src, Label* on_not_smi);

  // Jump to label if the value, which must be a tagged smi, has value equal
  // to the constant.
  template <typename LabelType>
  void JumpIfSmiEqualsConstant(Register src,
                               Smi* constant,
                               LabelType* on_equals);
  void JumpIfSmiEqualsConstant(Register src, Smi* constant, Label* on_equals);

  // Jump if either or both registers are not smi values.
  template <typename LabelType>
  void JumpIfNotBothSmi(Register src1,
                        Register src2,
                        LabelType* on_not_both_smi);
  void JumpIfNotBothSmi(Register src1, Register src2, Label* on_not_both_smi);

  // Jump if either or both registers are not positive smi values.
  template <typename LabelType>
  void JumpIfNotBothPositiveSmi(Register src1, Register src2,
                                LabelType* on_not_both_smi);
                                Label* on_not_both_smi);

  // Operations on tagged smi values.

@@ -341,11 +333,10 @@ class MacroAssembler: public Assembler {
  // Optimistically adds an integer constant to a supposed smi.
  // If the src is not a smi, or the result is not a smi, jump to
  // the label.
  template <typename LabelType>
  void SmiTryAddConstant(Register dst,
                         Register src,
                         Smi* constant,
                         LabelType* on_not_smi_result);
                         Label* on_not_smi_result);

  // Add an integer constant to a tagged smi, giving a tagged smi as result.
  // No overflow testing on the result is done.
@@ -357,11 +348,10 @@ class MacroAssembler: public Assembler {

  // Add an integer constant to a tagged smi, giving a tagged smi as result,
  // or jumping to a label if the result cannot be represented by a smi.
  template <typename LabelType>
  void SmiAddConstant(Register dst,
                      Register src,
                      Smi* constant,
                      LabelType* on_not_smi_result);
                      Label* on_not_smi_result);

  // Subtract an integer constant from a tagged smi, giving a tagged smi as
  // result. No testing on the result is done. Sets the N and Z flags
@@ -370,80 +360,60 @@ class MacroAssembler: public Assembler {

  // Subtract an integer constant from a tagged smi, giving a tagged smi as
  // result, or jumping to a label if the result cannot be represented by a smi.
  template <typename LabelType>
  void SmiSubConstant(Register dst,
                      Register src,
                      Smi* constant,
                      LabelType* on_not_smi_result);
                      Label* on_not_smi_result);

  // Negating a smi can give a negative zero or too large positive value.
  // NOTICE: This operation jumps on success, not failure!
  template <typename LabelType>
  void SmiNeg(Register dst,
              Register src,
              LabelType* on_smi_result);
              Label* on_smi_result);

  // Adds smi values and returns the result as a smi.
  // If dst is src1, then src1 will be destroyed, even if
  // the operation is unsuccessful.
  template <typename LabelType>
  void SmiAdd(Register dst,
              Register src1,
              Register src2,
              LabelType* on_not_smi_result);

  void SmiAdd(Register dst,
              Register src1,
              Register src2);
              Label* on_not_smi_result);

  // Subtracts smi values and returns the result as a smi.
  // If dst is src1, then src1 will be destroyed, even if
  // the operation is unsuccessful.
  template <typename LabelType>
  void SmiSub(Register dst,
              Register src1,
              Register src2,
              LabelType* on_not_smi_result);
              Label* on_not_smi_result);

  void SmiSub(Register dst,
              Register src1,
              Register src2);

  template <typename LabelType>
  void SmiSub(Register dst,
              Register src1,
              const Operand& src2,
              LabelType* on_not_smi_result);

  void SmiSub(Register dst,
              Register src1,
              const Operand& src2);
              Label* on_not_smi_result);

  // Multiplies smi values and returns the result as a smi,
  // if possible.
  // If dst is src1, then src1 will be destroyed, even if
  // the operation is unsuccessful.
  template <typename LabelType>
  void SmiMul(Register dst,
              Register src1,
              Register src2,
              LabelType* on_not_smi_result);
              Label* on_not_smi_result);

  // Divides one smi by another and returns the quotient.
  // Clobbers rax and rdx registers.
  template <typename LabelType>
  void SmiDiv(Register dst,
              Register src1,
              Register src2,
              LabelType* on_not_smi_result);
              Label* on_not_smi_result);

  // Divides one smi by another and returns the remainder.
  // Clobbers rax and rdx registers.
  template <typename LabelType>
  void SmiMod(Register dst,
              Register src1,
              Register src2,
              LabelType* on_not_smi_result);
              Label* on_not_smi_result);

  // Bitwise operations.
  void SmiNot(Register dst, Register src);
@@ -457,11 +427,10 @@ class MacroAssembler: public Assembler {
  void SmiShiftLeftConstant(Register dst,
                            Register src,
                            int shift_value);
  template <typename LabelType>
  void SmiShiftLogicalRightConstant(Register dst,
                                    Register src,
                                    int shift_value,
                                    LabelType* on_not_smi_result);
                                    Label* on_not_smi_result);
  void SmiShiftArithmeticRightConstant(Register dst,
                                       Register src,
                                       int shift_value);
@@ -474,11 +443,10 @@ class MacroAssembler: public Assembler {
  // Shifts a smi value to the right, shifting in zero bits at the top, and
  // returns the unsigned interpretation of the result if that is a smi.
  // Uses and clobbers rcx, so dst may not be rcx.
  template <typename LabelType>
  void SmiShiftLogicalRight(Register dst,
                            Register src1,
                            Register src2,
                            LabelType* on_not_smi_result);
                            Label* on_not_smi_result);
  // Shifts a smi value to the right, sign extending the top, and
  // returns the signed interpretation of the result. That will always
  // be a valid smi value, since it's numerically smaller than the
@@ -492,11 +460,10 @@ class MacroAssembler: public Assembler {

  // Select the non-smi register of two registers where exactly one is a
  // smi. If neither are smis, jump to the failure label.
  template <typename LabelType>
  void SelectNonSmi(Register dst,
                    Register src1,
                    Register src2,
                    LabelType* on_not_smis);
                    Label* on_not_smis);

  // Converts, if necessary, a smi to a combination of number and
  // multiplier to be used as a scaled index.
@@ -526,29 +493,25 @@ class MacroAssembler: public Assembler {

  // ---------------------------------------------------------------------------
  // String macros.
  template <typename LabelType>
  void JumpIfNotBothSequentialAsciiStrings(Register first_object,
                                           Register second_object,
                                           Register scratch1,
                                           Register scratch2,
                                           LabelType* on_not_both_flat_ascii);
                                           Label* on_not_both_flat_ascii);

  // Check whether the instance type represents a flat ascii string. Jump to the
  // label if not. If the instance type can be scratched specify same register
  // for both instance type and scratch.
  template <typename LabelType>
  void JumpIfInstanceTypeIsNotSequentialAscii(
      Register instance_type,
  void JumpIfInstanceTypeIsNotSequentialAscii(Register instance_type,
                                              Register scratch,
      LabelType *on_not_flat_ascii_string);
                                              Label *on_not_flat_ascii_string);

  template <typename LabelType>
  void JumpIfBothInstanceTypesAreNotSequentialAscii(
      Register first_object_instance_type,
      Register second_object_instance_type,
      Register scratch1,
      Register scratch2,
      LabelType* on_fail);
      Label* on_fail);

  // ---------------------------------------------------------------------------
  // Macro instructions.
@@ -902,12 +865,11 @@ class MacroAssembler: public Assembler {
  Handle<Object> code_object_;

  // Helper functions for generating invokes.
  template <typename LabelType>
  void InvokePrologue(const ParameterCount& expected,
                      const ParameterCount& actual,
                      Handle<Code> code_constant,
                      Register code_register,
                      LabelType* done,
                      Label* done,
                      InvokeFlag flag);

  // Activation support.
@@ -999,697 +961,6 @@ extern void LogGeneratedCodeCoverage(const char* file_line);
#define ACCESS_MASM(masm) masm->
#endif

// -----------------------------------------------------------------------------
// Template implementations.

static int kSmiShift = kSmiTagSize + kSmiShiftSize;


template <typename LabelType>
void MacroAssembler::SmiNeg(Register dst,
                            Register src,
                            LabelType* on_smi_result) {
  if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));
    movq(kScratchRegister, src);
    neg(dst);  // Low 32 bits are retained as zero by negation.
    // Test if result is zero or Smi::kMinValue.
|
||||
cmpq(dst, kScratchRegister);
|
||||
j(not_equal, on_smi_result);
|
||||
movq(src, kScratchRegister);
|
||||
} else {
|
||||
movq(dst, src);
|
||||
neg(dst);
|
||||
cmpq(dst, src);
|
||||
// If the result is zero or Smi::kMinValue, negation failed to create a smi.
|
||||
j(not_equal, on_smi_result);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
template <typename LabelType>
|
||||
void MacroAssembler::SmiAdd(Register dst,
|
||||
Register src1,
|
||||
Register src2,
|
||||
LabelType* on_not_smi_result) {
|
||||
ASSERT_NOT_NULL(on_not_smi_result);
|
||||
ASSERT(!dst.is(src2));
|
||||
if (dst.is(src1)) {
|
||||
movq(kScratchRegister, src1);
|
||||
addq(kScratchRegister, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
movq(dst, kScratchRegister);
|
||||
} else {
|
||||
movq(dst, src1);
|
||||
addq(dst, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
template <typename LabelType>
|
||||
void MacroAssembler::SmiSub(Register dst,
|
||||
Register src1,
|
||||
Register src2,
|
||||
LabelType* on_not_smi_result) {
|
||||
ASSERT_NOT_NULL(on_not_smi_result);
|
||||
ASSERT(!dst.is(src2));
|
||||
if (dst.is(src1)) {
|
||||
cmpq(dst, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
subq(dst, src2);
|
||||
} else {
|
||||
movq(dst, src1);
|
||||
subq(dst, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
template <typename LabelType>
|
||||
void MacroAssembler::SmiSub(Register dst,
|
||||
Register src1,
|
||||
const Operand& src2,
|
||||
LabelType* on_not_smi_result) {
|
||||
ASSERT_NOT_NULL(on_not_smi_result);
|
||||
if (dst.is(src1)) {
|
||||
movq(kScratchRegister, src2);
|
||||
cmpq(src1, kScratchRegister);
|
||||
j(overflow, on_not_smi_result);
|
||||
subq(src1, kScratchRegister);
|
||||
} else {
|
||||
movq(dst, src1);
|
||||
subq(dst, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
template <typename LabelType>
|
||||
void MacroAssembler::SmiMul(Register dst,
|
||||
Register src1,
|
||||
Register src2,
|
||||
LabelType* on_not_smi_result) {
|
||||
ASSERT(!dst.is(src2));
|
||||
ASSERT(!dst.is(kScratchRegister));
|
||||
ASSERT(!src1.is(kScratchRegister));
|
||||
ASSERT(!src2.is(kScratchRegister));
|
||||
|
||||
if (dst.is(src1)) {
|
||||
NearLabel failure, zero_correct_result;
|
||||
movq(kScratchRegister, src1); // Create backup for later testing.
|
||||
SmiToInteger64(dst, src1);
|
||||
imul(dst, src2);
|
||||
j(overflow, &failure);
|
||||
|
||||
// Check for negative zero result. If product is zero, and one
|
||||
// argument is negative, go to slow case.
|
||||
NearLabel correct_result;
|
||||
testq(dst, dst);
|
||||
j(not_zero, &correct_result);
|
||||
|
||||
movq(dst, kScratchRegister);
|
||||
xor_(dst, src2);
|
||||
j(positive, &zero_correct_result); // Result was positive zero.
|
||||
|
||||
bind(&failure); // Reused failure exit, restores src1.
|
||||
movq(src1, kScratchRegister);
|
||||
jmp(on_not_smi_result);
|
||||
|
||||
bind(&zero_correct_result);
|
||||
xor_(dst, dst);
|
||||
|
||||
bind(&correct_result);
|
||||
} else {
|
||||
SmiToInteger64(dst, src1);
|
||||
imul(dst, src2);
|
||||
j(overflow, on_not_smi_result);
|
||||
// Check for negative zero result. If product is zero, and one
|
||||
// argument is negative, go to slow case.
|
||||
NearLabel correct_result;
|
||||
testq(dst, dst);
|
||||
j(not_zero, &correct_result);
|
||||
// One of src1 and src2 is zero, the check whether the other is
|
||||
// negative.
|
||||
movq(kScratchRegister, src1);
|
||||
xor_(kScratchRegister, src2);
|
||||
j(negative, on_not_smi_result);
|
||||
bind(&correct_result);
|
||||
}
|
||||
}
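The negative-zero bailout in SmiMul above can be sketched as a plain C++ predicate (an illustrative model, not V8 code): a tagged-integer multiply must fall back to the slow, heap-number path whenever the product is zero but exactly one operand is negative, because only a double can represent JavaScript's -0.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative model of the SmiMul slow-path condition (assumed semantics,
// not the V8 implementation): integer multiplication cannot represent -0,
// so a zero product with operands of differing sign must go slow.
bool MulNeedsSlowPath(int64_t a, int64_t b) {
  if (a * b != 0) return false;  // non-zero products stay on the fast path
  return (a ^ b) < 0;            // xor negative <=> exactly one operand < 0
}
```

The `(a ^ b) < 0` test mirrors the `xor_(kScratchRegister, src2); j(negative, on_not_smi_result);` sequence in the generated code.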


template <typename LabelType>
void MacroAssembler::SmiTryAddConstant(Register dst,
                                       Register src,
                                       Smi* constant,
                                       LabelType* on_not_smi_result) {
  // Does not assume that src is a smi.
  ASSERT_EQ(static_cast<int>(1), static_cast<int>(kSmiTagMask));
  ASSERT_EQ(0, kSmiTag);
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src.is(kScratchRegister));

  JumpIfNotSmi(src, on_not_smi_result);
  Register tmp = (dst.is(src) ? kScratchRegister : dst);
  LoadSmiConstant(tmp, constant);
  addq(tmp, src);
  j(overflow, on_not_smi_result);
  if (dst.is(src)) {
    movq(dst, tmp);
  }
}


template <typename LabelType>
void MacroAssembler::SmiAddConstant(Register dst,
                                    Register src,
                                    Smi* constant,
                                    LabelType* on_not_smi_result) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
      movq(dst, src);
    }
  } else if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));

    LoadSmiConstant(kScratchRegister, constant);
    addq(kScratchRegister, src);
    j(overflow, on_not_smi_result);
    movq(dst, kScratchRegister);
  } else {
    LoadSmiConstant(dst, constant);
    addq(dst, src);
    j(overflow, on_not_smi_result);
  }
}


template <typename LabelType>
void MacroAssembler::SmiSubConstant(Register dst,
                                    Register src,
                                    Smi* constant,
                                    LabelType* on_not_smi_result) {
  if (constant->value() == 0) {
    if (!dst.is(src)) {
      movq(dst, src);
    }
  } else if (dst.is(src)) {
    ASSERT(!dst.is(kScratchRegister));
    if (constant->value() == Smi::kMinValue) {
      // Subtracting min-value from any non-negative value will overflow.
      // We test the non-negativeness before doing the subtraction.
      testq(src, src);
      j(not_sign, on_not_smi_result);
      LoadSmiConstant(kScratchRegister, constant);
      subq(dst, kScratchRegister);
    } else {
      // Subtract by adding the negation.
      LoadSmiConstant(kScratchRegister, Smi::FromInt(-constant->value()));
      addq(kScratchRegister, dst);
      j(overflow, on_not_smi_result);
      movq(dst, kScratchRegister);
    }
  } else {
    if (constant->value() == Smi::kMinValue) {
      // Subtracting min-value from any non-negative value will overflow.
      // We test the non-negativeness before doing the subtraction.
      testq(src, src);
      j(not_sign, on_not_smi_result);
      LoadSmiConstant(dst, constant);
      // Adding and subtracting the min-value gives the same result, it only
      // differs on the overflow bit, which we don't check here.
      addq(dst, src);
    } else {
      // Subtract by adding the negation.
      LoadSmiConstant(dst, Smi::FromInt(-(constant->value())));
      addq(dst, src);
      j(overflow, on_not_smi_result);
    }
  }
}


template <typename LabelType>
void MacroAssembler::SmiDiv(Register dst,
                            Register src1,
                            Register src2,
                            LabelType* on_not_smi_result) {
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src2.is(rax));
  ASSERT(!src2.is(rdx));
  ASSERT(!src1.is(rdx));

  // Check for 0 divisor (result is +/-Infinity).
  NearLabel positive_divisor;
  testq(src2, src2);
  j(zero, on_not_smi_result);

  if (src1.is(rax)) {
    movq(kScratchRegister, src1);
  }
  SmiToInteger32(rax, src1);
  // We need to rule out dividing Smi::kMinValue by -1, since that would
  // overflow in idiv and raise an exception.
  // We combine this with negative zero test (negative zero only happens
  // when dividing zero by a negative number).

  // We overshoot a little and go to slow case if we divide min-value
  // by any negative value, not just -1.
  NearLabel safe_div;
  testl(rax, Immediate(0x7fffffff));
  j(not_zero, &safe_div);
  testq(src2, src2);
  if (src1.is(rax)) {
    j(positive, &safe_div);
    movq(src1, kScratchRegister);
    jmp(on_not_smi_result);
  } else {
    j(negative, on_not_smi_result);
  }
  bind(&safe_div);

  SmiToInteger32(src2, src2);
  // Sign extend src1 into edx:eax.
  cdq();
  idivl(src2);
  Integer32ToSmi(src2, src2);
  // Check that the remainder is zero.
  testl(rdx, rdx);
  if (src1.is(rax)) {
    NearLabel smi_result;
    j(zero, &smi_result);
    movq(src1, kScratchRegister);
    jmp(on_not_smi_result);
    bind(&smi_result);
  } else {
    j(not_zero, on_not_smi_result);
  }
  if (!dst.is(src1) && src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  Integer32ToSmi(dst, rax);
}


template <typename LabelType>
void MacroAssembler::SmiMod(Register dst,
                            Register src1,
                            Register src2,
                            LabelType* on_not_smi_result) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!src2.is(rax));
  ASSERT(!src2.is(rdx));
  ASSERT(!src1.is(rdx));
  ASSERT(!src1.is(src2));

  testq(src2, src2);
  j(zero, on_not_smi_result);

  if (src1.is(rax)) {
    movq(kScratchRegister, src1);
  }
  SmiToInteger32(rax, src1);
  SmiToInteger32(src2, src2);

  // Test for the edge case of dividing Smi::kMinValue by -1 (will overflow).
  NearLabel safe_div;
  cmpl(rax, Immediate(Smi::kMinValue));
  j(not_equal, &safe_div);
  cmpl(src2, Immediate(-1));
  j(not_equal, &safe_div);
  // Retag inputs and go slow case.
  Integer32ToSmi(src2, src2);
  if (src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  jmp(on_not_smi_result);
  bind(&safe_div);

  // Sign extend eax into edx:eax.
  cdq();
  idivl(src2);
  // Restore smi tags on inputs.
  Integer32ToSmi(src2, src2);
  if (src1.is(rax)) {
    movq(src1, kScratchRegister);
  }
  // Check for a negative zero result. If the result is zero, and the
  // dividend is negative, go slow to return a floating point negative zero.
  NearLabel smi_result;
  testl(rdx, rdx);
  j(not_zero, &smi_result);
  testq(src1, src1);
  j(negative, on_not_smi_result);
  bind(&smi_result);
  Integer32ToSmi(dst, rdx);
}


template <typename LabelType>
void MacroAssembler::SmiShiftLogicalRightConstant(
    Register dst, Register src, int shift_value, LabelType* on_not_smi_result) {
  // Logic right shift interprets its result as an *unsigned* number.
  if (dst.is(src)) {
    UNIMPLEMENTED();  // Not used.
  } else {
    movq(dst, src);
    if (shift_value == 0) {
      testq(dst, dst);
      j(negative, on_not_smi_result);
    }
    shr(dst, Immediate(shift_value + kSmiShift));
    shl(dst, Immediate(kSmiShift));
  }
}
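The shr/shl pair above can be modeled in a few lines of C++ (an illustrative sketch under the assumption that kSmiShift is 32, i.e. the 32-bit payload lives in the upper half of the word on x64): a logical right shift by n untags, shifts, and retags with a single shift by (n + 32) followed by a shift left by 32.

```cpp
#include <cassert>
#include <cstdint>

// Assumed x64 smi layout: 32-bit payload in the upper word half.
const int kSmiShiftModel = 32;

uint64_t SmiTag(uint32_t value) {
  return static_cast<uint64_t>(value) << kSmiShiftModel;
}

// Model of SmiShiftLogicalRightConstant: shr by (n + 32), then shl by 32.
uint64_t SmiShiftLogicalRightConstantModel(uint64_t smi, int shift_value) {
  return (smi >> (shift_value + kSmiShiftModel)) << kSmiShiftModel;
}
```

The model uses an unsigned payload, so it omits the shift_value == 0 negativity bailout the generated code performs for signed smis.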


template <typename LabelType>
void MacroAssembler::SmiShiftLogicalRight(Register dst,
                                          Register src1,
                                          Register src2,
                                          LabelType* on_not_smi_result) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(rcx));
  NearLabel result_ok;
  if (src1.is(rcx) || src2.is(rcx)) {
    movq(kScratchRegister, rcx);
  }
  if (!dst.is(src1)) {
    movq(dst, src1);
  }
  SmiToInteger32(rcx, src2);
  orl(rcx, Immediate(kSmiShift));
  shr_cl(dst);  // Shift is rcx modulo 0x1f + 32.
  shl(dst, Immediate(kSmiShift));
  testq(dst, dst);
  if (src1.is(rcx) || src2.is(rcx)) {
    NearLabel positive_result;
    j(positive, &positive_result);
    if (src1.is(rcx)) {
      movq(src1, kScratchRegister);
    } else {
      movq(src2, kScratchRegister);
    }
    jmp(on_not_smi_result);
    bind(&positive_result);
  } else {
    j(negative, on_not_smi_result);  // src2 was zero and src1 negative.
  }
}


template <typename LabelType>
void MacroAssembler::SelectNonSmi(Register dst,
                                  Register src1,
                                  Register src2,
                                  LabelType* on_not_smis) {
  ASSERT(!dst.is(kScratchRegister));
  ASSERT(!src1.is(kScratchRegister));
  ASSERT(!src2.is(kScratchRegister));
  ASSERT(!dst.is(src1));
  ASSERT(!dst.is(src2));
  // Both operands must not be smis.
#ifdef DEBUG
  if (allow_stub_calls()) {  // Check contains a stub call.
    Condition not_both_smis = NegateCondition(CheckBothSmi(src1, src2));
    Check(not_both_smis, "Both registers were smis in SelectNonSmi.");
  }
#endif
  ASSERT_EQ(0, kSmiTag);
  ASSERT_EQ(0, Smi::FromInt(0));
  movl(kScratchRegister, Immediate(kSmiTagMask));
  and_(kScratchRegister, src1);
  testl(kScratchRegister, src2);
  // If non-zero then both are smis.
  j(not_zero, on_not_smis);

  // Exactly one operand is a smi.
  ASSERT_EQ(1, static_cast<int>(kSmiTagMask));
  // kScratchRegister still holds src1 & kSmiTag, which is either zero or one.
  subq(kScratchRegister, Immediate(1));
  // If src1 is a smi, then scratch register all 1s, else it is all 0s.
  movq(dst, src1);
  xor_(dst, src2);
  and_(dst, kScratchRegister);
  // If src1 is a smi, dst holds src1 ^ src2, else it is zero.
  xor_(dst, src1);
  // If src1 is a smi, dst is src2, else it is src1, i.e., the non-smi.
}
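The branch-free selection at the end of SelectNonSmi can be sketched in C++ (a model under assumed tagging: smi tag 0 in the low bit, heap objects tagged 1, and the precondition that exactly one operand is a smi): subtracting 1 from the masked tag turns it into an all-ones or all-zeros mask, and the xor/and/xor chain then picks the non-smi operand without a branch.

```cpp
#include <cassert>
#include <cstdint>

// Assumed 1-bit smi tag: tag value 0 means "smi", as on V8 x64.
const uint64_t kSmiTagMaskModel = 1;

// Model of SelectNonSmi; precondition: exactly one of src1/src2 is a smi.
uint64_t SelectNonSmiModel(uint64_t src1, uint64_t src2) {
  // 0 - 1 wraps to all ones when src1 is a smi, stays all zeros otherwise.
  uint64_t scratch = (src1 & kSmiTagMaskModel) - 1;
  // If src1 is a smi: (src1 ^ src2) ^ src1 == src2; else the mask zeroes
  // the xor term and the result is src1 -- i.e. always the non-smi.
  return ((src1 ^ src2) & scratch) ^ src1;
}
```

This mirrors the `subq(kScratchRegister, Immediate(1))` / `xor_` / `and_` / `xor_` sequence in the generated code.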


template <typename LabelType>
void MacroAssembler::JumpIfSmi(Register src, LabelType* on_smi) {
  ASSERT_EQ(0, kSmiTag);
  Condition smi = CheckSmi(src);
  j(smi, on_smi);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotSmi(Register src, LabelType* on_not_smi) {
  Condition smi = CheckSmi(src);
  j(NegateCondition(smi), on_not_smi);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotPositiveSmi(Register src,
                                          LabelType* on_not_positive_smi) {
  Condition positive_smi = CheckPositiveSmi(src);
  j(NegateCondition(positive_smi), on_not_positive_smi);
}


template <typename LabelType>
void MacroAssembler::JumpIfSmiEqualsConstant(Register src,
                                             Smi* constant,
                                             LabelType* on_equals) {
  SmiCompare(src, constant);
  j(equal, on_equals);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotValidSmiValue(Register src,
                                            LabelType* on_invalid) {
  Condition is_valid = CheckInteger32ValidSmiValue(src);
  j(NegateCondition(is_valid), on_invalid);
}


template <typename LabelType>
void MacroAssembler::JumpIfUIntNotValidSmiValue(Register src,
                                                LabelType* on_invalid) {
  Condition is_valid = CheckUInteger32ValidSmiValue(src);
  j(NegateCondition(is_valid), on_invalid);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotBothSmi(Register src1,
                                      Register src2,
                                      LabelType* on_not_both_smi) {
  Condition both_smi = CheckBothSmi(src1, src2);
  j(NegateCondition(both_smi), on_not_both_smi);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotBothPositiveSmi(Register src1,
                                              Register src2,
                                              LabelType* on_not_both_smi) {
  Condition both_smi = CheckBothPositiveSmi(src1, src2);
  j(NegateCondition(both_smi), on_not_both_smi);
}


template <typename LabelType>
void MacroAssembler::JumpIfNotBothSequentialAsciiStrings(Register first_object,
                                                         Register second_object,
                                                         Register scratch1,
                                                         Register scratch2,
                                                         LabelType* on_fail) {
  // Check that both objects are not smis.
  Condition either_smi = CheckEitherSmi(first_object, second_object);
  j(either_smi, on_fail);

  // Load instance type for both strings.
  movq(scratch1, FieldOperand(first_object, HeapObject::kMapOffset));
  movq(scratch2, FieldOperand(second_object, HeapObject::kMapOffset));
  movzxbl(scratch1, FieldOperand(scratch1, Map::kInstanceTypeOffset));
  movzxbl(scratch2, FieldOperand(scratch2, Map::kInstanceTypeOffset));

  // Check that both are flat ascii strings.
  ASSERT(kNotStringTag != 0);
  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
  const int kFlatAsciiStringTag = ASCII_STRING_TYPE;

  andl(scratch1, Immediate(kFlatAsciiStringMask));
  andl(scratch2, Immediate(kFlatAsciiStringMask));
  // Interleave the bits to check both scratch1 and scratch2 in one test.
  ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3));
  lea(scratch1, Operand(scratch1, scratch2, times_8, 0));
  cmpl(scratch1,
       Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3)));
  j(not_equal, on_fail);
}
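The "interleave the bits" trick above can be sketched in C++ (an illustrative model; the mask and tag values below are assumptions chosen to satisfy the same `mask & (mask << 3) == 0` invariant the ASSERT enforces, not the real V8 constants): after masking, the two instance types are packed into one word with the `lea ... times_8` addressing mode and compared against the tag replicated at both bit positions, so one cmpl tests both strings.

```cpp
#include <cassert>

// Assumed values: a 3-bit mask and a tag inside it, so that
// kMask & (kMask << 3) == 0 holds and the two fields cannot overlap.
const int kFlatAsciiMaskModel = 0x7;
const int kFlatAsciiTagModel = 0x4;

// Model of the combined flat-ascii check on two instance types.
bool BothFlatAscii(int type1, int type2) {
  int m1 = type1 & kFlatAsciiMaskModel;
  int m2 = type2 & kFlatAsciiMaskModel;
  // Equivalent of lea(scratch1, Operand(scratch1, scratch2, times_8, 0)).
  int packed = m1 + m2 * 8;
  return packed == kFlatAsciiTagModel + (kFlatAsciiTagModel << 3);
}
```

Because the shifted copies of the mask do not overlap, a mismatch in either field makes the packed value differ from the expected constant.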


template <typename LabelType>
void MacroAssembler::JumpIfInstanceTypeIsNotSequentialAscii(
    Register instance_type,
    Register scratch,
    LabelType *failure) {
  if (!scratch.is(instance_type)) {
    movl(scratch, instance_type);
  }

  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;

  andl(scratch, Immediate(kFlatAsciiStringMask));
  cmpl(scratch, Immediate(kStringTag | kSeqStringTag | kAsciiStringTag));
  j(not_equal, failure);
}


template <typename LabelType>
void MacroAssembler::JumpIfBothInstanceTypesAreNotSequentialAscii(
    Register first_object_instance_type,
    Register second_object_instance_type,
    Register scratch1,
    Register scratch2,
    LabelType* on_fail) {
  // Load instance type for both strings.
  movq(scratch1, first_object_instance_type);
  movq(scratch2, second_object_instance_type);

  // Check that both are flat ascii strings.
  ASSERT(kNotStringTag != 0);
  const int kFlatAsciiStringMask =
      kIsNotStringMask | kStringRepresentationMask | kStringEncodingMask;
  const int kFlatAsciiStringTag = ASCII_STRING_TYPE;

  andl(scratch1, Immediate(kFlatAsciiStringMask));
  andl(scratch2, Immediate(kFlatAsciiStringMask));
  // Interleave the bits to check both scratch1 and scratch2 in one test.
  ASSERT_EQ(0, kFlatAsciiStringMask & (kFlatAsciiStringMask << 3));
  lea(scratch1, Operand(scratch1, scratch2, times_8, 0));
  cmpl(scratch1,
       Immediate(kFlatAsciiStringTag + (kFlatAsciiStringTag << 3)));
  j(not_equal, on_fail);
}


template <typename LabelType>
void MacroAssembler::InNewSpace(Register object,
                                Register scratch,
                                Condition cc,
                                LabelType* branch) {
  if (Serializer::enabled()) {
    // Can't do arithmetic on external references if it might get serialized.
    // The mask isn't really an address. We load it as an external reference in
    // case the size of the new space is different between the snapshot maker
    // and the running system.
    if (scratch.is(object)) {
      movq(kScratchRegister, ExternalReference::new_space_mask());
      and_(scratch, kScratchRegister);
    } else {
      movq(scratch, ExternalReference::new_space_mask());
      and_(scratch, object);
    }
    movq(kScratchRegister, ExternalReference::new_space_start());
    cmpq(scratch, kScratchRegister);
    j(cc, branch);
  } else {
    ASSERT(is_int32(static_cast<int64_t>(Heap::NewSpaceMask())));
    intptr_t new_space_start =
        reinterpret_cast<intptr_t>(Heap::NewSpaceStart());
    movq(kScratchRegister, -new_space_start, RelocInfo::NONE);
    if (scratch.is(object)) {
      addq(scratch, kScratchRegister);
    } else {
      lea(scratch, Operand(object, kScratchRegister, times_1, 0));
    }
    and_(scratch, Immediate(static_cast<int32_t>(Heap::NewSpaceMask())));
    j(cc, branch);
  }
}


template <typename LabelType>
void MacroAssembler::InvokePrologue(const ParameterCount& expected,
                                    const ParameterCount& actual,
                                    Handle<Code> code_constant,
                                    Register code_register,
                                    LabelType* done,
                                    InvokeFlag flag) {
  bool definitely_matches = false;
  NearLabel invoke;
  if (expected.is_immediate()) {
    ASSERT(actual.is_immediate());
    if (expected.immediate() == actual.immediate()) {
      definitely_matches = true;
    } else {
      Set(rax, actual.immediate());
      if (expected.immediate() ==
          SharedFunctionInfo::kDontAdaptArgumentsSentinel) {
        // Don't worry about adapting arguments for built-ins that
        // don't want that done. Skip adaption code by making it look
        // like we have a match between expected and actual number of
        // arguments.
        definitely_matches = true;
      } else {
        Set(rbx, expected.immediate());
      }
    }
  } else {
    if (actual.is_immediate()) {
      // Expected is in register, actual is immediate. This is the
      // case when we invoke function values without going through the
      // IC mechanism.
      cmpq(expected.reg(), Immediate(actual.immediate()));
      j(equal, &invoke);
      ASSERT(expected.reg().is(rbx));
      Set(rax, actual.immediate());
    } else if (!expected.reg().is(actual.reg())) {
      // Both expected and actual are in (different) registers. This
      // is the case when we invoke functions using call and apply.
      cmpq(expected.reg(), actual.reg());
      j(equal, &invoke);
      ASSERT(actual.reg().is(rax));
      ASSERT(expected.reg().is(rbx));
    }
  }

  if (!definitely_matches) {
    Handle<Code> adaptor =
        Handle<Code>(Builtins::builtin(Builtins::ArgumentsAdaptorTrampoline));
    if (!code_constant.is_null()) {
      movq(rdx, code_constant, RelocInfo::EMBEDDED_OBJECT);
      addq(rdx, Immediate(Code::kHeaderSize - kHeapObjectTag));
    } else if (!code_register.is(rdx)) {
      movq(rdx, code_register);
    }

    if (flag == CALL_FUNCTION) {
      Call(adaptor, RelocInfo::CODE_TARGET);
      jmp(done);
    } else {
      Jump(adaptor, RelocInfo::CODE_TARGET);
    }
    bind(&invoke);
  }
}
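The decision InvokePrologue makes before emitting any adaptor code can be summed up as a small predicate (an illustrative model; the sentinel value below is an assumption, not the real SharedFunctionInfo constant): the arguments-adaptor trampoline is skipped either when the expected and actual argument counts agree, or when the callee opted out of adaptation with the don't-adapt sentinel.

```cpp
#include <cassert>

// Assumed stand-in for SharedFunctionInfo::kDontAdaptArgumentsSentinel.
const int kDontAdaptSentinelModel = -1;

// Model of the "definitely_matches" computation in InvokePrologue for the
// immediate/immediate case.
bool DefinitelyMatches(int expected, int actual) {
  return expected == actual || expected == kDontAdaptSentinelModel;
}
```

When this predicate is false, the generated code falls through to the ArgumentsAdaptorTrampoline call.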


} }  // namespace v8::internal
33 deps/v8/src/x64/stub-cache-x64.cc (vendored)
@@ -216,12 +216,7 @@ void StubCompiler::GenerateLoadGlobalFunctionPrototype(MacroAssembler* masm,


void StubCompiler::GenerateDirectLoadGlobalFunctionPrototype(
    MacroAssembler* masm, int index, Register prototype, Label* miss) {
  // Check we're still in the same context.
  __ Move(prototype, Top::global());
  __ cmpq(Operand(rsi, Context::SlotOffset(Context::GLOBAL_INDEX)),
          prototype);
  __ j(not_equal, miss);
    MacroAssembler* masm, int index, Register prototype) {
  // Get the global function with the given index.
  JSFunction* function = JSFunction::cast(Top::global_context()->get(index));
  // Load its initial map. The global functions all have initial maps.
@@ -969,7 +964,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ j(above_equal, &miss);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::STRING_FUNCTION_INDEX, rax, &miss);
          masm(), Context::STRING_FUNCTION_INDEX, rax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), rax, holder,
                      rbx, rdx, rdi, name, &miss);
    }
@@ -988,7 +983,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ bind(&fast);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::NUMBER_FUNCTION_INDEX, rax, &miss);
          masm(), Context::NUMBER_FUNCTION_INDEX, rax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), rax, holder,
                      rbx, rdx, rdi, name, &miss);
    }
@@ -1009,7 +1004,7 @@ Object* CallStubCompiler::CompileCallConstant(Object* object,
      __ bind(&fast);
      // Check that the maps starting from the prototype haven't changed.
      GenerateDirectLoadGlobalFunctionPrototype(
          masm(), Context::BOOLEAN_FUNCTION_INDEX, rax, &miss);
          masm(), Context::BOOLEAN_FUNCTION_INDEX, rax);
      CheckPrototypes(JSObject::cast(object->GetPrototype()), rax, holder,
                      rbx, rdx, rdi, name, &miss);
    }
@@ -1363,8 +1358,7 @@ Object* CallStubCompiler::CompileStringCharAtCall(Object* object,
  // Check that the maps starting from the prototype haven't changed.
  GenerateDirectLoadGlobalFunctionPrototype(masm(),
                                            Context::STRING_FUNCTION_INDEX,
                                            rax,
                                            &miss);
                                            rax);
  ASSERT(object != holder);
  CheckPrototypes(JSObject::cast(object->GetPrototype()), rax, holder,
                  rbx, rdx, rdi, name, &miss);
@@ -1435,8 +1429,7 @@ Object* CallStubCompiler::CompileStringCharCodeAtCall(
  // Check that the maps starting from the prototype haven't changed.
  GenerateDirectLoadGlobalFunctionPrototype(masm(),
                                            Context::STRING_FUNCTION_INDEX,
                                            rax,
                                            &miss);
                                            rax);
  ASSERT(object != holder);
  CheckPrototypes(JSObject::cast(object->GetPrototype()), rax, holder,
                  rbx, rdx, rdi, name, &miss);
@@ -1548,16 +1541,6 @@ Object* CallStubCompiler::CompileStringFromCharCodeCall(
}


Object* CallStubCompiler::CompileMathFloorCall(Object* object,
                                               JSObject* holder,
                                               JSGlobalPropertyCell* cell,
                                               JSFunction* function,
                                               String* name) {
  // TODO(872): implement this.
  return Heap::undefined_value();
}


Object* CallStubCompiler::CompileCallInterceptor(JSObject* object,
                                                 JSObject* holder,
                                                 String* name) {
@@ -1862,12 +1845,12 @@ Object* LoadStubCompiler::CompileLoadGlobal(JSObject* object,
    __ Check(not_equal, "DontDelete cells can't contain the hole");
  }

  __ IncrementCounter(&Counters::named_load_global_stub, 1);
  __ IncrementCounter(&Counters::named_load_global_inline, 1);
  __ movq(rax, rbx);
  __ ret(0);

  __ bind(&miss);
  __ IncrementCounter(&Counters::named_load_global_stub_miss, 1);
  __ IncrementCounter(&Counters::named_load_global_inline_miss, 1);
  GenerateLoadMiss(masm(), Code::LOAD_IC);

  // Return the generated code.
1 deps/v8/test/cctest/SConscript (vendored)
@@ -35,7 +35,6 @@ Import('context object_files')
SOURCES = {
  'all': [
    'gay-fixed.cc',
    'gay-precision.cc',
    'gay-shortest.cc',
    'test-accessors.cc',
    'test-alloc.cc',
100050 deps/v8/test/cctest/gay-precision.cc (vendored)
File diff suppressed because it is too large
47 deps/v8/test/cctest/gay-precision.h (vendored)
@@ -1,47 +0,0 @@
// Copyright 2006-2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
//       with the distribution.
//     * Neither the name of Google Inc. nor the names of its
//       contributors may be used to endorse or promote products derived
//       from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#ifndef GAY_PRECISION_H_
#define GAY_PRECISION_H_

namespace v8 {
namespace internal {

struct PrecomputedPrecision {
  double v;
  int number_digits;
  const char* representation;
  int decimal_point;
};

// Returns precomputed values of dtoa. The strings have been generated using
// Gay's dtoa in mode "precision".
Vector<const PrecomputedPrecision> PrecomputedPrecisionRepresentations();

} }  // namespace v8::internal

#endif  // GAY_PRECISION_H_
69
deps/v8/test/cctest/test-api.cc
vendored
@@ -11308,72 +11308,3 @@ TEST(GCInFailedAccessCheckCallback) {
  // the other tests.
  v8::V8::SetFailedAccessCheckCallbackFunction(NULL);
}


TEST(StringCheckMultipleContexts) {
  const char* code =
      "(function() { return \"a\".charAt(0); })()";

  {
    // Run the code twice in the first context to initialize the call IC.
    v8::HandleScope scope;
    LocalContext context1;
    ExpectString(code, "a");
    ExpectString(code, "a");
  }

  {
    // Change the String.prototype in the second context and check
    // that the right function gets called.
    v8::HandleScope scope;
    LocalContext context2;
    CompileRun("String.prototype.charAt = function() { return \"not a\"; }");
    ExpectString(code, "not a");
  }
}


TEST(NumberCheckMultipleContexts) {
  const char* code =
      "(function() { return (42).toString(); })()";

  {
    // Run the code twice in the first context to initialize the call IC.
    v8::HandleScope scope;
    LocalContext context1;
    ExpectString(code, "42");
    ExpectString(code, "42");
  }

  {
    // Change the Number.prototype in the second context and check
    // that the right function gets called.
    v8::HandleScope scope;
    LocalContext context2;
    CompileRun("Number.prototype.toString = function() { return \"not 42\"; }");
    ExpectString(code, "not 42");
  }
}


TEST(BooleanCheckMultipleContexts) {
  const char* code =
      "(function() { return true.toString(); })()";

  {
    // Run the code twice in the first context to initialize the call IC.
    v8::HandleScope scope;
    LocalContext context1;
    ExpectString(code, "true");
    ExpectString(code, "true");
  }

  {
    // Change the Boolean.prototype in the second context and check
    // that the right function gets called.
    v8::HandleScope scope;
    LocalContext context2;
    CompileRun("Boolean.prototype.toString = function() { return \"\"; }");
    ExpectString(code, "");
  }
}
18
deps/v8/test/cctest/test-disasm-ia32.cc
vendored
@@ -412,24 +412,6 @@ TEST(DisasmIa320) {
    }
  }

  // andpd, cmpltsd, movaps, psllq.
  {
    if (CpuFeatures::IsSupported(SSE2)) {
      CpuFeatures::Scope fscope(SSE2);
      __ andpd(xmm0, xmm1);
      __ andpd(xmm1, xmm2);

      __ cmpltsd(xmm0, xmm1);
      __ cmpltsd(xmm1, xmm2);

      __ movaps(xmm0, xmm1);
      __ movaps(xmm1, xmm2);

      __ psllq(xmm0, 17);
      __ psllq(xmm1, 42);
    }
  }

  __ ret(0);

  CodeDesc desc;
183
deps/v8/test/cctest/test-fast-dtoa.cc
vendored
@@ -9,26 +9,13 @@
#include "diy-fp.h"
#include "double.h"
#include "fast-dtoa.h"
#include "gay-precision.h"
#include "gay-shortest.h"

using namespace v8::internal;

static const int kBufferSize = 100;


// Removes trailing '0' digits.
static void TrimRepresentation(Vector<char> representation) {
  int len = strlen(representation.start());
  int i;
  for (i = len - 1; i >= 0; --i) {
    if (representation[i] != '0') break;
  }
  representation[i + 1] = '\0';
}


TEST(FastDtoaShortestVariousDoubles) {
TEST(FastDtoaVariousDoubles) {
  char buffer_container[kBufferSize];
  Vector<char> buffer(buffer_container, kBufferSize);
  int length;
@@ -36,45 +23,38 @@ TEST(FastDtoaShortestVariousDoubles) {
  int status;

  double min_double = 5e-324;
  status = FastDtoa(min_double, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(min_double, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("5", buffer.start());
  CHECK_EQ(-323, point);

  double max_double = 1.7976931348623157e308;
  status = FastDtoa(max_double, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(max_double, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("17976931348623157", buffer.start());
  CHECK_EQ(309, point);

  status = FastDtoa(4294967272.0, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(4294967272.0, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("4294967272", buffer.start());
  CHECK_EQ(10, point);

  status = FastDtoa(4.1855804968213567e298, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(4.1855804968213567e298, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("4185580496821357", buffer.start());
  CHECK_EQ(299, point);

  status = FastDtoa(5.5626846462680035e-309, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(5.5626846462680035e-309, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("5562684646268003", buffer.start());
  CHECK_EQ(-308, point);

  status = FastDtoa(2147483648.0, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(2147483648.0, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("2147483648", buffer.start());
  CHECK_EQ(10, point);

  status = FastDtoa(3.5844466002796428e+298, FAST_DTOA_SHORTEST, 0,
                    buffer, &length, &point);
  status = FastDtoa(3.5844466002796428e+298, buffer, &length, &point);
  if (status) {  // Not all FastDtoa variants manage to compute this number.
    CHECK_EQ("35844466002796428", buffer.start());
    CHECK_EQ(299, point);
@@ -82,7 +62,7 @@ TEST(FastDtoaShortestVariousDoubles) {

  uint64_t smallest_normal64 = V8_2PART_UINT64_C(0x00100000, 00000000);
  double v = Double(smallest_normal64).value();
  status = FastDtoa(v, FAST_DTOA_SHORTEST, 0, buffer, &length, &point);
  status = FastDtoa(v, buffer, &length, &point);
  if (status) {
    CHECK_EQ("22250738585072014", buffer.start());
    CHECK_EQ(-307, point);
@@ -90,7 +70,7 @@ TEST(FastDtoaShortestVariousDoubles) {

  uint64_t largest_denormal64 = V8_2PART_UINT64_C(0x000FFFFF, FFFFFFFF);
  v = Double(largest_denormal64).value();
  status = FastDtoa(v, FAST_DTOA_SHORTEST, 0, buffer, &length, &point);
  status = FastDtoa(v, buffer, &length, &point);
  if (status) {
    CHECK_EQ("2225073858507201", buffer.start());
    CHECK_EQ(-307, point);
@@ -98,107 +78,6 @@ TEST(FastDtoaShortestVariousDoubles) {
}


TEST(FastDtoaPrecisionVariousDoubles) {
  char buffer_container[kBufferSize];
  Vector<char> buffer(buffer_container, kBufferSize);
  int length;
  int point;
  int status;

  status = FastDtoa(1.0, FAST_DTOA_PRECISION, 3, buffer, &length, &point);
  CHECK(status);
  CHECK_GE(3, length);
  TrimRepresentation(buffer);
  CHECK_EQ("1", buffer.start());
  CHECK_EQ(1, point);

  status = FastDtoa(1.5, FAST_DTOA_PRECISION, 10, buffer, &length, &point);
  if (status) {
    CHECK_GE(10, length);
    TrimRepresentation(buffer);
    CHECK_EQ("15", buffer.start());
    CHECK_EQ(1, point);
  }

  double min_double = 5e-324;
  status = FastDtoa(min_double, FAST_DTOA_PRECISION, 5,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("49407", buffer.start());
  CHECK_EQ(-323, point);

  double max_double = 1.7976931348623157e308;
  status = FastDtoa(max_double, FAST_DTOA_PRECISION, 7,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("1797693", buffer.start());
  CHECK_EQ(309, point);

  status = FastDtoa(4294967272.0, FAST_DTOA_PRECISION, 14,
                    buffer, &length, &point);
  if (status) {
    CHECK_GE(14, length);
    TrimRepresentation(buffer);
    CHECK_EQ("4294967272", buffer.start());
    CHECK_EQ(10, point);
  }

  status = FastDtoa(4.1855804968213567e298, FAST_DTOA_PRECISION, 17,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("41855804968213567", buffer.start());
  CHECK_EQ(299, point);

  status = FastDtoa(5.5626846462680035e-309, FAST_DTOA_PRECISION, 1,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("6", buffer.start());
  CHECK_EQ(-308, point);

  status = FastDtoa(2147483648.0, FAST_DTOA_PRECISION, 5,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("21475", buffer.start());
  CHECK_EQ(10, point);

  status = FastDtoa(3.5844466002796428e+298, FAST_DTOA_PRECISION, 10,
                    buffer, &length, &point);
  CHECK(status);
  CHECK_GE(10, length);
  TrimRepresentation(buffer);
  CHECK_EQ("35844466", buffer.start());
  CHECK_EQ(299, point);

  uint64_t smallest_normal64 = V8_2PART_UINT64_C(0x00100000, 00000000);
  double v = Double(smallest_normal64).value();
  status = FastDtoa(v, FAST_DTOA_PRECISION, 17, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("22250738585072014", buffer.start());
  CHECK_EQ(-307, point);

  uint64_t largest_denormal64 = V8_2PART_UINT64_C(0x000FFFFF, FFFFFFFF);
  v = Double(largest_denormal64).value();
  status = FastDtoa(v, FAST_DTOA_PRECISION, 17, buffer, &length, &point);
  CHECK(status);
  CHECK_GE(20, length);
  TrimRepresentation(buffer);
  CHECK_EQ("22250738585072009", buffer.start());
  CHECK_EQ(-307, point);

  v = 3.3161339052167390562200598e-237;
  status = FastDtoa(v, FAST_DTOA_PRECISION, 18, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("331613390521673906", buffer.start());
  CHECK_EQ(-236, point);

  v = 7.9885183916008099497815232e+191;
  status = FastDtoa(v, FAST_DTOA_PRECISION, 4, buffer, &length, &point);
  CHECK(status);
  CHECK_EQ("7989", buffer.start());
  CHECK_EQ(192, point);
}


TEST(FastDtoaGayShortest) {
  char buffer_container[kBufferSize];
  Vector<char> buffer(buffer_container, kBufferSize);
@@ -215,7 +94,7 @@ TEST(FastDtoaGayShortest) {
    const PrecomputedShortest current_test = precomputed[i];
    total++;
    double v = current_test.v;
    status = FastDtoa(v, FAST_DTOA_SHORTEST, 0, buffer, &length, &point);
    status = FastDtoa(v, buffer, &length, &point);
    CHECK_GE(kFastDtoaMaximalLength, length);
    if (!status) continue;
    if (length == kFastDtoaMaximalLength) needed_max_length = true;
@@ -226,43 +105,3 @@ TEST(FastDtoaGayShortest) {
  CHECK_GT(succeeded*1.0/total, 0.99);
  CHECK(needed_max_length);
}


TEST(FastDtoaGayPrecision) {
  char buffer_container[kBufferSize];
  Vector<char> buffer(buffer_container, kBufferSize);
  bool status;
  int length;
  int point;
  int succeeded = 0;
  int total = 0;
  // Count separately for entries with less than 15 requested digits.
  int succeeded_15 = 0;
  int total_15 = 0;

  Vector<const PrecomputedPrecision> precomputed =
      PrecomputedPrecisionRepresentations();
  for (int i = 0; i < precomputed.length(); ++i) {
    const PrecomputedPrecision current_test = precomputed[i];
    double v = current_test.v;
    int number_digits = current_test.number_digits;
    total++;
    if (number_digits <= 15) total_15++;
    status = FastDtoa(v, FAST_DTOA_PRECISION, number_digits,
                      buffer, &length, &point);
    CHECK_GE(number_digits, length);
    if (!status) continue;
    succeeded++;
    if (number_digits <= 15) succeeded_15++;
    TrimRepresentation(buffer);
    CHECK_EQ(current_test.decimal_point, point);
    CHECK_EQ(current_test.representation, buffer.start());
  }
  // The precomputed numbers contain many entries with many requested
  // digits. These have a high failure rate and we therefore expect a lower
  // success rate than for the shortest representation.
  CHECK_GT(succeeded*1.0/total, 0.85);
  // However, with less than 15 digits the algorithm should almost always
  // succeed.
  CHECK_GT(succeeded_15*1.0/total_15, 0.9999);
}
35
deps/v8/test/cctest/test-log-stack-tracer.cc
vendored
@@ -206,8 +206,21 @@ static Handle<JSFunction> CompileFunction(const char* source) {
}


static void CheckJSFunctionAtAddress(const char* func_name, Address addr) {
  i::Object* obj = i::HeapObject::FromAddress(addr);
static Local<Value> GetGlobalProperty(const char* name) {
  return env->Global()->Get(String::New(name));
}


static Handle<JSFunction> GetGlobalJSFunction(const char* name) {
  Handle<JSFunction> result(JSFunction::cast(
      *v8::Utils::OpenHandle(*GetGlobalProperty(name))));
  return result;
}


static void CheckObjectIsJSFunction(const char* func_name,
                                    Address addr) {
  i::Object* obj = reinterpret_cast<i::Object*>(addr);
  CHECK(obj->IsJSFunction());
  CHECK(JSFunction::cast(obj)->shared()->name()->IsString());
  i::SmartPointer<char> found_name =
@@ -291,6 +304,7 @@ static void CreateTraceCallerFunction(const char* func_name,
#endif

  SetGlobalProperty(func_name, v8::ToApi<Value>(func));
  CHECK_EQ(*func, *GetGlobalJSFunction(func_name));
}


@@ -318,13 +332,13 @@ TEST(CFromJSStackTrace) {
  //   script [JS]
  //     JSTrace() [JS]
  //       JSFuncDoTrace() [JS] [captures EBP value and encodes it as Smi]
  //         trace(EBP) [native (extension)]
  //         trace(EBP encoded as Smi) [native (extension)]
  //           DoTrace(EBP) [native]
  //             StackTracer::Trace
  CHECK_GT(sample.frames_count, 1);
  // Stack tracing will start from the first JS function, i.e. "JSFuncDoTrace"
  CheckJSFunctionAtAddress("JSFuncDoTrace", sample.stack[0]);
  CheckJSFunctionAtAddress("JSTrace", sample.stack[1]);
  CheckObjectIsJSFunction("JSFuncDoTrace", sample.stack[0]);
  CheckObjectIsJSFunction("JSTrace", sample.stack[1]);
}


@@ -356,18 +370,19 @@ TEST(PureJSStackTrace) {
  //   script [JS]
  //     OuterJSTrace() [JS]
  //       JSTrace() [JS]
  //         JSFuncDoTrace() [JS]
  //           js_trace(EBP) [native (extension)]
  //         JSFuncDoTrace() [JS] [captures EBP value and encodes it as Smi]
  //           js_trace(EBP encoded as Smi) [native (extension)]
  //             DoTraceHideCEntryFPAddress(EBP) [native]
  //               StackTracer::Trace
  //
  // The last JS function called. It is only visible through
  // sample.function, as its return address is above captured EBP value.
  CheckJSFunctionAtAddress("JSFuncDoTrace", sample.function);
  CHECK_EQ(GetGlobalJSFunction("JSFuncDoTrace")->address(),
           sample.function);
  CHECK_GT(sample.frames_count, 1);
  // Stack sampling will start from the caller of JSFuncDoTrace, i.e. "JSTrace"
  CheckJSFunctionAtAddress("JSTrace", sample.stack[0]);
  CheckJSFunctionAtAddress("OuterJSTrace", sample.stack[1]);
  CheckObjectIsJSFunction("JSTrace", sample.stack[0]);
  CheckObjectIsJSFunction("OuterJSTrace", sample.stack[1]);
}
20
deps/v8/test/cctest/test-profile-generator.cc
vendored
@@ -89,26 +89,6 @@ TEST(ProfileNodeFindOrAddChild) {
}


TEST(ProfileNodeFindOrAddChildForSameFunction) {
  const char* empty = "";
  const char* aaa = "aaa";
  ProfileNode node(NULL, NULL);
  CodeEntry entry1(i::Logger::FUNCTION_TAG, empty, aaa, empty, 0,
                   TokenEnumerator::kNoSecurityToken);
  ProfileNode* childNode1 = node.FindOrAddChild(&entry1);
  CHECK_NE(NULL, childNode1);
  CHECK_EQ(childNode1, node.FindOrAddChild(&entry1));
  // The same function again.
  CodeEntry entry2(i::Logger::FUNCTION_TAG, empty, aaa, empty, 0,
                   TokenEnumerator::kNoSecurityToken);
  CHECK_EQ(childNode1, node.FindOrAddChild(&entry2));
  // Now with a different security token.
  CodeEntry entry3(i::Logger::FUNCTION_TAG, empty, aaa, empty, 0,
                   TokenEnumerator::kNoSecurityToken + 1);
  CHECK_EQ(childNode1, node.FindOrAddChild(&entry3));
}


namespace {

class ProfileTreeTestHelper {
1
deps/v8/test/mjsunit/fuzz-natives.js
vendored
@@ -129,6 +129,7 @@ var knownProblems = {
  // which means that we have to propagate errors back.
  "SetFunctionBreakPoint": true,
  "SetScriptBreakPoint": true,
  "ChangeBreakOnException": true,
  "PrepareStep": true,

  // Too slow.
118
deps/v8/test/mjsunit/math-floor.js
vendored
@@ -1,118 +0,0 @@
// Copyright 2010 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
//       with the distribution.
//     * Neither the name of Google Inc. nor the names of its
//       contributors may be used to endorse or promote products derived
//       from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// Flags: --max-new-space-size=262144

function zero() {
  var x = 0.5;
  return (function() { return x - 0.5; })();
}

function test() {
  assertEquals(0, Math.floor(0));
  assertEquals(0, Math.floor(zero()));
  assertEquals(1/-0, 1/Math.floor(-0));  // 0 == -0, so we use reciprocals.
  assertEquals(Infinity, Math.floor(Infinity));
  assertEquals(-Infinity, Math.floor(-Infinity));
  assertNaN(Math.floor(NaN));

  assertEquals(0, Math.floor(0.1));
  assertEquals(0, Math.floor(0.5));
  assertEquals(0, Math.floor(0.7));
  assertEquals(-1, Math.floor(-0.1));
  assertEquals(-1, Math.floor(-0.5));
  assertEquals(-1, Math.floor(-0.7));
  assertEquals(1, Math.floor(1));
  assertEquals(1, Math.floor(1.1));
  assertEquals(1, Math.floor(1.5));
  assertEquals(1, Math.floor(1.7));
  assertEquals(-1, Math.floor(-1));
  assertEquals(-2, Math.floor(-1.1));
  assertEquals(-2, Math.floor(-1.5));
  assertEquals(-2, Math.floor(-1.7));

  assertEquals(0, Math.floor(Number.MIN_VALUE));
  assertEquals(-1, Math.floor(-Number.MIN_VALUE));
  assertEquals(Number.MAX_VALUE, Math.floor(Number.MAX_VALUE));
  assertEquals(-Number.MAX_VALUE, Math.floor(-Number.MAX_VALUE));
  assertEquals(Infinity, Math.floor(Infinity));
  assertEquals(-Infinity, Math.floor(-Infinity));

  // 2^30 is a smi boundary.
  var two_30 = 1 << 30;

  assertEquals(two_30, Math.floor(two_30));
  assertEquals(two_30, Math.floor(two_30 + 0.1));
  assertEquals(two_30, Math.floor(two_30 + 0.5));
  assertEquals(two_30, Math.floor(two_30 + 0.7));

  assertEquals(two_30 - 1, Math.floor(two_30 - 1));
  assertEquals(two_30 - 1, Math.floor(two_30 - 1 + 0.1));
  assertEquals(two_30 - 1, Math.floor(two_30 - 1 + 0.5));
  assertEquals(two_30 - 1, Math.floor(two_30 - 1 + 0.7));

  assertEquals(-two_30, Math.floor(-two_30));
  assertEquals(-two_30, Math.floor(-two_30 + 0.1));
  assertEquals(-two_30, Math.floor(-two_30 + 0.5));
  assertEquals(-two_30, Math.floor(-two_30 + 0.7));

  assertEquals(-two_30 + 1, Math.floor(-two_30 + 1));
  assertEquals(-two_30 + 1, Math.floor(-two_30 + 1 + 0.1));
  assertEquals(-two_30 + 1, Math.floor(-two_30 + 1 + 0.5));
  assertEquals(-two_30 + 1, Math.floor(-two_30 + 1 + 0.7));

  // 2^52 is a precision boundary.
  var two_52 = (1 << 30) * (1 << 22);

  assertEquals(two_52, Math.floor(two_52));
  assertEquals(two_52, Math.floor(two_52 + 0.1));
  assertEquals(two_52, two_52 + 0.5);
  assertEquals(two_52, Math.floor(two_52 + 0.5));
  assertEquals(two_52 + 1, two_52 + 0.7);
  assertEquals(two_52 + 1, Math.floor(two_52 + 0.7));

  assertEquals(two_52 - 1, Math.floor(two_52 - 1));
  assertEquals(two_52 - 1, Math.floor(two_52 - 1 + 0.1));
  assertEquals(two_52 - 1, Math.floor(two_52 - 1 + 0.5));
  assertEquals(two_52 - 1, Math.floor(two_52 - 1 + 0.7));

  assertEquals(-two_52, Math.floor(-two_52));
  assertEquals(-two_52, Math.floor(-two_52 + 0.1));
  assertEquals(-two_52, Math.floor(-two_52 + 0.5));
  assertEquals(-two_52, Math.floor(-two_52 + 0.7));

  assertEquals(-two_52 + 1, Math.floor(-two_52 + 1));
  assertEquals(-two_52 + 1, Math.floor(-two_52 + 1 + 0.1));
  assertEquals(-two_52 + 1, Math.floor(-two_52 + 1 + 0.5));
  assertEquals(-two_52 + 1, Math.floor(-two_52 + 1 + 0.7));
}


// Test in a loop to cover the custom IC and GC-related issues.
for (var i = 0; i < 500; i++) {
  test();
}
@@ -29,15 +29,6 @@ assertTrue('abc'[10] === undefined);
String.prototype[10] = 'x';
assertEquals('abc'[10], 'x');

// Test that the fast case character-at stub handles an out-of-bound
// index correctly. We need to call the function twice to initialize
// the character-at stub.
function f() {
  assertEquals('abc'[10], 'x');
}
f();
f();

assertTrue(2[11] === undefined);
Number.prototype[11] = 'y';
assertEquals(2[11], 'y');
17
deps/v8/test/mjsunit/stack-traces.js
vendored
@@ -63,16 +63,6 @@ function testNestedEval() {
  eval("function Outer() { eval('function Inner() { eval(x); }'); Inner(); }; Outer();");
}

function testEvalWithSourceURL() {
  eval("function Doo() { FAIL; }; Doo();\n//@ sourceURL=res://name");
}

function testNestedEvalWithSourceURL() {
  var x = "FAIL";
  var innerEval = 'function Inner() { eval(x); }\n//@ sourceURL=res://inner-eval';
  eval("function Outer() { eval(innerEval); Inner(); }; Outer();\n//@ sourceURL=res://outer-eval");
}

function testValue() {
  Number.prototype.causeError = function () { FAIL; };
  (1).causeError();
@@ -120,7 +110,7 @@ function testTrace(name, fun, expected, unexpected) {
  } catch (e) {
    for (var i = 0; i < expected.length; i++) {
      assertTrue(e.stack.indexOf(expected[i]) != -1,
                 name + " doesn't contain expected[" + i + "] stack = " + e.stack);
                 name + " doesn't contain expected[" + i + "]");
    }
    if (unexpected) {
      for (var i = 0; i < unexpected.length; i++) {
@@ -200,11 +190,6 @@ testTrace("testMethodNameInference", testMethodNameInference, ["at Foo.bar"]);
testTrace("testImplicitConversion", testImplicitConversion, ["at Nirk.valueOf"]);
testTrace("testEval", testEval, ["at Doo (eval at testEval"]);
testTrace("testNestedEval", testNestedEval, ["eval at Inner (eval at Outer"]);
testTrace("testEvalWithSourceURL", testEvalWithSourceURL,
          [ "at Doo (res://name:1:18)" ]);
testTrace("testNestedEvalWithSourceURL", testNestedEvalWithSourceURL,
          [" at Inner (res://inner-eval:1:20)",
           " at Outer (res://outer-eval:1:37)"]);
testTrace("testValue", testValue, ["at Number.causeError"]);
testTrace("testConstructor", testConstructor, ["new Plonk"]);
testTrace("testRenamedMethod", testRenamedMethod, ["Wookie.a$b$c$d [as d]"]);
41
deps/v8/test/mjsunit/this-property-assignment.js
vendored
@@ -1,41 +0,0 @@
// Copyright 2010 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
//       with the distribution.
//     * Neither the name of Google Inc. nor the names of its
//       contributors may be used to endorse or promote products derived
//       from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// Tests the handling of multiple assignments to the same property in a
// constructor that only has simple this property assignments.

function Node() {
  this.a = 1;
  this.a = 2;
  this.a = 3;
}

var n1 = new Node();
assertEquals(3, n1.a);

var n2 = new Node();
assertEquals(3, n2.a);