Blocks insert a leading `nop` instruction in order to fire a "block
call" tracepoint. Block compilation unconditionally inserts a leading
`nop` plus a label right after it:
641f15b1c6/prism_compile.c (L6867-L6869)
This `nop` instruction exists solely to fire the block entry
tracepoint. The label exists so that a block containing a loop fires
the block entry tracepoint only once.
For example, the following code is an infinite loop, but should only
execute the b_call tracepoint once:
```ruby
-> { redo }.call
```
Prior to this commit, we would eliminate the `nop` instruction, but
only if there were no other jump instructions inside the block. This
meant that the following code would still contain a leading `nop` even
though the label following the `nop` is unused:
```ruby
-> { nil if bar }
```
```
== disasm: #<ISeq:block in <main>@test.rb:1 (1,2)-(1,17)> (catch: FALSE)
0000 nop ( 1)[Bc]
0001 putself [Li]
0002 opt_send_without_block <calldata!mid:bar, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0004 branchunless 8
0006 putnil
0007 leave [Br]
0008 putnil
0009 leave [Br]
```
This commit checks whether the label inserted after the `nop` is
actually a jump target. If it's not, then it is safe to eliminate the
leading `nop`:
```
> build-master/miniruby --dump=insns test.rb
== disasm: #<ISeq:<main>@test.rb:1 (1,0)-(1,17)>
0000 putspecialobject 1 ( 1)[Li]
0002 send <calldata!mid:lambda, argc:0, FCALL>, block in <main>
0005 leave
== disasm: #<ISeq:block in <main>@test.rb:1 (1,2)-(1,17)>
0000 putself ( 1)[LiBc]
0001 opt_send_without_block <calldata!mid:bar, argc:0, FCALL|VCALL|ARGS_SIMPLE>
0003 branchunless 7
0005 putnil
0006 leave [Br]
0007 putnil
0008 leave [Br]
```
We have a test for b_call tracepoints that uses `redo` here:
aebf96f371/test/ruby/test_settracefunc.rb (L1728-L1736)
* Avoid pinning `storage_head` in `iseq_mark_and_move`
This refactor changes the behavior of `iseq_mark_and_move` to avoid
pinning `storage_head`. Previously, pinning was required because these
objects could be GC'd during `iseq_set_sequence`, which would leave a
half-built array of instructions. However, in order to implement a
moving Immix algorithm we can't pin these objects, so this refactoring
changes the code to mark and move them. To accomplish this, it was
necessary to add `iseq_size`, `iseq_encoded`, and the `mark_bits` union
to the `iseq_compile_data` struct. In addition, `iseq_compile_data`
sets a bool indicating whether there is a single mark-bits word or a
list of them. While this change is needed for moving Immix, it should
be better for Ruby's GC as well.
* Don't allocate mark_offset_bits for one word
If only one word is needed, we don't need to allocate mark_offset_bits
and can instead directly write to it.
---------
Co-authored-by: Peter Zhu <peter@peterzhu.ca>
Since the opt_aset_with optimization was introduced, using it inside
multiple assignment would result in a segfault or incorrect
instructions.
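A minimal illustration of the affected pattern (hypothetical example,
not taken from the original report):
```ruby
h = {}
# A literal-string index assignment, which normally compiles to
# opt_aset_with, used as a target of multiple assignment:
h["key"], x = 1, 2
```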
Fixes [Bug #21012]
Co-authored-by: Nobuyoshi Nakada <nobu.nakada@gmail.com>
Compilation of NODE_HASH in compile_shareable_literal_constant does
not support hashes that contain a keyword splat.
If there is a keyword splat, fall back to the default case.
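A hedged sketch of the kind of code that now falls back (assuming the
`shareable_constant_value` magic comment routes hash literals through
`compile_shareable_literal_constant`):
```ruby
# shareable_constant_value: experimental_everything
extra = {b: 2}
# Contains a keyword splat, so compilation falls back to the default
# case instead of the shareable-literal path:
CONF = {a: 1, **extra}
```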
... since it is certain to be a String in this context.
Also, I want to avoid the risky combination of `StringValuePtr(str)`
and `RSTRING_LEN(str)` as arguments in the same function call, since
`StringValuePtr` may convert `str` and C leaves argument evaluation
order unspecified.
* Add opt_duparray_send insn to skip the allocation on `#include?`
If the method isn't going to modify the array, we don't need to copy
it. This avoids the allocation / array copy for things like
`[:a, :b].include?(x)`.
This adds a BOP for `include?` and tracks redefinition of it on Array.
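For example (illustrative; assumes `Array#include?` has not been
redefined):
```ruby
def state_allows?(state)
  # The literal array is never modified, so the defensive copy can be
  # skipped by compiling this to opt_duparray_send:
  [:open, :pending].include?(state)
end
```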
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* YJIT: Implement opt_duparray_send include_p
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* Update opt_newarray_send to support simple forms of include?(arg)
Similar to opt_duparray_send but for non-static arrays.
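And the non-static counterpart (illustrative):
```ruby
def either?(x, a, b)
  # The elements are dynamic, so the array would be built with
  # newarray; opt_newarray_send include_p elides the allocation:
  [a, b].include?(x)
end
```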
* YJIT: Implement opt_newarray_send include_p
---------
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* Use FL_USER0 for ELTS_SHARED
This makes space in RString for two bits for chilled strings.
* Mark strings returned by `Symbol#to_s` as chilled
[Feature #20350]
`STR_CHILLED` now spans two user flags: if one bit is set it marks a
chilled string literal, if the other is set it marks a `Symbol#to_s`
chilled string.
Since it's not possible, and doesn't make much sense, to include debug
info for these strings when `--debug-frozen-string-literal` is set, we
can't include the allocation source, but we can safely include the
symbol name in the warning message, making it much easier to find the
source of the issue.
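Illustrative behavior (assuming deprecation warnings are enabled; the
exact message wording may differ):
```ruby
s = :foo.to_s
s << "bar" # warns, and the message names the symbol (:foo)
```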
Co-Authored-By: Étienne Barrié <etienne.barrie@gmail.com>
---------
Co-authored-by: Étienne Barrié <etienne.barrie@gmail.com>
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
Add a new warning category, `strict_unused_block`, to show the unused
block warning strictly.
```ruby
class C
def f = nil
end
class D
def f = yield
end
[C.new, D.new].each{|obj| obj.f{}}
```
In this case, `D#f` accepts a block but `C#f` doesn't. There are
call sites that pass a block with `obj.f{}` where `obj` is either a
`C` or a `D`. To avoid warnings in such cases, the "unused block
warning" is only shown if no method with the same name accepts a
block.
In the above example, `C.new.f{}` doesn't show any warning because
there is a same-named method `D#f` which accepts a block.
We call this default behavior "relax mode".
The new warning category `strict_unused_block` switches from "relax
mode" to "strict mode": same-named methods are not checked, and
`C.new.f{}` will be warned.
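A minimal sketch of opting in, using the standard `Warning[]`
category interface:
```ruby
Warning[:strict_unused_block] = true
C.new.f{} # now warned: C#f does not use the block
```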
[Feature #15554]
* YJIT: Replace Array#each only when YJIT is enabled
* Add comments about BUILTIN_ATTR_C_TRACE
* Make Ruby Array#each available with --yjit as well
* Fix all paths that expect a C location
* Use method_basic_definition_p to detect patches
* Copy a comment about C_TRACE flag to compilers
* Rephrase a comment about add_yjit_hook
* Give METHOD_ENTRY_BASIC flag to Array#each
* Add --yjit-c-builtin option
* Allow inconsistent source_location in test-spec
* Refactor a check of BUILTIN_ATTR_C_TRACE
* Set METHOD_ENTRY_BASIC without touching vm->running
[Feature #20205]
The warning now suggests running with --debug-frozen-string-literal:
```
test.rb:3: warning: literal string will be frozen in the future (run with --debug-frozen-string-literal for more information)
```
When using --debug-frozen-string-literal, the location where the string
was created is shown:
```
test.rb:3: warning: literal string will be frozen in the future
test.rb:1: info: the string was created here
```
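For reference, a hypothetical test.rb matching the locations above
(assuming deprecation warnings are enabled and no
frozen_string_literal magic comment is present):
```ruby
str = "hello"   # line 1: the literal string is created here
x = nil
str << " world" # line 3: mutating the chilled string triggers the warning
```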
When resurrecting strings with debug mode disabled, the overhead is a
simple `FL_TEST_RAW`.
When mutating chilled strings with deprecation warnings disabled, the
overhead is a simple check of whether the warning category is enabled.
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
This caused an issue when `defined?` was in the `if` condition: if a
compile-time known logical short-circuit happened before the
`defined?`, its instructions weren't appended to the instruction
sequence even though it was compiled. The catch table entry
(`defined?` compilation produces a catch table entry) was still on the
iseq even though the instructions weren't there. This caused faulty
exception handling in the method.
The solution is to not add the catch table entry for `defined?` after
a compile-time known logical short-circuit.
This shouldn't touch much code; it only affects cases like the
following, which can occur during debugging:
```ruby
if false && defined?(Some::CONSTANT)
  "more code..."
end
```
Fixes [Bug #20501]
`add_adjust_info` will increment the `insns_info_index`, so we need
to set the `node_id` to -1 to prevent a "Conditional jump or move
depends on uninitialised value" in Valgrind.
Previously, this would delete the key in `h` before keyword
splatting `h`. This goes against how Ruby handles `f(*a, &a.pop)`
and similar expressions.
Fix this by having the compiler check whether the block pass
expression is safe. If it is not safe, then dup the keyword
splatted hash before evaluating the block pass expression.
For the expression `h = nil; f(**h, &h.delete(:key))`:
VM instructions before:
```
0000 putnil ( 1)[Li]
0001 setlocal_WC_0 h@0
0003 putself
0004 getlocal_WC_0 h@0
0006 getlocal_WC_0 h@0
0008 putobject :key
0010 opt_send_without_block <calldata!mid:delete, argc:1, ARGS_SIMPLE>
0012 splatkw
0013 send <calldata!mid:f, argc:1, ARGS_BLOCKARG|FCALL|KW_SPLAT>, nil
0016 leave
```
VM instructions after:
```
0000 putnil ( 1)[Li]
0001 setlocal_WC_0 h@0
0003 putself
0004 putspecialobject 1
0006 newhash 0
0008 getlocal_WC_0 h@0
0010 opt_send_without_block <calldata!mid:core#hash_merge_kwd, argc:2, ARGS_SIMPLE>
0012 getlocal_WC_0 h@0
0014 putobject :key
0016 opt_send_without_block <calldata!mid:delete, argc:1, ARGS_SIMPLE>
0018 send <calldata!mid:f, argc:1, ARGS_BLOCKARG|FCALL|KW_SPLAT|KW_SPLAT_MUT>, nil
0021 leave
```
This is the same as 07d3bf4832532ae7446c9a6924d79aed60a7a9a5, except that
it removes unnecessary hash allocations when using the prism compiler.
Fixes [Bug #20640]
If a Hash which is empty or contains only literals is frozen, we
detect this in a peephole optimization and change the instructions to
`opt_hash_freeze`.
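For example (illustrative):
```ruby
# The literal hash plus .freeze is rewritten by the peephole
# optimizer into a single opt_hash_freeze instruction:
DEFAULTS = {retries: 3, timeout: 5}.freeze
```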
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
If an Array which is empty or contains only literals is frozen, we
detect this in a peephole optimization and change the instructions to
`opt_ary_freeze`.
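Similarly (illustrative):
```ruby
# The literal array plus .freeze is rewritten by the peephole
# optimizer into a single opt_ary_freeze instruction:
STATES = [:open, :closed].freeze
```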
[Feature #20684]
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
The `f(arg, *arg, **arg, **arg)` case was previously not optimized.
The optimizer didn't optimize this case because of the multiple
keyword splats, and the compiler didn't optimize it because the
`f(*arg, **arg, **arg)` optimization added in
0ee3960685e283d8e75149a8777eb0109d41509a didn't apply.
I found it difficult to apply this optimization without changing
the `setup_args_core` API, since by the time you get to the ARGSCAT
case, you don't know whether you were called recursively or directly,
so I'm not sure if it was possible to know at that point whether the
array allocation could be avoided.
This changes the dup_rest argument in `setup_args_core` from an int
to a pointer to int. This allows us to track whether we have allocated
a caller-side array for multiple splats or splat+post across
recursive calls. Check the pointed value (`*dup_rest`) to determine
the `splatarray` argument. If `*dup_rest` is 1, then use `splatarray
true` (caller-side array allocation), then set `*dup_rest` back to 0,
ensuring only a single `splatarray true` per method call.
Before calling `setup_args_core`, check whether the array allocation
can be avoided safely using `splatarray false`. Optimizable cases are:
```
// f(*arg)
SPLAT
// f(1, *arg)
ARGSCAT
LIST
// f(*arg, **arg)
ARGSPUSH
SPLAT
HASH nd_brace=0
// f(1, *arg, **arg)
ARGSPUSH
ARGSCAT
LIST
HASH nd_brace=0
```
If so, dup_rest is set to 0 instead of 1 to avoid the allocation.
After calling `setup_args_core`, check the flag. If the flag
includes `VM_CALL_ARGS_SPLAT`, and the pointed value has changed,
indicating `splatarray true` was used, then also set
`VM_CALL_ARGS_SPLAT_MUT` in the flag.
My initial attempt at this broke the `f(*ary, &ary.pop)` test,
because we were not duplicating the ary in the splat even though
it was modified later (evaluation order issue). The initial attempt
would also break `f(*ary, **ary.pop)` or `f(*ary, kw: ary.pop)` cases
for the same reason. I added test cases for those evaluation
order issues.
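A sketch of the evaluation-order hazard these tests cover
(hypothetical method `f`; the call shapes are the ones named above):
```ruby
ary = [1, 2, 3]
f(*ary, &ary.pop)    # block pass expression mutates ary
f(*ary, **ary.pop)   # keyword splat expression mutates ary
f(*ary, kw: ary.pop) # keyword value expression mutates ary
# In each case the splat must be copied first (splatarray true) so the
# call receives 1, 2, 3 rather than the mutated array.
```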
Add a `setup_args_dup_rest_p` static function that checks whether a
given node is safe. Call it on the block pass node to determine
whether the block pass node is safe, and also call it on each of the
hash key/value nodes to test that they are safe. If any are not safe,
then set dup_rest = 1 so that `splatarray true` will be used to
avoid the evaluation order issue.
This new approach has the effect of optimizing most cases of
literal keywords after positional splats. Previously, only
static keyword hashes after positional splats avoided array
allocation for the splat. Now, most dynamic keyword hashes
after positional splats also avoid array allocation.
Add allocation tests for dynamic keyword hashes after
positional splats.
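A hedged illustration of a newly optimized shape (hypothetical method
`f`):
```ruby
ary = [1, 2]
v = 3
f(*ary, kw: v) # dynamic keyword hash; the splat now avoids the array allocation
```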
setup_args_dup_rest_p is currently fairly conservative. It
could definitely be expanded to handle additional node types
to reduce allocations in additional cases.
For calls such as:
```ruby
m(*ary, a: 2, **h)
m(*ary, **h, **h, **h)
```
Where m does not take a positional argument splat, there was previously
an array allocation (splatarray true) to dup ary, even though it was not
necessary to do so. This is because the elimination of the array allocation
(splatarray false) was performed in the optimizer, and the optimizer didn't
handle this case, because the instructions for the keywords can be of
arbitrary length.
Move part of the optimization from the optimizer to the compiler,
detecting parse trees of the form:
```
ARGS_PUSH
  head: SPLAT
  tail: HASH (without brace)
```
and using `splatarray false` instead of `splatarray true` for them.
Unfortunately, moving part of the optimization to the compiler broke
the hash allocation elimination optimization for calls of the
form:
```ruby
m(*ary, a: 2)
```
That's because the compiler had already set splatarray false,
and the optimizer code was looking for splatarray true.
Split the array allocation elimination and hash allocation
elimination in the optimizer so that the hash allocation
elimination will still apply if the compiler performs the
splatarray false optimization.
While working on a separate issue we found that in some cases
`ary_heap_realloc` was being called on frozen arrays. To fix this, this
change does the following:
1) Updates `rb_ary_freeze` to assert the type is an array, return if
already frozen, and shrink the capacity if it is not embedded, shared
or a shared root.
2) Replaces `rb_obj_freeze` with `rb_ary_freeze` when the object is
always an array.
3) In `ary_heap_realloc`, ensure the new capa is set with
`ARY_SET_CAPA`. Previously the updated capa was not recorded.
4) Adds an assertion to `ary_heap_realloc` that the array is not frozen.
Some of this work was originally done in
https://github.com/ruby/ruby/pull/2640, referencing this issue
https://bugs.ruby-lang.org/issues/16291. There didn't appear to be any
objections to this PR; it appears to have simply lost traction.
The original PR made changes to arrays and strings at the same time;
this PR only does arrays. Also, it was old enough that rather than
revive that branch I've made a new one. I added Lourens as co-author
in addition to Aaron, who helped me with this patch.
The original PR made this change for performance reasons, and while
that's still true for this PR, the goal of this PR is to avoid
calling `ary_heap_realloc` on frozen arrays. The capacity should be
shrunk _before_ the array is frozen, not after.
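A rough way to observe the shrink from Ruby (illustrative; exact
sizes vary by platform and build):
```ruby
require "objspace"

a = Array.new(1000) { _1 }
a.slice!(10..)                 # large heap capacity, few remaining elements
before = ObjectSpace.memsize_of(a)
a.freeze                       # rb_ary_freeze shrinks the capacity
after = ObjectSpace.memsize_of(a)
p before: before, after: after # expect after <= before
```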
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: methodmissing <lourens@methodmissing.com>