`ruby --yjit --zjit` already warns and exits, but it was still possible
to enable both with `ruby --zjit -e 'RubyVM::YJIT.enable'`.
This commit prevents that with a warning and an early return. (We could
also exit, but that seems a bit unfriendly once we're already running
the program.)
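For illustration, here is a minimal Ruby sketch of the "warn and return early" shape; `zjit_enabled?` and `do_enable_yjit` below are hypothetical stand-ins, not the interpreter's real internals:
```ruby
# Hypothetical sketch only -- the real check lives inside RubyVM::YJIT.enable.
def zjit_enabled?
  true # stand-in for "ZJIT is already running"
end

def do_enable_yjit
  nil  # stand-in for the real enabling path
end

def enable_yjit
  if zjit_enabled?
    warn "ZJIT is enabled; not enabling YJIT"
    return false # early return instead of exiting the running program
  end
  do_enable_yjit
  true
end

enable_yjit # warns and returns false rather than enabling both JITs
```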
Co-authored-by: ywenc <ywenc@github.com>
Ivars will no longer be the only thing stored inline
via shapes, so keeping the `iv_index` and `ivptr` names
would be confusing.
Instance variables won't be the only thing stored inline
via shapes, so keeping the `ivptr` name would be confusing.
`field` encompasses anything that can be stored in a VALUE array.
Similarly, `gen_ivtbl` becomes `gen_fields_tbl`.
Working towards having YJIT and ZJIT in the same build, we need to
deduplicate some glue code that would otherwise cause name collisions.
Add jit.c for this and build it for both YJIT and ZJIT builds. Update
bindgen to look at jit.c; this shuffles some functions in the output, but
the set of functions shouldn't have changed.
The instruction counter slows down multi-Ractor applications. I had
changed it to use a thread local, but using a thread local slows down
single-threaded applications. This commit enables the instruction
counter only in YJIT stats builds until we can figure out a way to gather
the information with lower overhead.
Co-authored-by: Randy Stauner <randy.stauner@shopify.com>
`rb_vm_insns_count` is a global variable used for reporting YJIT
statistics. It is a counter that tallies the number of interpreter
instructions that have been executed; this way we can approximate how
much time we're spending in YJIT compared to the interpreter.
Unfortunately keeping this statistic means that every instruction
executed in the interpreter loop must increment the counter. Normally
this isn't a problem, but in multi-threaded situations (when Ractors are
used), incrementing this counter can become quite costly due to page
caching issues.
Additionally, since there is no locking when incrementing this global,
the count can't really make sense in a multi-threaded environment.
This commit changes `rb_vm_insns_count` to a thread local. That way each
Ractor has its own copy of the counter and incrementing the counter
becomes quite cheap. Of course this means that in multi-threaded
situations, the value doesn't really make sense (but it didn't make
sense before because of the lack of locking).
The counter is used for YJIT statistics, and since YJIT is basically
disabled when Ractors are in use, I don't think we care about
inaccuracies (for the time being). We can revisit this counter when we
give YJIT multi-threading support, but for the time being this commit
restores multi-threaded performance.
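For a sense of the kind of workload involved (this is only an illustration of the shape, not the actual `test.rb` from the bug report):
```ruby
# Illustrative only -- not the test.rb referenced below. Several Ractors run
# tight CPU-bound loops, so every interpreter instruction previously bumped
# the shared rb_vm_insns_count counter from multiple cores at once.
ractors = 8.times.map do
  Ractor.new do
    sum = 0
    5_000_000.times { |i| sum += i }
    sum
  end
end
p ractors.map(&:take)
```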
To test this, I used the benchmark in [Bug #20489].
Here is the performance on Ruby 3.2:
```
$ time RUBY_MAX_CPU=12 ./miniruby -v ../test.rb 8 8
ruby 3.2.0 (2022-12-25 revision a528908271) [x86_64-linux]
[0...1, 1...2, 2...3, 3...4, 4...5, 5...6, 6...7, 7...8]
../test.rb:43: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
________________________________________________________
Executed in 2.53 secs fish external
usr time 19.86 secs 370.00 micros 19.86 secs
sys time 0.02 secs 320.00 micros 0.02 secs
```
We can see the regression in performance on the master branch:
```
$ time RUBY_MAX_CPU=12 ./miniruby -v ../test.rb 8 8
ruby 3.5.0dev (2025-01-10T16:22:26Z master 4a2702dafb) +PRISM [x86_64-linux]
[0...1, 1...2, 2...3, 3...4, 4...5, 5...6, 6...7, 7...8]
../test.rb:43: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
________________________________________________________
Executed in 24.87 secs fish external
usr time 195.55 secs 0.00 micros 195.55 secs
sys time 0.00 secs 716.00 micros 0.00 secs
```
Here are the stats after this commit:
```
$ time RUBY_MAX_CPU=12 ./miniruby -v ../test.rb 8 8
ruby 3.5.0dev (2025-01-10T20:37:06Z tl 3ef0432779) +PRISM [x86_64-linux]
[0...1, 1...2, 2...3, 3...4, 4...5, 5...6, 6...7, 7...8]
../test.rb:43: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
________________________________________________________
Executed in 2.46 secs fish external
usr time 19.34 secs 381.00 micros 19.34 secs
sys time 0.01 secs 321.00 micros 0.01 secs
```
[Bug #20489]
Use PR_SET_VMA_ANON_NAME to set human-readable names for anonymous
virtual memory areas mapped by `mmap()` when compiled and run on Linux
5.17 or higher. This makes it convenient for developers to debug mmap.
I was looking at some crash reports and noticed that many have a line
like the following for YJIT code memory:
```
<addr>-<addr> r-xp 00000000 00:00 0 [heap]
```
I guess YJIT confused the kernel into thinking this region is from
sbrk(). While this seems to have no consequences beyond mislabeling,
it's still a little concerning.
Probe downwards instead.
* YJIT: Replace Array#each only when YJIT is enabled
* Add comments about BUILTIN_ATTR_C_TRACE
* Make Ruby Array#each available with --yjit as well
* Fix all paths that expect a C location
* Use method_basic_definition_p to detect patches
* Copy a comment about C_TRACE flag to compilers
* Rephrase a comment about add_yjit_hook
* Give METHOD_ENTRY_BASIC flag to Array#each
* Add --yjit-c-builtin option
* Allow inconsistent source_location in test-spec
* Refactor a check of BUILTIN_ATTR_C_TRACE
* Set METHOD_ENTRY_BASIC without touching vm->running
* YJIT: Add `--yjit-compilation-log` flag to print out the compilation log at exit.
* YJIT: Add an option to enable the compilation log at runtime.
* YJIT: Fix a typo in the `IseqPayload` docs.
* YJIT: Add stubs for getting the YJIT compilation log in memory.
* YJIT: Add a compilation log based on a circular buffer to cap the log size.
* YJIT: Allow specifying either a file or directory name for the YJIT compilation log.
The compilation log will be populated as compilation events occur. If a directory is supplied, then a filename based on the PID will be used as the write target. If a file name is supplied instead, the log will be written to that file.
* YJIT: Add JIT compilation of C function substitutions to the compilation log.
* YJIT: Add compilation events to the circular buffer even if output is sent to a file.
Previously, the two modes were treated as being exclusive of one another. However, it could be beneficial to log all events to a file while also allowing for direct access of the last N events via `RubyVM::YJIT.compilation_log` (see the usage sketch after this list).
* YJIT: Make timestamps the first element in the YJIT compilation log tuple.
* YJIT: Stream log to stderr if `--yjit-compilation-log` is supplied without an argument.
* YJIT: Eagerly compute compilation log messages to avoid hanging on to references that may be GCed.
* YJIT: Log all compiled blocks, not just the method entry points.
* YJIT: Remove all compilation events other than block compilation to slim down the log.
* YJIT: Replace circular buffer iterator with a consuming loop.
* YJIT: Support `--yjit-compilation-log=quiet` as a way to activate the in-memory log without printing it.
Co-authored-by: Randy Stauner <randy.stauner@shopify.com>
* YJIT: Promote the compilation log to being the one YJIT log.
Co-authored-by: Randy Stauner <randy.stauner@shopify.com>
* Update doc/yjit/yjit.md
* Update doc/yjit/yjit.md
---------
Co-authored-by: Randy Stauner <randy.stauner@shopify.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
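A hedged usage sketch of the in-memory log described above, using the names from this PR (`--yjit-compilation-log` and `RubyVM::YJIT.compilation_log`) and assuming it returns the `[timestamp, message]` tuples mentioned in the commits; the final API may differ:
```ruby
# Run with something like: ruby --yjit-compilation-log=quiet app.rb
# Assumes RubyVM::YJIT.compilation_log returns [timestamp, message] tuples.
at_exit do
  RubyVM::YJIT.compilation_log.each do |timestamp, message|
    warn "#{timestamp} #{message}"
  end
end
```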
* YJIT: Encode doubles to VALUE objects and move stat generation to rust
Stats that can now be generated from rust have been moved there.
* Move object_shape_count call for runtime_stats to rust
This reduces the ruby method to a single primitive.
* Change hash_aset_usize from macro to function
YJIT currently uses the YJIT root object to mark objects during GC and
update references during compaction. This object otherwise serves no
purpose.
This commit changes YJIT to do this work as a step when marking the GC
roots, removing the need for the root object. This saves some memory that
would otherwise be allocated from the system and the GC.
This commit splits gc.c into two files:
- gc.c now only contains code not specific to Ruby's GC implementation. This includes
code to mark objects (which the GC implementation may choose not to
use) and wrappers for internal APIs that the implementation may need
to use (e.g. locking the VM).
- gc_impl.c now contains the implementation of Ruby's GC. This includes
marking, sweeping, compaction, and statistics. Most importantly,
gc_impl.c only uses public APIs in Ruby and a limited set of functions
exposed in gc.c. This allows us to build gc_impl.c independently of
Ruby and plug Ruby's GC into itself.
This patch optimizes forwarding callers and callees. It only optimizes methods that take `...` as their only parameter and then pass `...` on to other calls.
Calls it optimizes look like this:
```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```
```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```
```ruby
def bar(*a) = a
def foo(...)
list = [1, 2]
bar(*list, ...) # optimized
end
foo(123)
```
All variants of the above but using `super` are also optimized, including a bare super like this:
```ruby
def foo(...)
super
end
```
This patch eliminates intermediate allocations made when calling methods that accept `...`.
We can observe allocation elimination like this:
```ruby
def m
x = GC.stat(:total_allocated_objects)
yield
GC.stat(:total_allocated_objects) - x
end
def bar(a) = a
def foo(...) = bar(...)
def test
m { foo(123) }
end
test
p test # allocates 1 object on master, but 0 objects with this patch
```
```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)
def test
m { foo(1, b: 2) }
end
test
p test # allocates 2 objects on master, but 0 objects with this patch
```
How does it work?
-----------------
This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.
I think this description is kind of confusing, so let's walk through an example with code.
```ruby
def delegatee(a, b) = a + b
def delegator(...)
delegatee(...) # CI2 (FORWARDING)
end
def caller
delegator(1, 2) # CI1 (argc: 2)
end
```
Before we call the delegator method, the stack looks like this:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # |
5| delegatee(...) # CI2 (FORWARDING) |
6| end |
7| |
8| def caller |
-> 9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls in
to `delegator`, it writes `CI1` on to the stack as a local variable for the
`delegator` method. The `delegator` method has a special local called `...`
that holds the caller's CI object.
Here is the ISeq disasm for `delegator`:
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
The local called `...` will contain the caller's CI: CI1.
Here is the stack when we enter `delegator`:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
-> 4| # | CI1 (argc: 2)
5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller |
9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The CI at `delegatee` on line 5 is tagged as "FORWARDING", so it knows to
memcopy the caller's stack before calling `delegatee`. In this case, it will
memcopy self, 1, and 2 to the stack before calling `delegatee`. It knows how much
memory to copy from the caller because `CI1` contains stack size information
(argc: 2).
Before executing the `send` instruction, we push `...` on the stack. The
`send` instruction pops `...`, and because it is tagged with `FORWARDING`, it
knows to memcopy (using the information in the CI it just popped):
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
Instruction 0001 puts the caller's CI on the stack. `send` is tagged with
FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # | CI1 (argc: 2)
-> 5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller | self
9| delegator(1, 2) # CI1 (argc: 2) | 1
10| end | 2
```
The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the `...` value, as well as
perfectly forward splat args, kwargs, etc.
Since we're able to copy the stack from `caller` in to `delegator`'s stack, we
can avoid allocating objects.
I want to do this to eliminate object allocations for delegating methods.
My long-term goal is to implement `Class#new` in Ruby, and it uses `...`.
I was able to implement `Class#new` in Ruby
[here](https://github.com/ruby/ruby/pull/9289).
If we adopt the technique in this patch, then we can optimize allocating
objects that take keyword parameters for `initialize`.
For example, this code will allocate 2 objects: one for `SomeObject`, and one
for the kwargs:
```ruby
SomeObject.new(foo: 1)
```
If we combine this technique, plus implement `Class#new` in Ruby, then we can
reduce allocations for this common operation.
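For illustration, a rough Ruby-level sketch of what a `Class#new` built on `...` could look like; this is not the code from the linked PR, just the general shape:
```ruby
# Rough sketch only -- do not monkey-patch Class#new like this in real code.
class Class
  def new(...)
    obj = allocate
    obj.__send__(:initialize, ...) # forwards args, kwargs, and block
    obj
  end
end

class Point
  def initialize(x:, y:)
    @x, @y = x, y
  end
end

# With the forwarding optimization in this patch, the kwargs here would not
# need an intermediate hash allocation on their way to initialize.
p Point.new(x: 1, y: 2)
```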
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Co-Authored-By: Alan Wu <XrXr@users.noreply.github.com>
* Implement BitVector data structure for variable-length context encoding
* Rename method to make intent clearer
* Rename write_uint => push_uint to make intent clearer
* Implement debug trait for BitVector
* Fix bug in BitVector::read_uint_at(), enable more tests
* Add one more test for good measure
* Start sketching Context::encode()
* Progress on variable length context encoding
* Add tests. Fix bug.
* Encode stack state
* Add comments. Try to estimate context encoding size.
* More compact encoding for stack size
* Commit before rebase
* Change Context::encode() to take a BitVector as input
* Refactor BitVector::read_uint(), add helper read functions
* Implement Context::decode() function. Add test.
* Fix bug, add tests
* Rename methods
* Add Context::encode() and decode() methods using global data
* Make encode and decode methods use u32 indices
* Refactor YJIT to use variable-length context encoding
* Tag functions as allow unused
* Add a simple caching mechanism and stats for bytes per context etc
* Add comments, fix formatting
* Grow vector of bytes by 1.2x instead of 2x
* Add debug assert to check round-trip encoding-decoding
* Take some rustfmt formatting
* Add decoded_from field to Context to reuse previous encodings
* Remove old context stats
* Re-add stack_size assert
* Disable decoded_from optimization for now
Previously, the update was done in the ISEQ callback. That effectively
never updated anything because the callback itself is given an intact
reference, so it could update its content, and `rb_gc_location(iseq)`
never returned a new address. Update the whole table once in the YJIT
root instead.
* Revert "Revert "YJIT: Optimize local variables when EP == BP" (#10584)"
This reverts commit c8783441952217c18e523749c821f82cd7e5d222.
* YJIT: Take care of GC references in ISEQ invariants
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
---------
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
This reverts commit 4cc58ea0b865f2fd20f1e881ddbd4c4fab0b072c.
Since the change landed, call-threshold=1 CI runs have been timing out.
There have also been `verify-ctx` violations. Revert for now while we debug.
This `st_table` is used to both mark and pin classes
defined from the C API. But `vm->mark_object_ary` already
does both much more efficiently.
Currently a Ruby process starts with 252 rooted classes,
which uses `7224B` in an `st_table` or `2016B` in an `RArray`.
So that's a baseline of 5kB saved, but since `mark_object_ary` is
preallocated with `1024` slots yet only uses `405` of them,
it's a net `7kB` save.
`vm->mark_object_ary` is also being refactored.
Prior to this change, `mark_object_ary` was a regular `RArray`, but
since this allows for references to be moved, it was marked a second
time from `rb_vm_mark()` to pin these objects.
This has the detrimental effect of marking these references on every
minor GC even though it's a mostly append-only list.
But by using a custom TypedData we can avoid having to mark
all the references on minor GC runs.
Additionally, immediate values are now ignored and not appended
to `vm->mark_object_ary` as it's just wasted space.
Usually we deal with splats by speculating that they're of a specific
size. In this case, the C method takes a pointer and a length, so
we can support changing sizes just fine.
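As an illustration of the call shape this covers, assuming a C-implemented method that receives its arguments as an argc/argv pair (for example `Array#push`):
```ruby
# The splat's size differs between calls at the same call site; because the
# C method receives (argc, argv), no fixed-size speculation is required.
def append_all(arr, items)
  arr.push(*items)
end

p append_all([], [1])        # => [1]
p append_all([0], [1, 2, 3]) # => [0, 1, 2, 3]
```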
This is the same optimization as e4272fd29 ("Avoid allocation when
passing no keywords to anonymous kwrest methods") but for YJIT. For
anonymous kwrest parameters, nil is just as good as an empty hash.
On the usage side, update `splatkw` to handle `nil` with a leaner path.
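To make the affected shape concrete (illustrative only; exact allocation counts depend on the Ruby version and whether YJIT has compiled the call):
```ruby
# An anonymous kwrest parameter (`**`) that receives no keywords. With this
# change, nil stands in for the kwrest instead of a freshly allocated {}.
def allocations
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

def takes_kwrest(**)
  nil
end

allocations { takes_kwrest }    # warm up so the call can be compiled
p allocations { takes_kwrest }  # expected to show no empty-hash allocation
```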
YJIT didn't guard for ruby2_keywords hash in case of splat calls that
land in methods with a rest parameter, creating incorrect results.
The compile-time checks didn't correspond to any actual effects of
ruby2_keywords, so they were masking this bug and YJIT was needlessly
refusing to compile some code. About 16% of fallback reasons in
`lobsters` were due to the ISeq check.
We already handle the tagging part with
exit_if_supplying_kw_and_has_no_kw() and should now have a dynamic guard
for all splat cases.
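To make the affected pattern concrete (illustrative only; this exact snippet is not claimed to have misbehaved): a ruby2_keywords-flagged hash travelling through a splat into a callee with a rest parameter.
```ruby
class Relay
  # args captures (1, k: 2) with the trailing hash flagged by ruby2_keywords.
  ruby2_keywords def relay(*args)
    collect(*args) # splat call site landing in a method with a rest param
  end

  def collect(*rest)
    rest
  end
end

# YJIT now guards dynamically for the flagged hash at splat call sites.
p Relay.new.relay(1, k: 2)
```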
Note for backporting: You also need 7f51959ff1.
[Bug #20195]
```
warning: unused import: `condition::Condition`
--> src/asm/arm64/arg/mod.rs:13:9
|
13 | pub use condition::Condition;
| ^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
warning: unused import: `rb_yjit_fix_mul_fix as rb_fix_mul_fix`
--> src/cruby.rs:188:9
|
188 | pub use rb_yjit_fix_mul_fix as rb_fix_mul_fix;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
warning: unused import: `rb_insn_len as raw_insn_len`
--> src/cruby.rs:142:9
|
142 | pub use rb_insn_len as raw_insn_len;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
```
Make asm public so it stops warning about unused public stuff in there.
* Unify length field for embedded and heap strings
The length field is of the same type and position in RString for both
embedded and heap allocated strings, so we can unify it.
* Remove RSTRING_EMBED_LEN
yjit-trace-exits appends a synthetic sample for the instruction being
exited, but we didn't increment the size of the stack. Correcting this
count lets us successfully generate a flamegraph from the exits.
I also replaced the line number for instructions with 0, as I don't
think the previous value had meaning.
Co-authored-by: Adam Hess <HParker@github.com>