[ruby/json] Encoding benchmark updates
Remove `rapidjson` as it's 2x slower on most benchmarks, and on par on a couple of them, so it's not telling us much here.

Configure `Oj` in compat mode so it generates the same JSON on the `many #to_json` benchmark.

```
== Encoding small nested array (121 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
        json (reuse)   220.202k i/100ms
                json   162.190k i/100ms
                  oj   222.094k i/100ms
Calculating -------------------------------------
        json (reuse)      2.322M (± 1.3%) i/s    (430.72 ns/i) -     11.671M in   5.027655s
                json      1.707M (± 1.2%) i/s    (585.76 ns/i) -      8.596M in   5.035996s
                  oj      2.248M (± 1.4%) i/s    (444.94 ns/i) -     11.327M in   5.040712s

Comparison:
        json (reuse):  2321686.9 i/s
                  oj:  2247509.6 i/s - 1.03x  slower
                json:  1707179.3 i/s - 1.36x  slower

== Encoding small hash (65 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
        json (reuse)   446.184k i/100ms
                json   265.594k i/100ms
                  oj   653.226k i/100ms
Calculating -------------------------------------
        json (reuse)      4.980M (± 1.4%) i/s    (200.82 ns/i) -     24.986M in   5.018729s
                json      2.763M (± 1.8%) i/s    (361.94 ns/i) -     13.811M in   5.000434s
                  oj      7.232M (± 1.4%) i/s    (138.28 ns/i) -     36.581M in   5.059377s

Comparison:
        json (reuse):  4979642.4 i/s
                  oj:  7231624.4 i/s - 1.45x  faster
                json:  2762890.1 i/s - 1.80x  slower

== Encoding mixed utf8 (5003001 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json    34.000 i/100ms
                  oj    36.000 i/100ms
Calculating -------------------------------------
                json    357.772 (± 4.8%) i/s      (2.80 ms/i) -      1.802k in   5.047308s
                  oj    327.521 (± 1.5%) i/s      (3.05 ms/i) -      1.656k in   5.057241s

Comparison:
                json:      357.8 i/s
                  oj:      327.5 i/s - 1.09x  slower

== Encoding mostly utf8 (5001001 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json    26.000 i/100ms
                  oj    36.000 i/100ms
Calculating -------------------------------------
                json    294.357 (±10.5%) i/s      (3.40 ms/i) -      1.456k in   5.028862s
                  oj    352.826 (± 8.2%) i/s      (2.83 ms/i) -      1.764k in   5.045651s

Comparison:
                json:      294.4 i/s
                  oj:      352.8 i/s - same-ish: difference falls within error

== Encoding twitter.json (466906 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json   206.000 i/100ms
                  oj   229.000 i/100ms
Calculating -------------------------------------
                json      2.064k (± 9.3%) i/s    (484.55 μs/i) -     10.300k in   5.056409s
                  oj      2.121k (± 8.4%) i/s    (471.47 μs/i) -     10.534k in   5.012315s

Comparison:
                json:     2063.8 i/s
                  oj:     2121.0 i/s - same-ish: difference falls within error

== Encoding citm_catalog.json (500298 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json   119.000 i/100ms
                  oj   126.000 i/100ms
Calculating -------------------------------------
                json      1.317k (± 2.3%) i/s    (759.18 μs/i) -      6.664k in   5.061781s
                  oj      1.261k (± 2.9%) i/s    (793.11 μs/i) -      6.300k in   5.000714s

Comparison:
                json:     1317.2 i/s
                  oj:     1260.9 i/s - same-ish: difference falls within error

== Encoding canada.json (2090234 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json     1.000 i/100ms
                  oj     1.000 i/100ms
Calculating -------------------------------------
                json     19.590 (± 0.0%) i/s     (51.05 ms/i) -     98.000 in   5.004485s
                  oj     19.003 (± 0.0%) i/s     (52.62 ms/i) -     95.000 in   5.002276s

Comparison:
                json:       19.6 i/s
                  oj:       19.0 i/s - 1.03x  slower

== Encoding many #to_json calls (2701 bytes)
ruby 3.4.0preview2 (2024-10-07 master 32c733f57b) +YJIT +PRISM [arm64-darwin23]
Warming up --------------------------------------
                json     2.556k i/100ms
                  oj     2.332k i/100ms
Calculating -------------------------------------
                json     25.367k (± 1.7%) i/s    (39.42 μs/i) -    127.800k in   5.039438s
                  oj     23.743k (± 1.5%) i/s    (42.12 μs/i) -    118.932k in   5.010303s

Comparison:
                json:    25367.3 i/s
                  oj:    23743.3 i/s - 1.07x  slower
```

https://github.com/ruby/json/commit/5a64fd5b6f
parent 44aef5e852
commit 00aa1f9a1d
```diff
@@ -1,7 +1,8 @@
 require "benchmark/ips"
 require "json"
 require "oj"
-require "rapidjson"
 
+Oj.default_options = Oj.default_options.merge(mode: :compat)
+
 if ENV["ONLY"]
   RUN = ENV["ONLY"].split(/[,: ]/).map{|x| [x.to_sym, true] }.to_h
```
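For context on the new `Oj.default_options` line: Oj's default mode serializes with Oj-specific conventions, while `:compat` mode mirrors ruby/json's generator output, which the reworked `many #to_json calls` payload relies on. A minimal stdlib sketch of the output Oj must now match (illustrative values, not taken from the benchmark):

```ruby
require "json"

# ruby/json stringifies symbol keys and keeps full float precision;
# with mode: :compat, Oj.dump is expected to emit identical bytes.
payload = { int: 12, float: 54.3 }
puts JSON.dump(payload) # => {"int":12,"float":54.3}
```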
```diff
@@ -15,12 +16,10 @@ end
 
 def implementations(ruby_obj)
   state = JSON::State.new(JSON.dump_default_options)
 
   {
     json_state: ["json (reuse)", proc { state.generate(ruby_obj) }],
     json: ["json", proc { JSON.dump(ruby_obj) }],
     oj: ["oj", proc { Oj.dump(ruby_obj) }],
-    rapidjson: ["rapidjson", proc { RapidJSON.dump(ruby_obj) }],
   }
 end
 
```
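The `json (reuse)` rows in the results come from the `json_state` entry above: a `JSON::State` is built once and its `generate` method is called repeatedly, skipping the per-call option handling that `JSON.dump` performs. A rough sketch of the pattern:

```ruby
require "json"

# Build the generator state once, with the same defaults JSON.dump uses...
state = JSON::State.new(JSON.dump_default_options)

# ...then reuse it across calls; only the object being encoded changes.
obj = { "a" => [1, 2, 3] }
puts state.generate(obj) # => {"a":[1,2,3]}
puts state.generate(obj) # same output, no state rebuilt
```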
```diff
@@ -38,6 +37,11 @@ def benchmark_encoding(benchmark_name, ruby_obj, check_expected: true, except: [
       result = block.call
       if check_expected && expected != result
         puts "#{name} does not match expected output. Skipping"
+        puts "Expected:" + '-' * 40
+        puts expected
+        puts "Actual:" + '-' * 40
+        puts result
+        puts '-' * 40
         next
       end
     rescue => error
```
```diff
@@ -67,12 +71,13 @@ benchmark_encoding "citm_catalog.json", JSON.load_file("#{__dir__}/data/citm_cat
 
 # This benchmark spent the overwhelming majority of its time in `ruby_dtoa`. We rely on Ruby's implementation
 # which uses a relatively old version of dtoa.c from David M. Gay.
-# Oj is noticeably faster here because it limits the precision of floats, breaking roundtriping. That's not
-# something we should emulate.
+# Oj in `compat` mode is ~10% slower than `json`, but in its default mode is noticeably faster here because
+# it limits the precision of floats, breaking roundtriping. That's not something we should emulate.
 #
 # Since a few years there are now much faster float to string implementations such as Ryu, Dragonbox, etc,
 # but all these are implemented in C++11 or newer, making it hard if not impossible to include them.
 # Short of a pure C99 implementation of these newer algorithms, there isn't much that can be done to match
 # Oj speed without losing precision.
 benchmark_encoding "canada.json", JSON.load_file("#{__dir__}/data/canada.json"), check_expected: false, except: %i(json_state)
 
-benchmark_encoding "many #to_json calls", [{Object.new => Object.new, 12 => 54.3, Integer => Float, Time.now => Date.today}] * 20, except: %i(json_state)
+benchmark_encoding "many #to_json calls", [{object: Object.new, int: 12, float: 54.3, class: Float, time: Time.now, date: Date.today}] * 20, except: %i(json_state)
```
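The roundtripping the comment above refuses to break can be checked with plain `json`: Ruby's dtoa emits the shortest decimal string that parses back to the exact same double, so dump/parse is lossless even for awkward values. A small sanity check (not part of the benchmark):

```ruby
require "json"

# 0.1 + 0.2 is not exactly 0.3 in binary floating point; a generator
# that truncated float precision would not round-trip this value.
f = 0.1 + 0.2
dumped = JSON.dump([f])
puts dumped # => [0.30000000000000004]
raise "lossy!" unless JSON.parse(dumped).first == f
```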