Why is the indirect increment faster than the direct increment?

2022-08-31 20:15:19

Another SO member asked this question earlier, but it was disappointingly deleted; the comments claimed the measurement was flawed and meaningless.

However, I was able to reproduce the original issue with a small benchmark under JMH:

package bench;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.*;
import org.openjdk.jmh.runner.options.*;
import java.util.concurrent.*;

@State(Scope.Benchmark)
public class LoopInc {

    private int getValue() {
        return ThreadLocalRandom.current().nextInt(2);
    }

    @Benchmark
    public int directInc() {
        int result = 0;
        for (int i = 0; i < 1000; i++) {
            switch (getValue()) {
                case 0:
                    break;
                case 1:
                    result++;
                    break;
            }
        }
        return result;
    }

    @Benchmark
    public int indirectInc() {
        int result = 0;
        for (int i = 0; i < 1000; i++) {
            boolean incr = false;
            switch (getValue()) {
                case 0:
                    break;
                case 1:
                    incr = true;
                    break;
            }

            if (incr) {
                result++;
            }
        }
        return result;
    }

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include("bench.LoopInc.*")
                .warmupIterations(5)
                .measurementIterations(10)
                .forks(3)
                .timeUnit(TimeUnit.MILLISECONDS)
                .build();
        new Runner(options).run();
    }
}

The benchmark shows that indirectInc runs about 3 times faster, even though the "optimization" is not obvious at all. One would rather expect indirectInc to be a bit slower, since it involves an extra intermediate operation.

Benchmark             Mode  Cnt    Score   Error   Units
LoopInc.directInc    thrpt   30  127,301 ± 0,202  ops/ms
LoopInc.indirectInc  thrpt   30  378,147 ± 1,144  ops/ms

 

java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

What makes the JIT compile indirectInc so much better than its directInc counterpart?


Answer 1

OK, so this is how you approach these things.

  1. Try to reproduce it. OK, it reproduces:

    Benchmark             Mode  Cnt    Score   Error   Units
    LoopInc.directInc    thrpt   15  175.678 ± 1.118  ops/ms
    LoopInc.indirectInc  thrpt   15  641.413 ± 9.722  ops/ms
    
  2. Try to look at the generated assembly with -prof perfasm. Long story short, it produces lots of generated code, so we might want to limit loop unrolling; unrolling can itself affect performance and might very well be the cause here. So let's re-run with -XX:LoopUnrollLimit=1. OK, the scores are lower, but the difference is still there, excellent (a sketch of how to wire these options into the JMH runner follows the scores below):

    Benchmark             Mode  Cnt    Score   Error   Units
    LoopInc.directInc    thrpt   15  161.147 ± 6.101  ops/ms
    LoopInc.indirectInc  thrpt   15  489.430 ± 1.698  ops/ms
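
    For reference, here is one way to wire these two knobs into the existing main method. This is only a sketch (not taken from the original answer); the same effect can be had on the JMH command line with -prof perfasm and -jvmArgsAppend "-XX:LoopUnrollLimit=1":

    Options options = new OptionsBuilder()
            .include("bench.LoopInc.*")
            .warmupIterations(5)
            .measurementIterations(10)
            .forks(3)
            .timeUnit(TimeUnit.MILLISECONDS)
            // constrain C2 loop unrolling so the generated assembly stays small enough to read
            .jvmArgsAppend("-XX:LoopUnrollLimit=1")
            // "perfasm" prints the hot assembly; swap in "perfnorm" for per-op hardware counters
            .addProfiler("perfasm")
            .build();
    new Runner(options).run();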
    
  3. Look at the generated code again; still nothing pops into our eyes. Well, this seems interesting. Let's get down to it properly. Can we characterize the workload? Of course we can, with the help of -prof perfnorm, which normalizes the hardware counters per benchmark operation. Let's see:

    Benchmark                                     Mode  Cnt      Score      Error   Units
    LoopInc.directInc                            thrpt   15    161.875 ±    3.038  ops/ms
    LoopInc.directInc:·CPI                       thrpt    3      0.967 ±    0.196    #/op
    LoopInc.directInc:·L1-dcache-load-misses     thrpt    3      0.394 ±    3.663    #/op
    LoopInc.directInc:·L1-dcache-loads           thrpt    3   2149.594 ±  228.166    #/op
    LoopInc.directInc:·L1-dcache-store-misses    thrpt    3      0.114 ±    1.001    #/op
    LoopInc.directInc:·L1-dcache-stores          thrpt    3   1073.666 ±   96.066    #/op
    LoopInc.directInc:·L1-icache-load-misses     thrpt    3      0.965 ±   22.984    #/op
    LoopInc.directInc:·LLC-loads                 thrpt    3      0.204 ±    2.763    #/op
    LoopInc.directInc:·LLC-stores                thrpt    3      0.060 ±    0.633    #/op
    LoopInc.directInc:·branch-misses             thrpt    3    536.068 ±   43.293    #/op
    LoopInc.directInc:·branches                  thrpt    3   3728.890 ±  220.539    #/op
    LoopInc.directInc:·cycles                    thrpt    3  26219.146 ± 6287.590    #/op
    LoopInc.directInc:·dTLB-load-misses          thrpt    3      0.063 ±    0.124    #/op
    LoopInc.directInc:·dTLB-loads                thrpt    3   2136.942 ±  165.990    #/op
    LoopInc.directInc:·dTLB-store-misses         thrpt    3      0.022 ±    0.029    #/op
    LoopInc.directInc:·dTLB-stores               thrpt    3   1084.787 ±  417.281    #/op
    LoopInc.directInc:·iTLB-load-misses          thrpt    3      0.081 ±    0.333    #/op
    LoopInc.directInc:·iTLB-loads                thrpt    3      3.623 ±   19.955    #/op
    LoopInc.directInc:·instructions              thrpt    3  27114.052 ± 1843.720    #/op
    
    LoopInc.indirectInc                          thrpt   15    489.164 ±    2.692  ops/ms
    LoopInc.indirectInc:·CPI                     thrpt    3      0.281 ±    0.015    #/op
    LoopInc.indirectInc:·L1-dcache-load-misses   thrpt    3      0.503 ±    9.071    #/op
    LoopInc.indirectInc:·L1-dcache-loads         thrpt    3   2149.806 ±  369.040    #/op
    LoopInc.indirectInc:·L1-dcache-store-misses  thrpt    3      0.167 ±    1.370    #/op
    LoopInc.indirectInc:·L1-dcache-stores        thrpt    3   1073.895 ±  186.741    #/op
    LoopInc.indirectInc:·L1-icache-load-misses   thrpt    3      0.313 ±    1.275    #/op
    LoopInc.indirectInc:·branch-misses           thrpt    3      1.102 ±    0.375    #/op
    LoopInc.indirectInc:·branches                thrpt    3   2143.670 ±  228.475    #/op
    LoopInc.indirectInc:·cycles                  thrpt    3   8701.665 ±  706.183    #/op
    LoopInc.indirectInc:·dTLB-load-misses        thrpt    3      0.020 ±    0.301    #/op
    LoopInc.indirectInc:·dTLB-loads              thrpt    3   2141.965 ±  135.852    #/op
    LoopInc.indirectInc:·dTLB-store-misses       thrpt    3      0.002 ±    0.029    #/op
    LoopInc.indirectInc:·dTLB-stores             thrpt    3   1070.376 ±   81.445    #/op
    LoopInc.indirectInc:·iTLB-load-misses        thrpt    3      0.007 ±    0.135    #/op
    LoopInc.indirectInc:·iTLB-loads              thrpt    3      0.310 ±    5.768    #/op
    LoopInc.indirectInc:·instructions            thrpt    3  30968.207 ± 3627.540    #/op
    

    Oh, both benchmarks execute a comparable number of instructions. The slower one takes more cycles (which is why the CPI for directInc is not ideal either; indirectInc, however, produces a near-ideal CPI). If you look closely at the possible causes: not many cache misses, not many TLB misses, but the slow benchmark has lots of branch misses. Aha! Now we know what to look for in the generated code.
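
    (For reference, CPI here is just cycles divided by instructions per benchmark op: roughly 26219 / 27114 ≈ 0.97 for directInc versus 8702 / 30968 ≈ 0.28 for indirectInc, so the slow variant burns about three times the cycles for a comparable amount of work.)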

  4. Let's look at the generated code again. -prof perfasm conveniently highlights the jumps. And then you will see this...

    directInc

                      ╭│      0x00007fa0a82a50ff: jmp    0x00007fa0a82a5116
     11.39%   16.90%  ││ ↗    0x00007fa0a82a5101: inc    %edx               ;*iinc
                      ││ │                                                  ; - org.openjdk.LoopInc::directInc@46 (line 18)
     12.52%   23.11%  ││ │↗↗  0x00007fa0a82a5103: mov    %r10,0xe8(%r11)    ;*invokevirtual putLong
                      ││ │││                                                ; - java.util.concurrent.ThreadLocalRandom::nextSeed@27 (line 241)
     12.00%    8.14%  ││ │││  0x00007fa0a82a510a: inc    %r8d               ;*iinc
                      ││ │││                                                ; - org.openjdk.LoopInc::directInc@46 (line 18)
      0.03%    0.03%  ││ │││  0x00007fa0a82a510d: cmp    $0x3e8,%r8d
                      │╰ │││  0x00007fa0a82a5114: jge    0x00007fa0a82a50c7  ;*aload_0
                      │  │││                                                ; - org.openjdk.LoopInc::directInc@11 (line 19)
      0.80%    0.91%  ↘  │││  0x00007fa0a82a5116: mov    0xf0(%r11),%r10d   ;*invokevirtual getInt
                         │││                                                ; - java.util.concurrent.ThreadLocalRandom::current@9 (line 222)
      4.28%    1.23%     │││  0x00007fa0a82a511d: test   %r10d,%r10d
                        ╭│││  0x00007fa0a82a5120: je     0x00007fa0a82a517b  ;*ifne
                        ││││                                                ; - java.util.concurrent.ThreadLocalRandom::current@12 (line 222)
      2.11%    0.01%    ││││  0x00007fa0a82a5122: movabs $0x9e3779b97f4a7c15,%r10
      0.01%    0.07%    ││││  0x00007fa0a82a512c: add    0xe8(%r11),%r10    ;*ladd
                        ││││                                                ; - java.util.concurrent.ThreadLocalRandom::nextSeed@24 (line 242)
      7.73%    1.89%    ││││  0x00007fa0a82a5133: mov    %r10,%r9
      1.21%    1.84%    ││││  0x00007fa0a82a5136: shr    $0x21,%r9
      1.90%    0.03%    ││││  0x00007fa0a82a513a: xor    %r10,%r9
      2.02%    0.03%    ││││  0x00007fa0a82a513d: movabs $0xff51afd7ed558ccd,%rcx
      0.94%    1.82%    ││││  0x00007fa0a82a5147: imul   %rcx,%r9           ;*lmul
                        ││││                                                ; - java.util.concurrent.ThreadLocalRandom::mix32@9 (line 182)
      7.01%    2.40%    ││││  0x00007fa0a82a514b: mov    %r9,%rcx
                        ││││  0x00007fa0a82a514e: shr    $0x21,%rcx
      1.89%    0.70%    ││││  0x00007fa0a82a5152: xor    %r9,%rcx
      3.11%    2.55%    ││││  0x00007fa0a82a5155: movabs $0xc4ceb9fe1a85ec53,%r9
      0.99%    1.50%    ││││  0x00007fa0a82a515f: imul   %r9,%rcx
      7.66%    2.89%    ││││  0x00007fa0a82a5163: shr    $0x20,%rcx
      3.70%    1.97%    ││││  0x00007fa0a82a5167: mov    %ecx,%r9d
               0.11%    ││││  0x00007fa0a82a516a: and    $0x1,%r9d          ;*iand
                        ││││                                                ; - java.util.concurrent.ThreadLocalRandom::nextInt@34 (line 356)
      3.76%   11.13%    ││││  0x00007fa0a82a516e: cmp    $0x1,%r9d
                        │╰││  0x00007fa0a82a5172: je     0x00007fa0a82a5101
     10.48%   16.62%    │ ││  0x00007fa0a82a5174: test   %r9d,%r9d
                        │ ╰│  0x00007fa0a82a5177: je     0x00007fa0a82a5103  ;*lookupswitch
                        │  │                                                ; - org.openjdk.LoopInc::directInc@15 (line 19)
                        │  ╰  0x00007fa0a82a5179: jmp    0x00007fa0a82a5103  ;*aload_0
                        │                                                   ; - org.openjdk.LoopInc::directInc@11 (line 19)
                        ↘     0x00007fa0a82a517b: mov    $0xffffff5d,%esi
    

    indirectInc

      0.01%    0.01%  ↗  0x00007f65588d8260: mov    %edx,%r9d
      0.01%           │  0x00007f65588d8263: nopw   0x0(%rax,%rax,1)
     11.99%   11.38%  │  0x00007f65588d826c: data16 data16 xchg %ax,%ax  ;*iconst_0
                      │                                                ; - org.openjdk.LoopInc::indirectInc@11 (line 34)
                      │  0x00007f65588d8270: mov    0xf0(%r8),%r10d    ;*invokevirtual getInt
                      │                                                ; - java.util.concurrent.ThreadLocalRandom::current@9 (line 222)
                      │  0x00007f65588d8277: test   %r10d,%r10d
                      │  0x00007f65588d827a: je     0x00007f65588d8331  ;*ifne
                      │                                                ; - java.util.concurrent.ThreadLocalRandom::current@12 (line 222)
      0.01%           │  0x00007f65588d8280: movabs $0x9e3779b97f4a7c15,%r10
     11.80%   11.49%  │  0x00007f65588d828a: add    0xe8(%r8),%r10     ;*ladd
                      │                                                ; - java.util.concurrent.ThreadLocalRandom::nextSeed@24 (line 242)
      0.01%    0.01%  │  0x00007f65588d8291: mov    %r10,0xe8(%r8)     ;*invokevirtual putLong
                      │                                                ; - java.util.concurrent.ThreadLocalRandom::nextSeed@27 (line 241)
                      │  0x00007f65588d8298: mov    %r9d,%edx
      0.01%    0.01%  │  0x00007f65588d829b: inc    %edx
     11.12%   12.40%  │  0x00007f65588d829d: mov    %r10,%rcx
               0.01%  │  0x00007f65588d82a0: shr    $0x21,%rcx
      0.03%           │  0x00007f65588d82a4: xor    %r10,%rcx
      0.06%    0.03%  │  0x00007f65588d82a7: movabs $0xff51afd7ed558ccd,%r10
     12.38%   13.94%  │  0x00007f65588d82b1: imul   %r10,%rcx          ;*lmul
                      │                                                ; - java.util.concurrent.ThreadLocalRandom::mix32@9 (line 182)
      0.03%    0.01%  │  0x00007f65588d82b5: mov    %rcx,%r10
                      │  0x00007f65588d82b8: shr    $0x21,%r10
               0.03%  │  0x00007f65588d82bc: xor    %rcx,%r10
     11.43%   12.62%  │  0x00007f65588d82bf: movabs $0xc4ceb9fe1a85ec53,%rcx
               0.01%  │  0x00007f65588d82c9: imul   %rcx,%r10
      0.34%    0.30%  │  0x00007f65588d82cd: shr    $0x20,%r10
      0.85%    0.76%  │  0x00007f65588d82d1: mov    %r10d,%r10d
     11.81%   11.51%  │  0x00007f65588d82d4: and    $0x1,%r10d
      2.16%    1.78%  │  0x00007f65588d82d8: cmp    $0x1,%r10d
      3.45%    3.00%  │  0x00007f65588d82dc: cmovne %r9d,%edx         <----- HERE IT IS
     17.55%   15.86%  │  0x00007f65588d82e0: inc    %r11d              ;*iinc
                      │                                                ; - org.openjdk.LoopInc::indirectInc@56 (line 33)
                      │  0x00007f65588d82e3: cmp    $0x3e8,%r11d
                      ╰  0x00007f65588d82ea: jl     0x00007f65588d8260  ;*if_icmpge
                                                               ; - org.openjdk.LoopInc::indirectInc@8 (line 33)
    

    Notice the cmovne instead of a jmp -- this is why the branches here are so much more "predictable". HotSpot profiles branches and emits a conditional move when the branch profile is very flat; in other words, it dodges a very likely branch misprediction by paying a little extra for the conditional move's latency. In this case, however, the switch is special: it has more than two alternatives (0, 1, and "nothing"). That is why, I speculate, the increment of result is not folded into a cmov. (Generally speaking, HotSpot could have stored zero into result in the "default" case, but it blew it, oh well.)
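
    To illustrate the shape that lends itself to a conditional move, a hypothetical two-way variant (ternaryInc below is not one of the original benchmarks) could phrase the increment as a flat two-way choice, much like indirectInc ends up doing after its switch; whether it actually compiles to a cmov would still need to be confirmed with -prof perfasm:

    @Benchmark
    public int ternaryInc() {
        int result = 0;
        for (int i = 0; i < 1000; i++) {
            // a flat two-way choice: the JIT may compile this branch-free,
            // e.g. with a cmov or plain arithmetic, instead of a conditional jump
            result += (getValue() == 1) ? 1 : 0;
        }
        return result;
    }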

  5. To confirm that hypothesis, let's make a directCompleteInc case where we still use the switch, but now cover all the alternatives:

    @Benchmark
    public int directCompleteInc() {
        int result = 0;
        for (int i = 0; i < 1000; i++) {
            switch (getValue()) {
                case 1:
                    result++;
                    break;
                default:
                    break;
            }
        }
        return result;
    }
    

    ...and measure it, this time without any extra options, just like the OP did:

    Benchmark                   Mode  Cnt    Score    Error   Units
    LoopInc.directCompleteInc  thrpt    5  644.414 ±  0.371  ops/ms
    LoopInc.directInc          thrpt    5  174.974 ±  0.103  ops/ms
    LoopInc.indirectInc        thrpt    5  644.015 ±  0.533  ops/ms
    

    There it goes.

  6. Confirm that directCompleteInc uses cmov, with -prof perfasm. It does.

  7. Drink up.


Answer 2
