Categories: Containers, Java

Running my Factorization Java App in Docker

I want to evaluate the OpenJDK serial collector using a Java program I wrote to factorize natural numbers by trial division. This post is about how to set up the app to run in a Docker container on a Linux host. Since the host is a shared machine, I put all my work under ~/swesonga (my own custom home directory). The directory structure for the container will be under ~/swesonga/container/.

Set up the Factorization App

First, log into the Linux machine and download the Java binaries to test:

ssh user@IPaddress
mkdir -p ~/swesonga/container/java/binaries/jdk/x64/
cd ~/swesonga/container/java/binaries/jdk/x64/

curl -Lo microsoft-jdk-21.0.5-linux-x64.tar.gz https://aka.ms/download-jdk/microsoft-jdk-21.0.5-linux-x64.tar.gz

tar xzf microsoft-jdk-21.0.5-linux-x64.tar.gz
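
The extracted directory name (jdk-21.0.5+11) is referenced later when setting JAVA21_HOME, so a quick version check confirms that the JDK extracted correctly:

./jdk-21.0.5+11/bin/java -version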

Clone the factorize repo and set up its dependencies:

cd ~/swesonga/container/
git clone https://github.com/swesonga/factorize

cd ~/swesonga/container/java
curl -Lo commons-cli-1.9.0-bin.tar.gz https://dlcdn.apache.org//commons/cli/binaries/commons-cli-1.9.0-bin.tar.gz
tar xzf commons-cli-1.9.0-bin.tar.gz

Compile the factorization app:

export CLASSPATH=~/swesonga/container/java/commons-cli-1.9.0/commons-cli-1.9.0.jar:.
export JAVA21_HOME=~/swesonga/container/java/binaries/jdk/x64/jdk-21.0.5+11

cd ~/swesonga/container/factorize/java/project/src/main/java/org/swesonga/math

$JAVA21_HOME/bin/javac -d . PrimalityTest.java FactorizationUtils.java Factorize.java ExecutionMode.java
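
As a quick sanity check before containerizing anything, the compiled classes can be run directly on the host. This is a sketch: the -threads and -number flags mirror the ones used in the container ENTRYPOINT later, the number is an arbitrary small example, and the app's defaults are assumed to suffice for any other options.

$JAVA21_HOME/bin/java -cp $CLASSPATH org.swesonga.math.Factorize -threads 2 -number 123456789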

Set up the Docker Environment

Create the Dockerfile

Create a Dockerfile. See Dockerizing a Java Application | Baeldung and How To Dockerize Java Application (Step-by-Step Tutorial) for examples of how to do this. There are some OpenJDK images at Microsoft Artifact Registry (did I need to download my own JDK? Maybe not, but for now I know exactly which JDK is in use and what is happening).

docker pull mcr.microsoft.com/openjdk/jdk:21-ubuntu

Create the Dockerfile below, substituting the appropriate value for <user>.

FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu
COPY . /swesonga/
WORKDIR /swesonga/factorize/java/project/src/main/java/org/swesonga/math

ENTRYPOINT ["/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java", "-cp", "/swesonga/java/commons-cli-1.9.0/commons-cli-1.9.0.jar:.", "-Xint", "-XX:+UseSerialGC", "-XX:+UseCompressedOops", "-XX:HeapBaseMinAddress=0x120000000000", "-Xlog:gc*=debug,safepoint:file=serialgc-jdk21.log::filecount=0", "-Xlog:pagesize=trace:file=pagesize-jdk21.log::filecount=0", "-Xlog:os=trace:file=os-jdk21.log::filecount=0", "org.swesonga.math.Factorize", "-threads", "4", "-number", "7438880205542315091371423981777", "-systemGCFrequency", "1048576"]

If you create the Dockerfile on another machine, you can copy the Dockerfile to the Linux host using scp(1) – Linux manual page:

scp Dockerfile user@IPaddress:/home/<user>/swesonga/container/

Start Docker

Verify that docker is up by running docker version. I got this output:

user@machine:~/swesonga$ docker version
Client: Docker Engine - Community
 Version:           23.0.1
 API version:       1.42
 Go version:        go1.19.5
 Git commit:        a5ee5b1
 Built:             Thu Feb  9 19:46:56 2023
 OS/Arch:           linux/amd64
 Context:           default
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Run sudo systemctl start docker as described at Start the daemon | Docker Docs.
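
Once the daemon is up, re-running docker version should show a Server section in addition to the Client section shown above:

sudo systemctl start docker
docker version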

Build the Docker Image

See docker buildx build | Docker Docs for details on how to build. I only need the -t option to tag the image as swesonga-jdk21-testapp.

cd ~/swesonga/container/
docker build -t swesonga-jdk21-testapp .

Here is some sample output:

user@machine:~/swesonga/container$ docker build -t swesonga-jdk21-testapp .
[+] Building 7.9s (6/6) FINISHED
 => [internal] load build definition from Dockerfile                                                                            0.0s
 => => transferring dockerfile: 669B                                                                                            0.0s
 => [internal] load .dockerignore                                                                                               0.0s
 => => transferring context: 2B                                                                                                 0.0s
 => [internal] load metadata for mcr.microsoft.com/openjdk/jdk:21-ubuntu                                                        0.4s
 => [1/2] FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu@sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c  7.2s
 => => resolve mcr.microsoft.com/openjdk/jdk:21-ubuntu@sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c  0.0s
 => => sha256:6414378b647780fee8fd903ddb9541d134a1947ce092d08bdeb23a54cb3684ac 29.54MB / 29.54MB                                0.5s
 => => sha256:ac27f5b44782db802c5876054378d16318ba6ab095203e15acc7527778c85370 178.15MB / 178.15MB                              2.3s
 => => sha256:d42e3adbad90b3214756070b3e98acd724228f7e8d08344d7044c0788a185b66 1.38kB / 1.38kB                                  0.2s
 => => sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c 683B / 683B                                      0.0s
 => => sha256:ab80e68248a29dec58a531a5ff5a5bb873bb96fe829ac6b17f46c6f2a05cef63 899B / 899B                                      0.0s
 => => sha256:6d6be45eade816f5be7fc8935372e429b035dfee5f6e386dd7f87e6430228554 3.90kB / 3.90kB                                  0.0s
 => => extracting sha256:6414378b647780fee8fd903ddb9541d134a1947ce092d08bdeb23a54cb3684ac                                       1.3s
 => => extracting sha256:ac27f5b44782db802c5876054378d16318ba6ab095203e15acc7527778c85370                                       4.6s
 => => extracting sha256:d42e3adbad90b3214756070b3e98acd724228f7e8d08344d7044c0788a185b66                                       0.0s
 => [2/2] WORKDIR /home/<user>/swesonga/factorize/java/project/src/main/java/org/swesonga/math                                     0.2s
 => exporting to image                                                                                                          0.0s
 => => exporting layers                                                                                                         0.0s
 => => writing image sha256:4c068055cad5759153f3d3677404b9729b8ddee1c7026aa30e88b8c58c228037                                    0.0s
 => => naming to docker.io/library/swesonga-jdk21-testapp

Start the Docker Container

Before starting the container, review docker ps | Docker Docs and docker run | Docker Docs:

docker ps
docker run -i -t swesonga-jdk21-testapp
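
Optionally, the container can be given a name so that later commands (such as the docker cp invocations below) can reference it without looking up the generated container ID. The name here is an arbitrary example:

docker run -i -t --name factorize-serialgc swesonga-jdk21-testapp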

Troubleshoot any Docker Errors

One error I ran into initially was that docker was unable to start the container process. I had missed the COPY command in the Dockerfile, so the java binary could not be found:

user@machine:~/swesonga$ docker run -i -t swesonga-jdk21-testapp
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/home/<user>/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java": stat /home/<user>/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java: no such file or directory: unknown.
ERRO[0000] error waiting for container:

Fix the Dockerfile then run docker build again. My ENTRYPOINT path still wasn’t correct, so the same error appeared when re-running the application. I didn’t realize this though, and thought that since I had changed the entrypoint, a cached build must still be in use. Searching for how to delete a cached docker build pointed me to macos – Is there a way to clean docker build cache? – Stack Overflow. This is what I tried in my ignorance:

user@machine:~/swesonga$ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          2         2         1.22GB    443.1MB (36%)
Containers      3         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     9         0         776.9MB   776.9MB
user@machine:~/swesonga$ docker system df -v
Images space usage:

REPOSITORY               TAG       IMAGE ID       CREATED          SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
swesonga-jdk21-testapp   latest    682fedf54071   11 minutes ago   1.22GB    443.1MB       776.9MB       2
<none>                   <none>    4c068055cad5   26 minutes ago   443.1MB   443.1MB       0B            1

Containers space usage:

CONTAINER ID   IMAGE                    COMMAND                  LOCAL VOLUMES   SIZE      CREATED          STATUS    NAMES
c77b69082a8a   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        6 minutes ago    Created   awesome_chatelet
58c723638dd2   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        8 minutes ago    Created   sharp_lamport
4a3be55725b5   4c068055cad5             "/home/<user>/swesonga/…"   0               0B        17 minutes ago   Created   lucid_tharp

Local Volumes space usage:

VOLUME NAME   LINKS     SIZE

Build cache usage: 776.9MB
...

I tried pruning the build cache as suggested in that post.

user@machine:~/swesonga$ docker builder prune --all
WARNING! This will remove all build cache. Are you sure you want to continue? [y/N] y
ID                                              RECLAIMABLE     SIZE            LAST ACCESSED
te4o8rbj7s6nh6pluquzummzz                       true            0B              8 minutes ago
n2wbz4gf448fluw4uuogiqxdo*                      true    776.9MB         8 minutes ago
...
Total:  1.554GB

I realized that pruning wasn’t what I needed: the cache was now empty, but the containers were still there, as shown in the next output:

user@machine:~/swesonga$ docker system df -v
Images space usage:

REPOSITORY               TAG       IMAGE ID       CREATED          SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
swesonga-jdk21-testapp   latest    682fedf54071   18 minutes ago   1.22GB    443.1MB       776.9MB       2
<none>                   <none>    4c068055cad5   33 minutes ago   443.1MB   443.1MB       0B            1

Containers space usage:

CONTAINER ID   IMAGE                    COMMAND                  LOCAL VOLUMES   SIZE      CREATED          STATUS    NAMES
c77b69082a8a   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        13 minutes ago   Created   awesome_chatelet
58c723638dd2   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        15 minutes ago   Created   sharp_lamport
4a3be55725b5   4c068055cad5             "/home/<user>/swesonga/…"   0               0B        24 minutes ago   Created   lucid_tharp

Local Volumes space usage:

VOLUME NAME   LINKS     SIZE

Build cache usage: 0B

CACHE ID   CACHE TYPE   SIZE      CREATED   LAST USED   USAGE     SHARED
user@machine:~/swesonga$

I should have been using docker ps -a instead! The -a flag shows all existing containers, regardless of whether they are running.

user@machine:~/swesonga$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS    PORTS     NAMES
c77b69082a8a   682fedf54071   "/home/<user>/swesonga/…"   16 minutes ago   Created             awesome_chatelet
58c723638dd2   682fedf54071   "/home/<user>/swesonga/…"   18 minutes ago   Created             sharp_lamport
4a3be55725b5   4c068055cad5   "/home/<user>/swesonga/…"   27 minutes ago   Created             lucid_tharp
user@machine:~/swesonga$

I started displaying the Dockerfile before each build because one of the errors I ran into was simply due to not having saved the Dockerfile. Sheesh.

cat Dockerfile
docker build -t swesonga-jdk21-testapp .
docker ps -a
docker run -i -t swesonga-jdk21-testapp

After all this experimentation, we can remove the broken containers using docker container rm | Docker Docs for individual containers, or docker container prune | Docker Docs to remove all stopped containers (use with caution)!

docker container prune
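
Individual containers can also be removed by ID, e.g. the three leftover containers from the docker ps -a output above:

docker container rm c77b69082a8a 58c723638dd2 4a3be55725b5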

Collect Logs from the Container

I used CTRL+C to stop the container after some time. The next question was how to examine the files in the container. linux – Exploring Docker container’s file system – Stack Overflow suggests using docker container cp | Docker Docs. I opened another SSH session to the host running Docker then copied the logs from the container using these commands:

mkdir -p ~/swesonga/logs/tip/
cd ~/swesonga/logs/

docker ps -a
export CONTAINERID=XXXXXXXXXXX

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/serialgc-jdk21.log ~/swesonga/logs/

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/pagesize-jdk21.log ~/swesonga/logs/

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/os-jdk21.log ~/swesonga/logs/

I switched back to the development box and used scp to get the log files back to it.

scp user@IPaddress:/home/<user>/swesonga/logs/*.log .

When gathering multiple rounds of logs, it is convenient to group them into separate folders. I used this command:

export CURRDATE=`date +%Y-%m-%d_%H%M%S`; mkdir -p ./logs-$CURRDATE; scp user@IPaddress:/home/<user>/swesonga/logs/*.log ./logs-$CURRDATE/

Setting Container Limits

To see the amount of RAM on the Docker host, run free -h (it is an Ubuntu host). How do we set a RAM limit for a docker container? Docker – Setting Memory And CPU Limits points to the --memory parameter of docker run | Docker Docs.

docker ps -a
docker run -i --memory 2GB -t swesonga-jdk21-testapp

Observe the head of the jdk21 GC log below. The total memory is now reported as 2048M. The maximum heap size is 512M, i.e. 25% of this total, as expected. The initial heap is 32MB and the minimum heap is 8MB.

[0.006s][info][gc,init] CardTable entry size: 512
[0.006s][debug][gc,heap] Minimum heap 8388608  Initial heap 33554432  Maximum heap 536870912
[0.006s][info ][gc     ] Using Serial
[0.006s][debug][gc,heap,coops] Protected page at the reserved heap base: 0x0000120000000000 / 2097152 bytes
[0.006s][debug][gc,heap,coops] Heap address: 0x0000120000200000, size: 512 MB, Compressed Oops mode: Non-zero disjoint base: 0x0000120000000000, Oop shift amount: 3
[0.007s][info ][gc,init      ] Version: 21.0.5+11-LTS (release)
[0.007s][info ][gc,init      ] CPUs: 20 total, 20 available
[0.007s][info ][gc,init      ] Memory: 2048M
[0.007s][info ][gc,init      ] Large Page Support: Disabled
[0.007s][info ][gc,init      ] NUMA Support: Disabled
[0.007s][info ][gc,init      ] Compressed Oops: Enabled (Non-zero disjoint base)
[0.007s][info ][gc,init      ] Heap Min Capacity: 8M
[0.007s][info ][gc,init      ] Heap Initial Capacity: 32M
[0.007s][info ][gc,init      ] Heap Max Capacity: 512M
[0.007s][info ][gc,init      ] Pre-touch: Disabled
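
To double-check the container-aware heap sizing without involving the factorization app, java can be run directly in the base image. This is a sketch that assumes java is on the PATH in that image and that the image does not define its own ENTRYPOINT:

docker run --rm --memory 2GB mcr.microsoft.com/openjdk/jdk:21-ubuntu java -XX:+PrintFlagsFinal -version | grep -i maxheapsize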

The number of CPUs can be set using the --cpus option.

cat Dockerfile
docker build -t swesonga-jdktip-testapp-2core .
docker ps -a
docker run -i --cpus 2 --memory 2GB -t swesonga-jdktip-testapp-2core

Viewing Container Stats

One of the questions I need to answer is how much memory is in use and how much is available in the container. Searching for how to view available memory in a docker container pointed me to How to Use the Resource Usage Docker Extension. It turns out docker stats is good enough for my needs right now.

user@machine:~/swesonga$ docker stats

CONTAINER ID   NAME                      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O   PIDS
7f096bd5163d   condescending_goldstine   196.82%   26.03MiB / 2GiB     1.27%     806B / 0B   0B / 0B     13
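
For a one-off snapshot (e.g. from a script) instead of the live streaming view, the stream can be disabled:

docker stats --no-stream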

Categories: Java

Investigating a Windows ZGC Test Failure

[JDK-8334475] UnsafeIntrinsicsTest.java#ZGenerationalDebug assert(!assert_on_failure) failed: Has low-order bits set – Java Bug System (openjdk.org) was brought to my attention a few months ago. I ran the test on commit b3bf31a0a08d on my x64 machine to see what happens:

cd /c/repos/scratchpad/scripts/java/tests

./run-jtreg-tests.sh /d/java/forks/openjdk/jdk /d/java/forks/openjdk/jdk/build/windows-x86_64-server-slowdebug/jdk  /c/java/binaries/jtreg-7.4+1/lib/jtreg.jar

That script shows the exact command line used to invoke the test:

/d/java/forks/openjdk/jdk/build/windows-x86_64-server-slowdebug/jdk/bin/java -Xmx512m -jar /c/java/binaries/jtreg-7.4+1/lib/jtreg.jar -agentvm -ignore:quiet -automatic -xml -vmoption:-Xmx512m -timeoutFactor:4 -concurrency:1 -testjdk:/d/java/forks/openjdk/jdk/build/windows-x86_64-server-slowdebug/jdk -verbose:fail,error,summary  test/hotspot/jtreg/compiler/gcbarriers/UnsafeIntrinsicsTest.java

The test passed on x64, but I wanted to see how to step into the test code itself, so I tried this setup in Visual Studio 2022. Unfortunately, it was not straightforward to break into the correct process when launching the test this way.

Command: D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\jdk\bin\java.exe

Arguments: -Xmx512m -jar C:/java/binaries/jtreg-7.4+1/lib/jtreg.jar -agentvm -ignore:quiet -automatic -xml -vmoption:-Xmx512m -timeoutFactor:4 -concurrency:1 -testjdk:D:/java/forks/openjdk/jdk/build/windows-x86_64-server-slowdebug/jdk -verbose:fail,error,summary  test/hotspot/jtreg/compiler/gcbarriers/UnsafeIntrinsicsTest.java

Working Dir: D:/java/forks/openjdk/jdk

Test Execution Sequence

I decided to try to understand the test execution sequence and started by looking at the output generated when running the jtreg test. Notice that the CreateCoredumpOnCrash product flag is disabled in the command below. The first thing I did was to enable it so that there would be a core dump to examine.

D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\jdk\bin\java 
-Dtest.vm.opts=-Xmx512m 
-Dtest.tool.vm.opts=-J-Xmx512m 
-Dtest.compiler.opts= 
-Dtest.java.opts= 
-Dtest.jdk=D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\jdk 
-Dcompile.jdk=D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\jdk 
-Dtest.timeout.factor=4.0 
-Dtest.root=D:\java\forks\openjdk\jdk\test\hotspot\jtreg 
-Dtest.name=compiler/gcbarriers/UnsafeIntrinsicsTest.java#ZGenerationalDebug 
-Dtest.file=D:\java\forks\openjdk\jdk\test\hotspot\jtreg\compiler\gcbarriers\UnsafeIntrinsicsTest.java 
-Dtest.src=D:\java\forks\openjdk\jdk\test\hotspot\jtreg\compiler\gcbarriers 
-Dtest.src.path=D:\java\forks\openjdk\jdk\test\hotspot\jtreg\compiler\gcbarriers;D:\java\forks\openjdk\jdk\test\lib 
-Dtest.classes=D:\java\forks\openjdk\jdk\JTwork\classes\compiler\gcbarriers\UnsafeIntrinsicsTest_ZGenerationalDebug.d 
-Dtest.class.path=D:\java\forks\openjdk\jdk\JTwork\classes\compiler\gcbarriers\UnsafeIntrinsicsTest_ZGenerationalDebug.d;D:\java\forks\openjdk\jdk\JTwork\classes\test\lib 
-Dtest.class.path.prefix=D:\java\forks\openjdk\jdk\JTwork\classes\compiler\gcbarriers\UnsafeIntrinsicsTest_ZGenerationalDebug.d;D:\java\forks\openjdk\jdk\test\hotspot\jtreg\compiler\gcbarriers;D:\java\forks\openjdk\jdk\JTwork\classes\test\lib 
-Dtest.modules=java.base/jdk.internal.misc:+open --add-modules java.base --add-exports java.base/jdk.internal.misc=ALL-UNNAMED --add-opens java.base/jdk.internal.misc=ALL-UNNAMED
-Xmx512m
-XX:+UseZGC
-XX:+ZGenerational
-XX:+UnlockDiagnosticVMOptions
-XX:+ZVerifyOops
-XX:ZCollectionInterval=1
-XX:-CreateCoredumpOnCrash
-XX:CompileCommand=dontinline,*::mergeImpl* com.sun.javatest.regtest.agent.MainWrapper D:\java\forks\openjdk\jdk\JTwork\compiler\gcbarriers\UnsafeIntrinsicsTest_ZGenerationalDebug.d\main.0.jta

The core dump wasn’t particularly helpful. I worked on simplifying the test so that only the failing scenarios remained. The log in the JBS issue showed interpreted code only, so I added the -Xint argument and could still reproduce the failure on Windows AArch64.

I came back to the idea of capturing the full command line for the final java.exe process that runs the test. Is there a way to log process start on Windows to capture all command lines? Copilot cited PowerShell and Command Line Logging | LogRhythm, which suggested enabling Event ID 4688 (a new process has been created).

However, the event viewer did not show command line arguments when I did this, which is what I needed. I looked up Event ID 4688 and found 4688(S) A new process has been created. – Windows 10 | Microsoft Learn, which explains that you must enable the “Administrative Templates\System\Audit Process Creation\Include command line in process creation events” group policy for command lines to appear in those events. Hmm, I definitely didn’t do that when I was investigating this issue. I just tried this and it does the trick!

I went back to deciphering the jtreg test execution output: which class was this com.sun.javatest.regtest.agent.MainWrapper? The only openjdk/jdk match for MainWrapper.java appeared to be jdk/test/hotspot/jtreg/vmTestbase/nsk/share/MainWrapper.java at b3bf31a0a08da679ec2fd21613243fb17b1135a9 · openjdk/jdk (github.com). However, it’s from a different package! Copilot informed me that the MainWrapper class was actually jtreg/src/share/classes/com/sun/javatest/regtest/agent/MainWrapper.java at master · openjdk/jtreg (github.com).

Callstack Investigation

I stepped back to the callstack in the JBS issue and decided to trace it through the sources:

---------------  T H R E A D  ---------------

Current thread (0x0000020efe6a7fb0):  JavaThread "main"             [_thread_in_vm, id=19132, stack(0x0000003c68f00000,0x0000003c69000000) (1024K)]

Stack: [0x0000003c68f00000,0x0000003c69000000]
Native frames: <unavailable>
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  java.lang.Object.clone()Ljava/lang/Object;+0 java.base
j  java.util.Arrays.copyOfRange([BII)[B+11 java.base
j  java.lang.String.<init>(Ljava/lang/AbstractStringBuilder;Ljava/lang/Void;)V+32 java.base
j  java.lang.StringBuilder.toString()Ljava/lang/String;+16 java.base
j  sun.nio.cs.StandardCharsets.toLower(Ljava/lang/String;)Ljava/lang/String;+121 java.base
j  sun.nio.cs.StandardCharsets.lookup(Ljava/lang/String;)Ljava/nio/charset/Charset;+44 java.base
j  sun.nio.cs.StandardCharsets.charsetForName(Ljava/lang/String;)Ljava/nio/charset/Charset;+6 java.base
j  java.nio.charset.Charset.lookup2(Ljava/lang/String;)Ljava/nio/charset/Charset;+39 java.base
j  java.nio.charset.Charset.lookup(Ljava/lang/String;)Ljava/nio/charset/Charset;+40 java.base
j  java.nio.charset.Charset.isSupported(Ljava/lang/String;)Z+1 java.base
j  java.lang.System.initPhase1()V+37 java.base
v  ~StubRoutines::call_stub 0x0000020e81410180
Lock stack of current Java thread (top to bottom):
LockStack[0]: sun.nio.cs.StandardCharsets 
{0x0000040000018d28} - klass: 'sun/nio/cs/StandardCharsets'
 - ---- fields (total size 5 words):
 - private 'classMap' 'Ljava/util/Map;' @16  null (0x0000000000000000)
 - private 'aliasMap' 'Ljava/util/Map;' @24  null (0x0000000000000000)
 - private 'cache' 'Ljava/util/Map;' @32  null (0x0000000000000000)

There is a generated class, sun.nio.cs.StandardCharsets, in the stack above (its charsetForName method appears in the trace)! I was puzzled about why jdk/src/java.base/share/classes/java/nio/charset/StandardCharsets.java at b3bf31a0a08da679ec2fd21613243fb17b1135a9 · openjdk/jdk (github.com) did not contain that method; that file defines java.nio.charset.StandardCharsets, which is a different class from the build-time generated sun.nio.cs.StandardCharsets. This failure reminded me of another failure I was investigating a while back:

$JAVA_HOME/bin/java -jar /c/java/binaries/jtreg-7.4+1/lib/jtreg.jar -agentvm -timeoutFactor:4 -concurrency:4 -verbose:fail,error,summary -testjdk:/d/java/forks/openjdk/jdk/build/windows-x86_64-server-release/jdk -nativepath:/d/java/forks/openjdk/jdk/build/windows-x86_64-server-release/support/test/jdk/jtreg/native/lib test/jdk/java/foreign/TestLinker.java

In the midst of all this wrangling, I decided to write a simple test to spit out the value of the sun.jnu.encoding property. That test ran fine, so the next step was to use the same flags as the failing jtreg test. However, as I added the ZGC flags to the test command line, I realized that I hadn’t even tried those flags with a plain java -version. What an oversight! The bug reproduces without running any specific program!

$JAVA_HOME/bin/java -XX:+UseZGC -XX:+ZGenerational -XX:+ZVerifyOops -version

Debugging Minimal Reproducible Failure

Since the assert happens when cloning an array, I decided to find the native implementation of the clone method. I searched for “clone_” and the only relevant hit (from a quick glance) appeared to be in the LinkResolver::check_method_accessability method.

I had been working on the latest commits and decided to try this test on jdk21u. It failed there as well! I tried it on jdk17 but it terminated with “Unrecognized VM option ‘ZGenerational'”. Blaming jdk/src/hotspot/share/gc/shared/gc_globals.hpp at b3bf31a0a08da679ec2fd21613243fb17b1135a9 · openjdk/jdk (github.com) pointed to 8307058: Implementation of Generational ZGC · openjdk/jdk@d20034b (github.com). Looks like jdk21 was the first release with this flag as per [JDK-8307058] Implementation of Generational ZGC – Java Bug System (openjdk.org).

Back to the LinkResolver: it looked like cloning was being dispatched through the Java object, but what I was looking for was a native call. There is nothing about cloning at Arrays (Java SE 11 & JDK 11) (oracle.com) other than the fact that the method is inherited from the Object class. Nothing helpful in jdk/src/java.base/share/classes/java/util/Arrays.java or jdk/src/java.base/share/classes/java/lang/reflect/Array.java either. jdk/src/java.base/share/native/libjava/Array.c looked promising though. Methods like Java_java_lang_reflect_Array_getByte use the java.lang.reflect.Array JNIEXPORTs declared in jdk/src/hotspot/share/include/jvm.h. Browsing through the corresponding implementations in jdk/src/hotspot/share/prims/jvm.cpp led me to the clone implementation: JVM_Clone! This is the setup I used in the Visual Studio 2022 debugger:

Command: C:/java/forks/openjdk/jdk/build/windows-aarch64-server-slowdebug/jdk/bin/java.exe

Arguments: -XX:+UseZGC -XX:+ZGenerational -XX:+ZVerifyOops -version

Working Dir: C:/java/forks/openjdk/jdk

The failure appeared to be happening in this statement: HeapAccess<>::clone(obj(), new_obj_oop, size). I got this callstack by stepping into the calls in the slowdebug build because the is_valid function is inlined in the fastdebug build, preventing me from setting a breakpoint in it.

jvm.dll!is_valid(zaddress addr, bool assert_on_failure) Line 298	C++
jvm.dll!assert_is_valid(zaddress addr) Line 321	C++
jvm.dll!to_zaddress(unsigned __int64 value) Line 339	C++
jvm.dll!to_zaddress(oopDesc * o) Line 345	C++
jvm.dll!ZBarrierSet::AccessBarrier<270432,ZBarrierSet>::clone_in_heap(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 433	C++
jvm.dll!AccessInternal::PostRuntimeDispatch<ZBarrierSet::AccessBarrier<270400,ZBarrierSet>,9,270400>::access_barrier(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 200	C++
jvm.dll!AccessInternal::RuntimeDispatch<270400,oopDesc *,9>::clone_init(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 349	C++
jvm.dll!AccessInternal::RuntimeDispatch<270400,oopDesc *,9>::clone(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 533	C++
jvm.dll!AccessInternal::PreRuntimeDispatch::clone<270400>(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 890	C++
jvm.dll!AccessInternal::clone<262144>(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 1181	C++
jvm.dll!Access<262144>::clone(oopDesc * src, oopDesc * dst, unsigned __int64 size) Line 212	C++
jvm.dll!JVM_Clone(JNIEnv_ * env, _jobject * handle) Line 698	C++
00000298b2bbf056()	Unknown
00000298a2ee5590()	Unknown
000000c053dfe908()	Unknown

Here’s the stack from stepping into the fastdebug assembly (note that line numbers might be off since this is not slowdebug):

jvm.dll!report_vm_error(const char * file, int line, const char * error_msg, const char * detail_fmt, ...) Line 181	C++
[Inline Frame] jvm.dll!oop::check_oop() Line 89	C++
[Inline Frame] jvm.dll!oop::on_usage() Line 91	C++
[Inline Frame] jvm.dll!oop::obj() Line 103	C++
[Inline Frame] jvm.dll!oop::operator=(const oop &) Line 114	C++
jvm.dll!Copy::pd_conjoint_oops_atomic(const oop * from, oop * to, unsigned __int64 count) Line 120	C++
[Inline Frame] jvm.dll!Copy::pd_conjoint_jlongs_atomic(const __int64 *) Line 106	C++
[Inline Frame] jvm.dll!Copy::conjoint_jlongs_atomic(const __int64 *) Line 142	C++
jvm.dll!AccessInternal::arraycopy_conjoint_atomic<__int64>(__int64 * src, __int64 * dst, unsigned __int64 length) Line 104	C++
jvm.dll!RawAccessBarrier<270432>::clone(oop src, oop dst, unsigned __int64 size) Line 331	C++
[Inline Frame] jvm.dll!BarrierSet::AccessBarrier<270432,ZBarrierSet>::clone_in_heap(oop) Line 313	C++
jvm.dll!ZBarrierSet::AccessBarrier<270432,ZBarrierSet>::clone_in_heap(oop src, oop dst, unsigned __int64 size) Line 451	C++
jvm.dll!AccessInternal::PostRuntimeDispatch<ZBarrierSet::AccessBarrier<270400,ZBarrierSet>,9,270400>::access_barrier(oop src, oop dst, unsigned __int64 size) Line 200	C++
jvm.dll!AccessInternal::RuntimeDispatch<270400,oop,9>::clone_init(oop src, oop dst, unsigned __int64 size) Line 349	C++
[Inline Frame] jvm.dll!AccessInternal::RuntimeDispatch<270400,oop,9>::clone(oop) Line 532	C++
[Inline Frame] jvm.dll!AccessInternal::PreRuntimeDispatch::clone(oop) Line 889	C++
[Inline Frame] jvm.dll!AccessInternal::clone(oop) Line 1180	C++
jvm.dll!Access<262144>::clone(oop src, oop dst, unsigned __int64 size) Line 211	C++
jvm.dll!JVM_Clone(JNIEnv_ * env, _jobject * handle) Line 698	C++
000001a156d3a45c()	Unknown

ZBarrierSet::AccessBarrier<270432,ZBarrierSet>::clone_in_heap is interesting because it verifies that the source is a valid z-address! Let’s take a look at the memory region and see what is there…

I searched for the meaning of the UTF-8 byte 0xBD but that didn’t turn up anything meaningful. I noticed that count is 3 on my AArch64 device, so we are copying 3 oops. Was this expected given that the count is 4 on Windows x64? More importantly though, the big question I had now was why slowdebug doesn’t step into the oop checking code when pressing F11 on line 120 (in Copy::pd_conjoint_oops_atomic). I expected the behavior of oop::on_usage to be different, not that it wouldn’t be called at all! When browsing the sources in VSCode on my x64 desktop, I clicked on the oop type (of the src argument) and it took me to a typedef of class oopDesc*. That’s when I spotted the CHECK_UNHANDLED_OOPS ifndef. The fastdebug build must have this defined! The only non-cpp/hpp file that contains CHECK_UNHANDLED_OOPS is jdk/make/hotspot/lib/JvmFlags.gmk. Sure enough, it is only defined for fastdebug. This means that I should be able to enable it for slowdebug and release builds and verify whether the behavior is present there. We can therefore configure a slowdebug build with the --with-extra-cflags=-DCHECK_UNHANDLED_OOPS option.

date; time bash configure --with-jtreg=/cygdrive/c/java/binaries/jtreg-7.4+1 --with-gtest=/cygdrive/c/repos/googletest --with-boot-jdk=/cygdrive/c/java/binaries/jdk/x64/jdk-22.0.1+8 --openjdk-target=aarch64-unknown-cygwin --with-debug-level=slowdebug --with-extra-cflags=-DCHECK_UNHANDLED_OOPS

time /cygdrive/c/repos/scratchpad/scripts/java/cygwin/build-jdk.sh windows aarch64 0 slowdebug

Looking at the actual pointer, I noticed that its value was the data “cp1252..”. After further investigation, I concluded that the bug is that we’re calling oop::operator= instead of just copying the values! I tested a fix that simply copies the values directly and it works! The test passes even with the -DCHECK_UNHANDLED_OOPS option!

On to the next question: is there anything else using the pd_conjoint_oops_atomic function (and will it be negatively affected by my change)? While searching for “pd_conjoint_oops_atomic”, I noticed that some platforms have an assert that the size of an oop matches the size of a jlong, or something like that. There are 2 users of pd_conjoint_oops_atomic:

  1. Copy::pd_conjoint_oops_atomic > AccessInternal::arraycopy_conjoint_oops > RawAccessBarrierArrayCopy::arraycopy. The last method’s definition is based on templates, so I continued with a regex search for “Raw::.*arraycopy”. Many of the results are related to arraycopy_in_heap. Looking at all the barrierSet results made me realize I could just search JBS for “arraycopy_in_heap”.
  2. pd_arrayof_conjoint_oops in jdk/src/hotspot/os_cpu/windows_aarch64/copy_windows_aarch64.hpp.

One idea was to run Java programs on a build linked with /PROFILE (Performance Tools Profiler), i.e. configured via --with-extra-ldflags=-profile, to enable collection of code coverage data and then confirm that those functions are executed. That seemed cumbersome though, so I tried to write some array copying code to see if I could get to those functions (using breakpoints, but none were hit). After taking a break, I wondered if I could just search from the bottom instead. Looking in jvm.cpp for “copy” revealed the JVM_ArrayCopy function. Here is my Java program:

public class CopyArray {
    public static void main(String[] args) {
        int length = 0xdeadc0d; // magic value used for a conditional breakpoint in the debugger
        int srcPos = 0;

        if (args.length > 0) {
            try {
                int userLength = Integer.parseInt(args[0]);
                length = userLength;
            }
            catch (Throwable e) {
                System.err.println("Ignoring invalid user arguments.");
            }
        }

        byte[] src = new byte[length];

        for (int i = 0; i < src.length; i++) {
            src[i] = (byte)(i % 256);
        }
        byte[] dest = new byte[length];
        System.arraycopy(src, srcPos, dest, 0, length);
    }
}
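
The program can be compiled and run with commands along these lines (a sketch mirroring the commands shown with the second listing below; the JDK path here is whichever build is under the debugger):

export JAVA_HOME=/d/java/forks/openjdk/jdk/build/windows-x86_64-server-slowdebug/jdk
$JAVA_HOME/bin/javac CopyArray.java
$JAVA_HOME/bin/java CopyArray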

I debugged the JVM running this program on my x64 machine. This hit the target function, JVM_ArrayCopy, but there were so many callers that I had to set a condition on the breakpoint (hence the magic value of the length above) before I could step in to see where my call went. Here is the callstack (note that the line numbers are from a different commit):

jvm.dll!Copy::pd_conjoint_bytes_atomic(const void * from, void * to, unsigned __int64 count) Line 119	C++
jvm.dll!Copy::conjoint_memory_atomic(const void * from, void * to, unsigned __int64 size) Line 53	C++
jvm.dll!AccessInternal::arraycopy_conjoint_atomic<void>(void * src, void * dst, unsigned __int64 length) Line 164	C++
jvm.dll!RawAccessBarrierArrayCopy::arraycopy<136585312,void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 298	C++
jvm.dll!RawAccessBarrier<136585312>::arraycopy<void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 308	C++
jvm.dll!AccessInternal::PreRuntimeDispatch::arraycopy<136587328,void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 834	C++
jvm.dll!AccessInternal::PreRuntimeDispatch::arraycopy<136585280,void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 867	C++
jvm.dll!AccessInternal::arraycopy_reduce_types<136585280,void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 1008	C++
jvm.dll!AccessInternal::arraycopy<136577024,void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, const void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 1172	C++
jvm.dll!Access<136577024>::arraycopy<void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, const void * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, void * dst_raw, unsigned __int64 length) Line 147	C++
jvm.dll!ArrayAccess<134217728>::arraycopy<void>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, unsigned __int64 length) Line 301	C++
jvm.dll!TypeArrayKlass::copy_array(arrayOop s, int src_pos, arrayOop d, int dst_pos, int length, JavaThread * __the_thread__) Line 170	C++
jvm.dll!JVM_ArrayCopy(JNIEnv_ * env, _jclass * ignored, _jobject * src, int src_pos, _jobject * dst, int dst_pos, int length) Line 307	C++
00000244ea690702()	Unknown

Copy::conjoint_memory_atomic is interesting because it has a comment indicating that copying bytes is not aligned and so does not need to be atomic. The if statements in that method indicate that changing the size of the elements in the array will exercise different paths. Looks like I need to create an array of objects.

/**
export JAVA_HOME=~/java/binaries/jdk/x64/jdk-21.0.2+13
$JAVA_HOME/bin/javac CopyArray.java
$JAVA_HOME/bin/java CopyArray
 */

public class CopyArray {
    public static void main(String[] args) {
        int length = 0xdead;
        int srcPos = 0;

        if (args.length > 0) {
            try {
                int userLength = Integer.parseInt(args[0]);
                length = userLength;
            }
            catch (Throwable e) {
                System.err.println("Ignoring invalid user arguments.");
            }
        }

        Object[] src = new Object[length];

        for (int i = 0; i < src.length; i++) {
            src[i] = new Object();
        }
        Object[] dest = new Object[length];
        System.arraycopy(src, srcPos, dest, 0, length);
    }
}

Now we are closer to the array_oops code I was trying to hit:

jvm.dll!Copy::pd_conjoint_jints_atomic(const int * from, int * to, unsigned __int64 count) Line 52	C++
jvm.dll!Copy::conjoint_oops_atomic(const narrowOop * from, narrowOop * to, unsigned __int64 count) Line 155	C++
jvm.dll!AccessInternal::arraycopy_conjoint_oops(narrowOop * src, narrowOop * dst, unsigned __int64 length) Line 54	C++
jvm.dll!RawAccessBarrierArrayCopy::arraycopy<50331750,HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 234	C++
jvm.dll!RawAccessBarrier<52715622>::arraycopy<enum narrowOop>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, narrowOop * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, narrowOop * dst_raw, unsigned __int64 length) Line 308	C++
jvm.dll!RawAccessBarrier<52715622>::oop_arraycopy<enum narrowOop>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, narrowOop * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, narrowOop * dst_raw, unsigned __int64 length) Line 130	C++
jvm.dll!ModRefBarrierSet::AccessBarrier<35938406,CardTableBarrierSet>::oop_arraycopy_in_heap<enum narrowOop>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, narrowOop * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, narrowOop * dst_raw, unsigned __int64 length) Line 109	C++
jvm.dll!AccessInternal::PostRuntimeDispatch<CardTableBarrierSet::AccessBarrier<35938406,CardTableBarrierSet>,8,35938406>::oop_access_barrier<HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 142	C++
jvm.dll!AccessInternal::RuntimeDispatch<35938374,HeapWordImpl *,8>::arraycopy(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 517	C++
jvm.dll!AccessInternal::PreRuntimeDispatch::arraycopy<35938374,HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 871	C++
jvm.dll!AccessInternal::arraycopy_reduce_types<35938372>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 1018	C++
jvm.dll!AccessInternal::arraycopy<35913732,HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * const * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 1172	C++
jvm.dll!Access<35913728>::oop_arraycopy<HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * const * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 136	C++
jvm.dll!ArrayAccess<33554432>::oop_arraycopy(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, unsigned __int64 length) Line 327	C++
jvm.dll!ObjArrayKlass::do_copy(arrayOop s, unsigned __int64 src_offset, arrayOop d, unsigned __int64 dst_offset, int length, JavaThread * __the_thread__) Line 197	C++
jvm.dll!ObjArrayKlass::copy_array(arrayOop s, int src_pos, arrayOop d, int dst_pos, int length, JavaThread * __the_thread__) Line 282	C++
jvm.dll!JVM_ArrayCopy(JNIEnv_ * env, _jclass * ignored, _jobject * src, int src_pos, _jobject * dst, int dst_pos, int length) Line 307	C++
00000202d3f00502()	Unknown
00000202cc269950()	Unknown

In this call stack, the arraycopy_conjoint_oops(narrowOop* src, narrowOop* dst, size_t length) implementation that gets called uses narrow oops because of the branch in ObjArrayKlass::copy_array. To hit the plain-oop path, launch the application with compressed oops disabled:

C:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\jdk\bin\java.exe -XX:-UseCompressedOops CopyArray

Now the code block of interest is hit! Hmm, I realized that I should have been explicit about which collector to use; this was debugging the G1 collector (chosen ergonomically).

Notice that ZGC has different code for copying the array on Aarch64 (my fix-jlongs branch based on openjdk/jdk at ee839b7f0ebe471d3877cddd2c87019ccb8ee5ae).
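
The stack below can presumably be reached by running the same CopyArray program with the ZGC flags from earlier (a sketch; ZGC does not support compressed oops, so no -XX:-UseCompressedOops flag is needed, and the build path is whichever JDK is under the debugger):

C:\java\forks\openjdk\jdk\build\windows-aarch64-server-slowdebug\jdk\bin\java.exe -XX:+UseZGC -XX:+ZGenerational CopyArray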

jvm.dll!ZBarrierSet::AccessBarrier<52715590,ZBarrierSet>::oop_arraycopy_in_heap_no_check_cast(zpointer * dst, zpointer * src, unsigned __int64 length) Line 371	C++
jvm.dll!ZBarrierSet::AccessBarrier<35938374,ZBarrierSet>::oop_arraycopy_in_heap(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, zpointer * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, zpointer * dst_raw, unsigned __int64 length) Line 403	C++
jvm.dll!ZBarrierSet::AccessBarrier<35938374,ZBarrierSet>::oop_arraycopy_in_heap(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, oop * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, oop * dst_raw, unsigned __int64 length) Line 128	C++
jvm.dll!AccessInternal::PostRuntimeDispatch<ZBarrierSet::AccessBarrier<35938374,ZBarrierSet>,8,35938374>::oop_access_barrier<HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 142	C++
jvm.dll!AccessInternal::RuntimeDispatch<35938374,HeapWordImpl *,8>::arraycopy(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 517	C++
jvm.dll!AccessInternal::PreRuntimeDispatch::arraycopy<35938374,HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 871	C++
jvm.dll!AccessInternal::arraycopy_reduce_types<35938372>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 1018	C++
jvm.dll!AccessInternal::arraycopy<35913732,HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * const * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 1172	C++
jvm.dll!Access<35913728>::oop_arraycopy<HeapWordImpl *>(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, HeapWordImpl * const * src_raw, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, HeapWordImpl * * dst_raw, unsigned __int64 length) Line 136	C++
jvm.dll!ArrayAccess<33554432>::oop_arraycopy(arrayOop src_obj, unsigned __int64 src_offset_in_bytes, arrayOop dst_obj, unsigned __int64 dst_offset_in_bytes, unsigned __int64 length) Line 327	C++
jvm.dll!ObjArrayKlass::do_copy(arrayOop s, unsigned __int64 src_offset, arrayOop d, unsigned __int64 dst_offset, int length, JavaThread * __the_thread__) Line 198	C++
jvm.dll!ObjArrayKlass::copy_array(arrayOop s, int src_pos, arrayOop d, int dst_pos, int length, JavaThread * __the_thread__) Line 290	C++
jvm.dll!JVM_ArrayCopy(JNIEnv_ * env, _jclass * ignored, _jobject * src, int src_pos, _jobject * dst, int dst_pos, int length) Line 308	C++
000002602e8c06a8()	Unknown

I need to turn off compressed oops with the serial collector as well, but it looks like there is no check_oop_function for the serial collector. That said, this exploration of array copying code was insightful, showing how the data type sizes determine which path is taken for copying primitives and objects. There didn’t appear to be any red flags about removing the oop::operator= usage, so I opened 8334475: UnsafeIntrinsicsTest.java#ZGenerationalDebug assert(!assert_on_failure) failed: Has low-order bits set by swesonga · Pull Request #20390 · openjdk/jdk (github.com) to fix the assertion failure. The most interesting part of this investigation was that the bad address was a data value (cp1252) staring right at me and I missed it. This was quite educational for me though.


Categories: Java

Launching Freeplane in Eclipse

I was recently in a brainstorming session with a colleague about garbage collection scenarios. He suggested using mind mapping software, and I must confess that this was the first time I was hearing about this (as far as I can recall). We wanted a free program. Our search turned up freeplane (which can be downloaded on Sourceforge). As per my usual habits, I decided to build it from source myself (I especially get this urge when I run into Sourceforge download links). Fortunately, the sources are hosted on GitHub.

git clone https://github.com/freeplane/freeplane

Since this is a Java codebase, I searched for “eclipse java” to get an IDE. The Eclipse Packages page has the Eclipse IDE for Java Developers (I downloaded the Windows x64 version). The project uses gradle but I didn’t know how to open it in Eclipse so I asked Microsoft Copilot (Bing Chat) how to “open a gradle project in eclipse”. One of the results was from the Import existing Gradle Git project into Eclipse – Stack Overflow page, which points to the Buildship Gradle Integration plugin. I launched Eclipse, created a new workspace and tried to import the project but got an exception that ended with:

Caused by: java.lang.IllegalArgumentException: Unsupported class file major version 65
	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:199)
	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:180)
	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:166)
	at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:287)
	... 180 more

The spring – Unsupported class file major version 65 with Java 21 “workaround” – Stack Overflow post is not particularly helpful. However, I have been interested in learning gradle for some time, so I might have finally found the opportunity I needed. I reread the Import existing Gradle Git project into Eclipse – Stack Overflow page then decided to look around the Eclipse IDE to find anything similar to the “Build Model” button mentioned in one of the replies. The “Refresh Tasks for All Projects” button did the trick! It populates the Gradle Tasks pane with an actual meaningful tree!

I was not sure how to run the application. I had unloaded it and reloaded it. I asked copilot “how to run an imported gradle project in eclipse” and the last step was to right-click on the project and choose Refresh to ensure Eclipse recognizes the changes. This time, the project explorer folders were rearranged, and version numbers were appended after each project!

I right clicked on the freeplane project, selected Run As… Java Application, then chose the Main (org.knopflerfish.framework) application and clicked OK.

This text appeared in the console but no application window appeared.

Knopflerfish OSGi framework launcher, version <unknown>
Copyright 2003-2020 Knopflerfish. All Rights Reserved.
See http://www.knopflerfish.org for more information.

Created Framework: org.knopflerfish.framework, version=8.0.11.
Framework launched

I was not even sure how to terminate the application so I asked copilot “how to stop a running java program in eclipse”. Yes, it has been that long. Forgot about this button.

I then downloaded freeplane_bin-1.11.14.zip to explore the UI and see if I could track down the strings from the menus… I found some resource files but that was not helpful, so I continued exploring other projects. I finally stumbled into the freeplane_framework project. Clicking on “Run As… > Java Application” revealed that it has a Launcher project! I had been looking for a project like this. It failed with this error in the console window:

Exception in thread "main" java.lang.UnsupportedOperationException: The Security Manager is deprecated and will be removed in a future release
	at java.base/java.lang.System.setSecurityManager(System.java:430)
	at org.freeplane.launcher.Launcher.launchWithoutUICheck(Launcher.java:291)
	at org.freeplane.launcher.Launcher.main(Launcher.java:88)

I manually set the disableSecurityManager field to true to bypass this issue. The application finally launched!!

Launching Freeplane in Eclipse on macOS

I decided to try this process on my M1 laptop. I downloaded the macOS AArch64 Eclipse IDE for Java Developers, accepted the default workspace path, and clicked on the “Import projects…” command in the package explorer. I selected the “Existing Gradle Project” import wizard then clicked Next.

For the “Project root directory”, I browsed to the freeplane repo: /Users/saint/repos/freeplane then clicked on “Finish”. There is a helpful message in the Gradle Tasks pane (which shows up by default): Click on the Refresh Tasks button to get the structure and the tasks for project.

A message in the status bar flashed by fast but nothing else happened in the IDE. I confirmed that “Buildship: Eclipse Plug-ins for Gradle” was listed in the Installation Details (Eclipse > About Eclipse).

I didn’t have a console view showing what could have happened, so I enabled it via Window > Show View > Console. Aha, it looks like the same error that happened the first time on Windows (not even sure how that got resolved at this point).

FAILURE: Build failed with an exception.

* What went wrong:
Could not open cp_init generic class cache for initialization script '/Users/saint/eclipse-workspace/.metadata/.plugins/org.eclipse.buildship.core/init.d/eclipsePlugin.gradle' (/Users/saint/.gradle/caches/8.1.1/scripts/2to4is5l87jn9v7vrcgka57e).
> BUG! exception in phase 'semantic analysis' in source unit '_BuildScript_' Unsupported class file major version 65

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

CONFIGURE FAILED in 568ms

Pasting the line starting with BUG! into copilot was helpful: class file major version 65 is JDK 21, as per Chapter 4. The class File Format (oracle.com), and Gradle supports Java 8 through 22 – see the Compatibility Matrix (gradle.org). This matrix link is from macos – BUG! exception in phase ‘semantic analysis’ in source unit ‘_BuildScript_’ Unsupported class file major version 61 on Apple Arm – Stack Overflow.

Which version of Gradle is the Buildship Eclipse plugin using? It must be older than 8.8 (the default on the compatibility matrix page). I’m running Eclipse Buildship: Eclipse Plug-ins for Gradle 3.1.9 | projects.eclipse.org. This links to buildship/docs/user/Faq.md at master · eclipse/buildship (github.com), but that FAQ doesn’t address this. However, there is a gradle/wrapper/gradle-wrapper.properties file, as mentioned in macos – BUG! exception in phase ‘semantic analysis’ in source unit ‘_BuildScript_’ Unsupported class file major version 61 on Apple Arm – Stack Overflow, and it pins Gradle 7.4.2. The Gradle 7.4.2 Compatibility Matrix (gradle.org) shows that it supports only Java 8 through 17! I looked around for a setting that could let me change this. I started with Eclipse > Settings… and found a Gradle subtree in the Preferences window. The Gradle distribution was set to “Gradle wrapper”.

The Specific Gradle version dropdown has many options. I selected 8.8 then clicked on Apply and Close. Refreshing the Gradle Tasks still didn’t do anything. I closed the project and reopened it, and now all the tasks showed up in the Gradle Tasks pane. However, the Run As option on the freeplane_framework project only had a Run Configurations… option. After some poking around, I discovered that, just like on Windows, I needed to expand the project, right click on the build.gradle file, then select Gradle > Refresh Gradle Project to get the Package Explorer to refresh. This hides the top level files in the project. Now I could right click on the project and use the Run As > Java Application command. Selecting the “Launcher” Java application resulted in the previously mentioned error message, so I manually set the disableSecurityManager field to true. Freeplane finally launched successfully!

Summary: How to launch Freeplane in Eclipse

  1. Clone the repo: git clone https://github.com/freeplane/freeplane
  2. Download the Eclipse IDE for Java Developers
  3. Set the gradle distribution to 8.8 (on Windows/Linux, use Window > Preferences, on macOS, use Eclipse > Preferences).
  4. Click on Import projects… in the Package Explorer.
  5. Select Existing Gradle Project then browse to the repo for the “Project root directory” (you can click Next to override the workspace settings to choose a specific Gradle version).
  6. Click on Refresh Gradle Tasks.
  7. Right click on the root build.gradle file and select Gradle > Refresh Gradle Project (if necessary, i.e. if this file is shown at the root level of the package explorer)
  8. Manually set the disableSecurityManager field to true to bypass the UnsupportedOperationException.
  9. Run the “build” Gradle task in the Gradle Tasks window.
  10. Right click on the freeplane_framework project, Run As > Java Application then select the Launcher application.
  11. Select the freeplane dist option in the “Select Java Application” Window and click OK.
  12. Use the launched freeplane application!

Categories: Benchmarks, Java

Running the JMH Benchmark

To build and run the JMH benchmark, ensure Maven is installed (the mvn command should be available in the shell). On Ubuntu, this is as easy as running sudo apt install maven then setting up JMH using the steps below.

git clone https://github.com/swesonga/benchmarks
cd benchmarks
cd java/jmh
./setup_jmh_jdk_micros.sh
./run_jmh_jdk_micros.sh Parallel 1 2 10 5 5

The setup_jmh_jdk_micros.sh script builds the JMH JDK microbenchmarks and the run_jmh_jdk_micros.sh script runs the benchmark. Someone recently asked why there are 2 java processes shown in top when the run_jmh_jdk_micros.sh script runs, given that it launches only 1 java process. I learned from this site how to use top to show java processes only: run top, type ‘o’, then type ‘COMMAND=java’ and press ENTER. This is the resulting output from the top command:

top - 22:15:21 up  8:15,  1 user,  load average: 1.73, 0.66, 0.30
Tasks: 330 total,   1 running, 329 sleeping,   0 stopped,   0 zombie
%Cpu(s): 20.9 us,  0.6 sy,  0.0 ni, 78.3 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :  15415.5 total,   5837.9 free,   5436.0 used,   4141.6 buff/cache
MiB Swap:   2048.0 total,   2048.0 free,      0.0 used.   9568.6 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                          
  22007 saint     20   0 5009436   1.1g  22464 S 345.3   7.2   1:02.74 java                                                                                             
  21927 saint     20   0 5607592   1.1g  23100 S   0.3   7.2   0:01.08 java

Sure enough, there are 2 java processes, each with 1.1g RES. This can also be confirmed by running the ps -aux | grep java command.

saint      21927  0.2  7.2 5607592 1141808 pts/2 Sl+  22:15   0:01 /home/saint/java/binaries/jdk/x64/jdk-21.0.1+12/bin/java -Xms1G -Xmx1G -XX:+AlwaysPreTouch -XX:+UseLargePages -jar jmh-jdk-microbenchmarks/micros-uber/target/micros-uberpackage-1.0-SNAPSHOT.jar -f 1 -si true -w 10 -i 5 -wi 5 -t 2 -o ./applogs/Parallel-1G-2024-02-02_22-15-03.txt -rff ./applogs/Parallel-1G-2024-02-02_22-15-03_machine.txt -rf text (?i)\.*(atomic|lock|volatile|ConcurrentHashMap|ProducerConsumer|Queues|ThreadLocalRandomNextInt)\.*
saint      22237  200  7.1 4876316 1125396 pts/2 Sl+  22:21   2:16 /home/saint/java/binaries/jdk/x64/jdk-21.0.1+12/bin/java -Xms1G -Xmx1G -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -DcompilerBlackholesEnabled=true -XX:CompileCommandFile=/tmp/jmh12734456823702200189compilecommand -cp jmh-jdk-microbenchmarks/micros-uber/target/micros-uberpackage-1.0-SNAPSHOT.jar org.openjdk.jmh.runner.ForkedMain 127.0.0.1 36835

Notice that one of the processes has an IP address and what is most likely a port number. Without knowing anything else, this would suggest a client/server model in use. To better understand why there are 2 java processes, let us look at how the run script launches Java. It passes flags like -f 1 to the benchmark jar file. What parses these flags?

The benchmark JAR file is created from a pom.xml file containing a mainClass attribute of org.openjdk.jmh.Main. That’s the class responsible for parsing these flags. This Main class uses the CommandLineOptions class to parse arguments like -f 1 then executes the Runner.run() method. Runner.runBenchmarks() checks whether the benchmarks should be run embedded or forked. runBenchmarksEmbedded() has a warning about using non-forked runs only for debugging purposes. This answers the question of why there are 2 JVMs: we are running in forked mode. runSeparate() has a getForkedMainCommand() method, which suggests that there is most likely a way to pass custom arguments to the ForkedMain JVM.
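
As an aside, forking itself is controllable from the command line: passing -f 0 makes the harness run the benchmarks embedded in a single JVM (the mode runBenchmarksEmbedded() warns should only be used for debugging). A sketch, reusing the uber jar path from the ps output above and one of the benchmarks matched by the filter regex:

$JAVA_HOME/bin/java -jar jmh-jdk-microbenchmarks/micros-uber/target/micros-uberpackage-1.0-SNAPSHOT.jar -f 0 ThreadLocalRandomNextInt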

Now that we understand why there are 2 JVMs, can we control the heap size of each of them independently? The CommandLineOptions class has a list of all the supported arguments. Notice the jvmArgs, jvmArgsPrepend, and jvmArgsAppend arguments! These suggest that we can indeed control the heap sizes of each of the JVMs. Use the jvmArgsAppend flag in the run_jmh_jdk_micros.sh script (e.g. just before the benchmark_filter_regex) to specify a custom heap size for the forked java process (independent of the jvm_heap_size_opts).
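
For example (a sketch of the kind of change I have in mind, not the exact contents of the script), appending -jvmArgsAppend to the java invocation would give the forked JVM its own heap size while the harness JVM keeps using $jvm_heap_size_opts:

$JAVA_HOME/bin/java $jvm_heap_size_opts -jar jmh-jdk-microbenchmarks/micros-uber/target/micros-uberpackage-1.0-SNAPSHOT.jar -f 1 -jvmArgsAppend "-Xms2G -Xmx2G" $benchmark_filter_regex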

Installing Maven

I did not have Java set up on my Ubuntu VM. Therefore, setting up MVN installed Java and many other dependencies. I decided to list these here for future reference.

saint@vm1ubuntu:~/java$ sudo apt install maven
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  ca-certificates-java default-jre-headless java-common libaopalliance-java libapache-pom-java libatinject-jsr330-api-java libcdi-api-java libcommons-cli-java
  libcommons-io-java libcommons-lang3-java libcommons-parent-java libgeronimo-annotation-1.3-spec-java libgeronimo-interceptor-3.0-spec-java libguava-java
  libguice-java libhawtjni-runtime-java libjansi-java libjansi-native-java libjsr305-java libmaven-parent-java libmaven-resolver-java libmaven-shared-utils-java
  libmaven3-core-java libplexus-cipher-java libplexus-classworlds-java libplexus-component-annotations-java libplexus-interpolation-java libplexus-sec-dispatcher-java
  libplexus-utils2-java libsisu-inject-java libsisu-plexus-java libslf4j-java libwagon-file-java libwagon-http-shaded-java libwagon-provider-api-java
  openjdk-11-jre-headless
Suggested packages:
  default-jre libaopalliance-java-doc libatinject-jsr330-api-java-doc libel-api-java libcommons-io-java-doc libcommons-lang3-java-doc libasm-java libcglib-java
  libjsr305-java-doc libmaven-shared-utils-java-doc liblogback-java libplexus-classworlds-java-doc libplexus-sec-dispatcher-java-doc libplexus-utils2-java-doc junit4
  testng libcommons-logging-java liblog4j1.2-java fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei | fonts-wqy-zenhei
The following NEW packages will be installed:
  ca-certificates-java default-jre-headless java-common libaopalliance-java libapache-pom-java libatinject-jsr330-api-java libcdi-api-java libcommons-cli-java
  libcommons-io-java libcommons-lang3-java libcommons-parent-java libgeronimo-annotation-1.3-spec-java libgeronimo-interceptor-3.0-spec-java libguava-java
  libguice-java libhawtjni-runtime-java libjansi-java libjansi-native-java libjsr305-java libmaven-parent-java libmaven-resolver-java libmaven-shared-utils-java
  libmaven3-core-java libplexus-cipher-java libplexus-classworlds-java libplexus-component-annotations-java libplexus-interpolation-java libplexus-sec-dispatcher-java
  libplexus-utils2-java libsisu-inject-java libsisu-plexus-java libslf4j-java libwagon-file-java libwagon-http-shaded-java libwagon-provider-api-java maven
  openjdk-11-jre-headless
0 upgraded, 37 newly installed, 0 to remove and 72 not upgraded.
Need to get 52.7 MB of archives.
After this operation, 189 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Categories: Assembly, Java

Inspecting Code in JitWatch

Developers disassemble! Use Java and hsdis to see it all. (oracle.com) is an excellent introduction to using the hotspot disassembler to view the instructions generated by HotSpot for a Java program. It also introduces JITWatch.

JITWatch processes the JIT compilation logs that are output by the JVM and explains the optimization decisions made by the JIT compilers.

Developers disassemble! Use Java and hsdis to see it all. (oracle.com)

Let us try using JITWatch on the sample Factorization program I have been using to learn about systems performance. Use these instructions from that blog post to get JITWatch:

git clone https://github.com/AdoptOpenJDK/jitwatch.git
cd jitwatch
mvn clean package
# Produces an executable jar in ui/target/jitwatch-ui-shaded.jar

java -jar ui/target/jitwatch-ui-shaded.jar

Start the factorization sample application such that a hotspot log file is generated. To do so, use the flags listed in the JITWatch Instructions · AdoptOpenJDK/jitwatch Wiki (github.com). I decided to redirect the output to a file to avoid filling the screen with the additional logging output.

$JAVA_HOME/bin/java -XX:+UnlockDiagnosticVMOptions -Xlog:class+load=info -XX:+LogCompilation -XX:+PrintAssembly Factorize 897151542039582592342572091 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 6 > logfile.txt

Loading the HotSpot Log

Click on the “Open Log” button in JITWatch then select the hotspot*.log file. Next, click on the Start button to process the JIT log.

Opening a HotSpot Log File
Processed HotSpot Log
Viewing JIT-compiled Class Members

Clicking on a class member opens another window with the corresponding assembly instructions generated by the JIT. I haven’t set up any source code locations but the assembly instructions are still displayed.

Setting up MVN on Windows

To run JITWatch on Windows, download the Maven binaries from Maven – Download Apache Maven and verify the hashes using certutil. Extract the downloaded .zip file using tar. Here are the instructions I used in Git Bash.

mkdir -p /c/java/binaries/apache
cd /c/java/binaries/apache

curl -Lo apache-maven-3.9.3-bin.zip https://dlcdn.apache.org/maven/maven-3/3.9.3/binaries/apache-maven-3.9.3-bin.zip

certutil -hashfile apache-maven-3.9.3-bin.zip SHA512
# shasum -a 512 apache-maven-3.9.3-bin.zip

tar xf apache-maven-3.9.3-bin.zip

Add MAVEN_HOME to the system PATH environment variable as described at How to Install Maven on Windows {Step-by-Step Guide} (phoenixnap.com) – or run these commands in an admin command prompt. Note that I echo the PATH first: if the new PATH is too long, setx truncates it (WARNING: The data being saved is truncated to 1024 characters), but the previous value will still be on screen if it needs to be restored. See the pitfalls of setx at setx | Microsoft Learn. The quotes around the new path prevent issues like cmd – Invalid syntax. Default option is not allowed more than ‘2’ time(s) – Stack Overflow.

set MAVEN_HOME=C:\java\binaries\apache\apache-maven-3.9.3
setx /M MAVEN_HOME %MAVEN_HOME%

echo %PATH%
setx /M PATH "%PATH%;%MAVEN_HOME%\bin"

Now build the JITWatch sources in a command prompt:

cd \java\repos\AdoptOpenJDK\jitwatch
C:\java\binaries\apache\apache-maven-3.9.3\bin\mvn clean package
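
Once the build completes, the UI can be launched the same way as on Linux (a sketch, assuming a JDK’s bin directory is on the PATH):

java -jar ui\target\jitwatch-ui-shaded.jar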

Categories: Java, Profiling

Experimenting with Async Profiler

I have been studying the performance of a simple Java application (for integer factorization) using async-profiler. The application’s source code is on GitHub.

async-profiler is a low overhead sampling profiler for Java that does not suffer from Safepoint bias problem.

async-profiler repo

There is also a 3-part talk about async-profiler demo-ing how it works and how to use it.

I downloaded the Linux x64 async-profiler build with these commands…

mkdir -p ~/java/binaries/async-profiler
cd ~/java/binaries/async-profiler

curl -Lo async-profiler-2.9-linux-x64.tar.gz https://github.com/jvm-profiling-tools/async-profiler/releases/download/v2.9/async-profiler-2.9-linux-x64.tar.gz

tar xzf async-profiler-2.9-linux-x64.tar.gz

… then started the application with these:

# macos:
export JAVA_HOME=~/java/binaries/jdk/x64/jdk-17.0.7+7/Contents/Home

# Linux:
export JAVA_HOME=~/java/binaries/jdk/x64/jdk-17.0.7+7/

cd ~/repos/scratchpad/demos/java/FindPrimes/
$JAVA_HOME/bin/java Factorize 91278398257125987

Once the application is running, use the profiler.sh script to attach to the Java process and start profiling it. I was interested in wall clock profiling. This is specified using the -e wall argument (see Part 2: Improving Performance with Async-profiler by Andrei Pangin. – YouTube). The command line below will profile the Java application with a 5ms sampling interval for a duration (-d) of 10 seconds.

# macos:
cd ~/java/binaries/async-profiler-2.9-macos

# Linux:
cd ~/java/binaries/async-profiler-2.9-linux

./profiler.sh -e wall -t -i 5ms -d 10 -f result.html jps

The jps argument above lets the profiler.sh script determine which Java process is running by calling jps (The jps Command (oracle.com)). If there are multiple Java processes, run jps first to determine the process id of the one to be profiled, then explicitly pass that pid to profiler.sh, e.g.

jps
./profiler.sh -e wall -t -i 5ms -d 10 -f result.html 53361
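
The profiler can also be started and stopped explicitly instead of running for a fixed duration (a sketch using the same pid; start, status, and stop are documented profiler.sh actions):

./profiler.sh start -e wall -t -i 5ms 53361
./profiler.sh status 53361
./profiler.sh stop -f result.html 53361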

Async-profiler can also be attached at application startup.

$JAVA_HOME/bin/java -agentpath:$HOME/java/binaries/async-profiler-2.9-macos/build/libasyncProfiler.dylib=start,event=wall,threads,file=out.html Factorize

Other Event Types

The event type we have used so far is the wall clock event. Other event types include cpu and lock. The latter mode is mentioned at 32:15 in Part 2: Improving Performance with Async-profiler by Andrei Pangin. – YouTube. Here’s a sample command:

~/java/binaries/async-profiler/async-profiler-2.9-linux-x64/profiler.sh -e lock -i 5ms -d 10 -f result.html jps

Output Formats

The profiling data can be written in various formats. Here is an example command line used to explore what the traces output looks like. See scratchpad/factorization-profile.sh · swesonga/scratchpad · GitHub for additional examples of output formats.

cd ~/java/binaries/async-profiler/async-profiler-2.9-linux-x64
./profiler.sh -e wall -i 5ms -d 10 -o flat,traces -f Factorize-flat+traces.html 69490

To convert the async-profiler data from one format to another, use converter.jar from the downloaded tar file:

$JAVA_HOME/bin/java -cp ~/java/binaries/async-profiler/async-profiler-2.9-linux-x64/build/converter.jar FlameGraph rawdata.txt output.html
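
converter.jar also bundles a jfr2flame converter for turning a JFR recording into a flame graph. A sketch (recording.jfr is a hypothetical input file, not something produced earlier in this post):

$JAVA_HOME/bin/java -cp ~/java/binaries/async-profiler/async-profiler-2.9-linux-x64/build/converter.jar jfr2flame recording.jfr recording-flamegraph.html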

Other Notes

To find out file types on macOS, run file -I rawdata. In my case, I had flamegraph data that was shared as application/gzip (causing unzip to fail with "End-of-central-directory signature not found"). I needed to use gzip -d rawdata instead.


Categories: Java

Checking Symbol Availability on Windows OpenJDK Build

We use SymChk to ensure that symbols are available for Windows applications. For the OpenJDK build, this command line can be used to ensure the symbols directory contains symbols for all the Java binaries:

"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\symchk" /r D:\java\binaries\jdk\x64\jdk-17.0.7+7\ /s D:\java\binaries\jdk\x64\jdk-17.0.7+7-debug-symbols\bin;D:\java\binaries\jdk\x64\jdk-17.0.7+7-debug-symbols\bin\server

As per the SymChk Command-Line Options docs, the /r option specifies the files or directories (searched recursively) whose symbols are to be checked, and the /s option specifies the symbol file search path.

Here’s the tail end of the SymChk output:

...
SYMCHK: api-ms-win-crt-utility-l1-1-0.dll FAILED  - api-ms-win-crt-utility-l1-1-0.pdb mismatched or not found
SYMCHK: msvcp140.dll         FAILED  - msvcp140.amd64.pdb mismatched or not found
SYMCHK: ucrtbase.dll         FAILED  - ucrtbase.pdb mismatched or not found
SYMCHK: vcruntime140.dll     FAILED  - vcruntime140.amd64.pdb mismatched or not found
SYMCHK: vcruntime140_1.dll   FAILED  - vcruntime140_1.amd64.pdb mismatched or not found

SYMCHK: FAILED files = 46
SYMCHK: PASSED + IGNORED files = 440

The components that have failures are binaries that are external dependencies of the OpenJDK. Those failures can therefore be safely ignored. An interesting thing to note is that java.dll and java.exe are in the same folder in the OpenJDK installation. Since their symbol files are both called java.pdb, the symbols for java.exe are placed in a subdirectory called exe. This applies to other binaries with similar PDB filename conflicts. See the Symbol Path Syntax section for more details.

The symbols provided also come with .map files. A .map vs pdb search reveals some interesting tidbits about .map files, e.g. that they are an older technology than PDB files, which superseded them (Build Time Improvement Recommendation: Turn off /MAP, use PDBs – C++ Team Blog), and that they can be created from PDB files (windows – How to create a .MAP file from a .PDB file – Stack Overflow). See debugging – Why should we need the map file when pdb file is available in windows platform? – Stack Overflow also.


Categories: Java

Introduction to the Java Flight Recorder (JFR)

As a total newbie to the Java flight recorder, I found these posts helpful in understanding the history and goals of JFR:

The .java_pid file extension came up and I was not familiar with it. The answer at java – How to get rid of /tmp/.java_pid<number> files in Linux? – Stack Overflow explains that these files are created by the JVM to support debugging as part of the attach api. Here is a more recent link to VirtualMachineImpl.java (and the related VirtualMachineImpl.c). Here is a simple walk-through (from the above posts) showing how to use JFR. I used Ubuntu 22.04 for this. First, download a JDK build.

mkdir -p ~/java/binaries/jdk/x64
cd ~/java/binaries/jdk/x64

sudo apt install curl
curl -Lo microsoft-jdk-11.0.18-linux-x64.tar.gz https://aka.ms/download-jdk/microsoft-jdk-11.0.18-linux-x64.tar.gz

tar -xzf microsoft-jdk-11.0.18-linux-x64.tar.gz

Next, get and compile the Java program that will be used to experiment with JFR (I used the Red Hat leaks demo).

curl -Lo RedHatLeaksDemo.java https://raw.githubusercontent.com/swesonga/scratchpad/baa0263f480b6d5c5446be90f572b2a7897279fa/demos/java/RedHatLeaksDemo.java

jdk-11.0.18+10/bin/javac RedHatLeaksDemo.java

Starting JFR Using Java Command Line Flags

Use the -XX:StartFlightRecording flag as suggested by the referenced posts above.

jdk-11.0.18+10/bin/java -XX:StartFlightRecording=duration=5s,filename=flightRedHatLeaks.jfr RedHatLeaksDemo

A message will be displayed notifying you that recording has started.

Started recording 1. The result will be written to:

/home/saint/java/binaries/jdk/x64/flightRedHatLeaks.jfr
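
Other recording options can be combined in the same flag, e.g. a delayed start and the more detailed "profile" settings (a sketch based on the documented StartFlightRecording options rather than on the posts referenced above):

jdk-11.0.18+10/bin/java -XX:StartFlightRecording=delay=10s,duration=60s,settings=profile,filename=flight-profile.jfr RedHatLeaksDemo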

Starting JFR Using jcmd

To start a recording after application startup, use jcmd.

# Start the application normally
jdk-11.0.18+10/bin/java RedHatLeaksDemo

Launch a new terminal then run jcmd to start a JFR recording.

# Determine the pid of the java process
ps -a | grep java

jdk-11.0.18+10/bin/jcmd 13573 JFR.start duration=100s filename=flight-jcmd.jfr
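
jcmd can also be used to inspect and manage the recording after it has started (a sketch, assuming the same pid; "1" is the recording number reported when the recording started):

# List in-progress recordings
jdk-11.0.18+10/bin/jcmd 13573 JFR.check

# Dump what has been captured so far without waiting for the duration to elapse
jdk-11.0.18+10/bin/jcmd 13573 JFR.dump name=1 filename=flight-partial.jfr

# Stop the recording
jdk-11.0.18+10/bin/jcmd 13573 JFR.stop name=1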

Viewing Results Using VisualVM

Download VisualVM or get and launch it using these commands:

mkdir -p ~/java/binaries/visualvm
cd ~/java/binaries/visualvm

curl -Lo visualvm_215.zip https://github.com/oracle/visualvm/releases/download/2.1.5/visualvm_215.zip

unzip visualvm_215.zip
visualvm_215/bin/visualvm --jdkhome ~/java/binaries/jdk/x64/jdk-11.0.18+10

Once VisualVM loads, browse to and open the .jfr file.


Categories: I/O, Java, Windows

Cannot Truncate Mapped File in Windows

One of my colleagues shared this simple Java program with me.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class TruncateFile {
    public static void main(String[] args) throws IOException {
        RandomAccessFile file = new RandomAccessFile("data.txt", "rw");
        FileChannel channel = file.getChannel();
        // Map the first 10 bytes of the file, creating a user-mapped section
        channel.map(FileChannel.MapMode.READ_WRITE, 0, 10);
        // Try to shrink the file while the mapping is still active
        channel.truncate(9);
    }
}

This program can be launched using source-file mode for convenience: /c/java/binaries/jdk/x64/jdk-19.0.1+10/bin/java.exe TruncateFile.java. It failed with the exception below:

Exception in thread "main" java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
        at java.base/sun.nio.ch.FileDispatcherImpl.truncate0(Native Method)
        at java.base/sun.nio.ch.FileDispatcherImpl.truncate(FileDispatcherImpl.java:90)
        at java.base/sun.nio.ch.FileChannelImpl.truncate(FileChannelImpl.java:490)
        at TruncateFile.main(TruncateFile.java:10)

Why does it fail on Windows? I launched Process Monitor, started capturing events, and then ran the above java.exe command and finally stopped capturing events. The SetFileInformationByHandle function failed as indicated by the USER MAPPED FILE entry in its result column.

Process Monitor File Truncation Events

I created a Windows console application, TruncateFile, that also maps a file (using the CreateFileMappingW function) and then truncates it using the SetFileInformationByHandle function, to get some insight into whether this behavior is the result of a Java bug. Truncating the file while it is mapped indeed fails with ERROR_USER_MAPPED_FILE (The requested operation cannot be performed on a file with a user-mapped section open). This does appear to be a Windows limitation rather than a Java bug. It is likely the issue another user ran into at Memory mapped file truncation on windows – Stack Overflow.


Categories: Java

Large Pages for OpenJDK on Windows

I decided to take a quick peek into [JDK-8296467] Large Pages fails when allocating 8GB+ on Windows/Java 19 – Java Bug System (openjdk.org) using this command. See resultant output below.

/c/java/binaries/jdk/x64/jdk-19.0.1+10/bin/java.exe -Xms8G -Xmx8G -XX:+UseLargePages -Xlog:gc+init --version

The local build I used to investigate this issue was from this commit.

OpenJDK 64-Bit Server VM warning: JVM cannot use large page memory because it does not have enough privilege to lock pages in memory.
[0.099s][info][gc,init] CardTable entry size: 512
[0.229s][info][gc,init] Version: 19.0.1+10 (release)
[0.230s][info][gc,init] CPUs: 8 total, 8 available
[0.230s][info][gc,init] Memory: 16301M
[0.230s][info][gc,init] Large Page Support: Disabled
[0.231s][info][gc,init] NUMA Support: Disabled
[0.231s][info][gc,init] Compressed Oops: Enabled (Zero based)
[0.231s][info][gc,init] Heap Region Size: 4M
[0.231s][info][gc,init] Heap Min Capacity: 8G
[0.232s][info][gc,init] Heap Initial Capacity: 8G
[0.232s][info][gc,init] Heap Max Capacity: 8G
[0.232s][info][gc,init] Pre-touch: Disabled
[0.232s][info][gc,init] Parallel Workers: 8
[0.232s][info][gc,init] Concurrent Workers: 2
[0.233s][info][gc,init] Concurrent Refinement Workers: 8
[0.233s][info][gc,init] Periodic GC: Disabled
openjdk 19.0.1 2022-10-18
OpenJDK Runtime Environment Temurin-19.0.1+10 (build 19.0.1+10)
OpenJDK 64-Bit Server VM Temurin-19.0.1+10 (build 19.0.1+10, mixed mode, sharing)

That error message comes from the large_page_init_decide_size() function. It calls request_lock_memory_privilege(), which:

  1. Calls OpenProcessToken to open the access token associated with the process.
  2. Calls LookupPrivilegeValue to determine the locally unique identifier (LUID) used to locally represent the SeLockMemoryPrivilege privilege name.
  3. Calls AdjustTokenPrivileges to enable the SeLockMemoryPrivilege in the process access token.

On my system, AdjustTokenPrivileges failed with ERROR_NOT_ALL_ASSIGNED. This is the stack when the failure occurs.

jvm.dll!large_page_init_decide_size()	C++
jvm.dll!os::large_page_init()	C++
jvm.dll!os::init_before_ergo()	C++
jvm.dll!Threads::create_vm(JavaVMInitArgs * args, bool * canTryAgain)	C++
jvm.dll!JNI_CreateJavaVM_inner(JavaVM_ * * vm, void * * penv, void * args)	C++
jvm.dll!JNI_CreateJavaVM(JavaVM_ * * vm, void * * penv, void * args)	C++
jli.dll!InitializeJVM(const JNIInvokeInterface_ * * * pvm, const JNINativeInterface_ * * * penv, InvocationFunctions * ifn)	C
jli.dll!JavaMain(void * _args)	C
jli.dll!ThreadJavaMain(void * args)	C

The reason for the error is that the Lock pages in memory policy did not have any allowed users. As documented at AdjustTokenPrivileges, use the Local Security Policy app to change the security policy.

Lock Pages in Memory Security Policy
Adding a User to the “Lock pages in memory” policy

Adding myself to this list didn’t appear to have any effect. I came back to this experiment a few weeks later and found that the issue was gone. Perhaps a reboot (or at least signing out and back in, since privileges are added to the access token at logon) was necessary to pick up the change. I also found the remarks on VirtualAlloc and MEM_LARGE_PAGES – The Old New Thing (microsoft.com) and the article on Large-Page Support useful.
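
One way to check whether the privilege actually made it into the current token (my own addition, not something from the linked articles) is to list the current user's privileges in a new terminal:

whoami /priv

SeLockMemoryPrivilege should appear in the output once the policy change has taken effect for the current logon session.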

The output I got once the privilege issue was addressed is in the snippet below. To find these messages in the source code, run grep -Rin --include *.cpp "Failed to reserve and commit" ./src/.

OpenJDK 64-Bit Server VM warning: Failed to reserve and commit memory using large pages. req_addr: 0x0000000000000000 bytes: 251658240
[0.249s][info][gc,init] CardTable entry size: 512
OpenJDK 64-Bit Server VM warning: Failed to reserve and commit memory using large pages. req_addr: 0x0000000600000000 bytes: 8589934592
OpenJDK 64-Bit Server VM warning: Failed to reserve and commit memory using large pages. req_addr: 0x0000000000000000 bytes: 134217728
OpenJDK 64-Bit Server VM warning: Failed to reserve and commit memory using large pages. req_addr: 0x0000000000000000 bytes: 134217728
[0.374s][info][gc,init] Version: 19.0.1+10 (release)
[0.375s][info][gc,init] CPUs: 12 total, 12 available
[0.375s][info][gc,init] Memory: 32487M
[0.375s][info][gc,init] Large Page Support: Enabled
[0.376s][info][gc,init] NUMA Support: Disabled
[0.376s][info][gc,init] Compressed Oops: Enabled (Zero based)
[0.376s][info][gc,init] Heap Region Size: 4M
[0.377s][info][gc,init] Heap Min Capacity: 8G
[0.377s][info][gc,init] Heap Initial Capacity: 8G
[0.377s][info][gc,init] Heap Max Capacity: 8G
[0.378s][info][gc,init] Pre-touch: Disabled
[0.378s][info][gc,init] Parallel Workers: 10
[0.378s][info][gc,init] Concurrent Workers: 3
[0.379s][info][gc,init] Concurrent Refinement Workers: 10
[0.379s][info][gc,init] Periodic GC: Disabled
openjdk 19.0.1 2022-10-18
OpenJDK Runtime Environment Temurin-19.0.1+10 (build 19.0.1+10)
OpenJDK 64-Bit Server VM Temurin-19.0.1+10 (build 19.0.1+10, mixed mode, sharing)

Here is the call stack where the actual memory reservation happens (the call to the VirtualAlloc function):

jvm.dll!virtualAlloc(void * lpAddress, unsigned __int64 dwSize, unsigned long flAllocationType, unsigned long flProtect)	C++
jvm.dll!reserve_large_pages_single_range(unsigned __int64 size, char * req_addr, bool exec)	C++
jvm.dll!reserve_large_pages(unsigned __int64 size, char * req_addr, bool exec)	C++
jvm.dll!os::pd_reserve_memory_special(unsigned __int64 bytes, unsigned __int64 alignment, unsigned __int64 page_size, char * addr, bool exec)	C++
jvm.dll!os::reserve_memory_special(unsigned __int64 size, unsigned __int64 alignment, unsigned __int64 page_size, char * addr, bool executable)	C++
jvm.dll!reserve_memory_special(char * requested_address, const unsigned __int64 size, const unsigned __int64 alignment, const unsigned __int64 page_size, bool exec)	C++
jvm.dll!ReservedSpace::reserve(unsigned __int64 size, unsigned __int64 alignment, unsigned __int64 page_size, char * requested_address, bool executable)	C++
jvm.dll!ReservedSpace::initialize(unsigned __int64 size, unsigned __int64 alignment, unsigned __int64 page_size, char * requested_address, bool executable)	C++
jvm.dll!ReservedCodeSpace::ReservedCodeSpace(unsigned __int64 r_size, unsigned __int64 rs_align, unsigned __int64 rs_page_size)	C++
jvm.dll!CodeCache::reserve_heap_memory(unsigned __int64 size)	C++
jvm.dll!CodeCache::initialize_heaps()	C++
jvm.dll!CodeCache::initialize()	C++
jvm.dll!codeCache_init()	C++
jvm.dll!init_globals()	C++
jvm.dll!Threads::create_vm(JavaVMInitArgs * args, bool * canTryAgain)	C++
jvm.dll!JNI_CreateJavaVM_inner(JavaVM_ * * vm, void * * penv, void * args)	C++
jvm.dll!JNI_CreateJavaVM(JavaVM_ * * vm, void * * penv, void * args)	C++
jli.dll!InitializeJVM(const JNIInvokeInterface_ * * * pvm, const JNINativeInterface_ * * * penv, InvocationFunctions * ifn)	C
jli.dll!JavaMain(void * _args)	C
jli.dll!ThreadJavaMain(void * args)	C

In the reserve_memory_special and reserve_large_pages_single_range functions, I noticed that there is additional logging available using the pagesize tag (trace is the most verbose product-build log level as per JEP 158: Unified JVM Logging (openjdk.org)). Here’s how to write the pagesize logs to disk:

/c/java/binaries/jdk/x64/jdk-19.0.1+10/bin/java.exe -Xms8G -Xmx8G -XX:+UseLargePages -Xlog:gc+init -Xlog:pagesize=trace:file=pagesize.txt::filecount=0 --version

The virtualAlloc function uses a logging wrapper to capture and log the result of the VirtualAlloc Windows function. It uses a separate logging tag (os). To capture both OS and pagesize logs to disk, use this command:

/c/java/binaries/jdk/x64/jdk-19.0.1+10/bin/java.exe -Xms8G -Xmx8G -XX:+UseLargePages -Xlog:gc+init -Xlog:pagesize=trace:file=pagesize.txt::filecount=0 -Xlog:os=trace:file=os.txt::filecount=0 --version
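
Both tag selections can also be directed to a single file by combining them in one -Xlog option (a variation I am noting for completeness; the separate pagesize.txt and os.txt files are what the rest of this post uses):

/c/java/binaries/jdk/x64/jdk-19.0.1+10/bin/java.exe -Xms8G -Xmx8G -XX:+UseLargePages -Xlog:gc+init -Xlog:pagesize=trace,os=trace:file=os-and-pagesize.txt::filecount=0 --version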

Here’s a snippet of the beginning of pagesize.txt.

[0.070s][trace][pagesize] Attempt special mapping: size: 240M, alignment: 2M
[0.071s][debug][pagesize] Reserving large pages in a single large chunk.
[0.320s][info ][pagesize] CodeHeap 'non-nmethods':  min=4M max=6M base=0x000002a9c5c00000 page_size=2M size=6M
[0.320s][info ][pagesize] CodeHeap 'profiled nmethods':  min=4M max=116M base=0x000002a9be800000 page_size=2M size=116M
[0.320s][info ][pagesize] CodeHeap 'non-profiled nmethods':  min=4M max=118M base=0x000002a9c6200000 page_size=2M size=118M
[0.324s][trace][pagesize] Attempt special mapping: size: 8G, alignment: 4M
[0.324s][debug][pagesize] Reserving large pages in a single large chunk.
[0.948s][info ][pagesize] Heap:  min=8G max=8G base=0x0000000600000000 page_size=4K size=8G

And below is the beginning of the os.txt log. Notice that the arguments to VirtualAlloc are logged, as well as its return value and the error code when a failure occurs. For example, the 2nd-to-last line shows that the 8GB allocation with large pages failed with error code 1450 (ERROR_NO_SYSTEM_RESOURCES aka Insufficient system resources exist to complete the requested service).

[0.052s][debug][os] Initial active processor count set to 12
[0.059s][trace][os] VirtualAlloc(0x0000000000000000, 8192, 2000, 4) returned 0x000002a9b5c60000.
[0.060s][trace][os] VirtualAlloc(0x000002a9b5c60000, 8192, 1000, 4) returned 0x000002a9b5c60000.
[0.060s][info ][os] SafePoint Polling address, bad (protected) page:0x000002a9b5c60000, good (unprotected) page:0x000002a9b5c61000
[0.065s][trace][os] VirtualAlloc(0x000000bc22600000, 16384, 1000, 4) returned 0x000000bc22600000.
[0.066s][info ][os] attempting shared library load of C:\java\binaries\jdk\x64\jdk-19.0.1+10\bin\java.dll
[0.069s][info ][os] shared library load of C:\java\binaries\jdk\x64\jdk-19.0.1+10\bin\java.dll was successful
[0.319s][trace][os] VirtualAlloc(0x0000000000000000, 251658240, 20003000, 40) returned 0x000002a9be800000.
[0.320s][trace][os] VirtualAlloc(0x0000000000000000, 65536, 2000, 4) returned 0x000002a9b5c80000.
[0.320s][trace][os] VirtualAlloc(0x000002a9b5c80000, 32768, 1000, 4) returned 0x000002a9b5c80000.
[0.320s][trace][os] VirtualAlloc(0x0000000000000000, 983040, 2000, 4) returned 0x000002a9b5c90000.
[0.320s][trace][os] VirtualAlloc(0x000002a9b5c90000, 32768, 1000, 4) returned 0x000002a9b5c90000.
[0.320s][trace][os] VirtualAlloc(0x0000000000000000, 983040, 2000, 4) returned 0x000002a9b5d80000.
[0.320s][trace][os] VirtualAlloc(0x000002a9b5d80000, 32768, 1000, 4) returned 0x000002a9b5d80000.
[0.944s][info ][os] VirtualAlloc(0x0000000600000000, 8589934592, 20003000, 4) failed (1450).
[0.947s][trace][os] VirtualAlloc(0x0000000600000000, 8589934592, 2000, 4) returned 0x0000000600000000.

The JBS bug made it sound like this feature worked in JDK 17 so I ran the same command using JDK 17.0.5.

/c/java/binaries/jdk/x64/jdk-17.0.5+8/bin/java.exe -Xms8G -Xmx8G -XX:+UseLargePages -Xlog:gc+init -Xlog:pagesize=trace:file=pagesize17.txt::filecount=0 -Xlog:os=trace:file=os17.txt::filecount=0 --version

The head of pagesize17.txt looks quite similar to its JDK 19 counterpart.

[0.122s][trace][pagesize] Attempt special mapping: size: 240M, alignment: 2M
[0.122s][debug][pagesize] Reserving large pages in a single large chunk.
[0.434s][info ][pagesize] CodeHeap 'non-nmethods':  min=4M max=6M base=0x000001c237e00000 page_size=2M size=6M
[0.434s][info ][pagesize] CodeHeap 'profiled nmethods':  min=4M max=116M base=0x000001c238400000 page_size=2M size=116M
[0.434s][info ][pagesize] CodeHeap 'non-profiled nmethods':  min=4M max=118M base=0x000001c23f800000 page_size=2M size=118M
[0.435s][trace][pagesize] Attempt special mapping: size: 8G, alignment: 4M
[0.435s][debug][pagesize] Reserving large pages in a single large chunk.
[1.039s][info ][pagesize] Heap:  min=8G max=8G base=0x0000000600000000 page_size=4K size=8G

Interestingly, os17.txt shows the same failure! The difference is that no warning message is output to the console.

[0.078s][debug][os] Initial active processor count set to 12
[0.083s][trace][os] VirtualAlloc(0x0000000000000000, 8192, 2000, 4) returned 0x000001c22fa90000.
[0.083s][trace][os] VirtualAlloc(0x000001c22fa90000, 8192, 1000, 4) returned 0x000001c22fa90000.
[0.083s][info ][os] SafePoint Polling address, bad (protected) page:0x000001c22fa90000, good (unprotected) page:0x000001c22fa91000
[0.086s][trace][os] VirtualAlloc(0x000000307c500000, 16384, 1000, 4) returned 0x000000307c500000.
[0.086s][info ][os] attempting shared library load of C:\java\binaries\jdk\x64\jdk-17.0.5+8\bin\java.dll
[0.121s][info ][os] shared library load of C:\java\binaries\jdk\x64\jdk-17.0.5+8\bin\java.dll was successful
[0.433s][trace][os] VirtualAlloc(0x0000000000000000, 251658240, 20003000, 40) returned 0x000001c237e00000.
[0.434s][trace][os] VirtualAlloc(0x0000000000000000, 65536, 2000, 4) returned 0x000001c22fab0000.
[0.434s][trace][os] VirtualAlloc(0x000001c22fab0000, 32768, 1000, 4) returned 0x000001c22fab0000.
[0.434s][trace][os] VirtualAlloc(0x0000000000000000, 983040, 2000, 4) returned 0x000001c22fac0000.
[0.434s][trace][os] VirtualAlloc(0x000001c22fac0000, 32768, 1000, 4) returned 0x000001c22fac0000.
[0.434s][trace][os] VirtualAlloc(0x0000000000000000, 983040, 2000, 4) returned 0x000001c22fbb0000.
[0.434s][trace][os] VirtualAlloc(0x000001c22fbb0000, 32768, 1000, 4) returned 0x000001c22fbb0000.
[1.038s][info ][os] VirtualAlloc(0x0000000600000000, 8589934592, 20003000, 4) failed (1450).
[1.039s][trace][os] VirtualAlloc(0x0000000600000000, 8589934592, 2000, 4) returned 0x0000000600000000.

The git blame view of the JDK 19 warning message shows that it was introduced by 8271195: Use largest available large page size smaller than LargePage… · openjdk/jdk@08cadb4 (github.com). The corresponding JBS issue is [JDK-8271195] Use largest available large page size smaller than LargePageSizeInBytes when available – Java Bug System (openjdk.org) and it confirms that the message was introduced in JDK 19. So the problem indeed exists in both builds but JDK 19 provides a better experience because it warns you that it could not use large pages as you requested.

Outstanding Items