saint@MacBookPro build_llvm_Aarch64 % pwd
/Users/saint/repos/llvm/llvm-project/build_llvm_Aarch64
saint@MacBookPro build_llvm_Aarch64 % cmake ~/repos/llvm/llvm-project
CMake Warning:
Ignoring extra path from command line:
"/Users/saint/repos/llvm/llvm-project"
CMake Error: The source directory "/Users/saint/repos/llvm/llvm-project" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
saint@MacBookPro build_llvm_Aarch64 % cmake ~/repos/llvm/llvm-project/llvm
-- The C compiler identification is AppleClang 17.0.0.17000404
-- The CXX compiler identification is AppleClang 17.0.0.17000404
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:82 (message):
No build type selected. You need to pass -DCMAKE_BUILD_TYPE=<type> in
order to configure LLVM.
Available options are:
* -DCMAKE_BUILD_TYPE=Release - For an optimized build with no assertions or debug info.
* -DCMAKE_BUILD_TYPE=Debug - For an unoptimized build with assertions and debug info.
* -DCMAKE_BUILD_TYPE=RelWithDebInfo - For an optimized build with no assertions but with debug info.
* -DCMAKE_BUILD_TYPE=MinSizeRel - For a build optimized for size instead of speed.
Learn more about these options in our documentation at
https://llvm.org/docs/CMake.html#cmake-build-type
-- Configuring incomplete, errors occurred!
See also "/Users/saint/repos/llvm/llvm-project/build_llvm_Aarch64/CMakeFiles/CMakeOutput.log".
See also "/Users/saint/repos/llvm/llvm-project/build_llvm_Aarch64/CMakeFiles/CMakeError.log".
I don’t know why it wouldn’t just default to the release build. Providing that flag was sufficient for the configuration and build to succeed.
My first successful build was via the cmake --build . command, which took over 3 hours on my M1. This was longer than I expected, but the process was very simple overall compared to Windows AArch64 cross-compilation.
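For the record, here is the complete sequence that worked for me on macOS, with paths from my checkout (adjust to yours). Note that CMake must be pointed at the llvm/ subdirectory, not the repository root:

```shell
# Configure and build LLVM on macOS AArch64 (paths from my setup)
cd ~/repos/llvm/llvm-project/build_llvm_Aarch64
cmake ../llvm -DCMAKE_BUILD_TYPE=Release   # the build type must be specified explicitly
cmake --build .                            # took over 3 hours on my M1
```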
Below are some example commands showing how to select which assembly instruction the SpinWait() call uses and how many instances of it should be emitted. On my Surface Pro X, the SB instruction is not supported. A good post to read about various barrier instructions is The AArch64 processor (aka arm64), part 14: Barriers – The Old New Thing.
$ $JDKTOTEST/bin/java -Xcomp -XX:-TieredCompilation -XX:+UnlockDiagnosticVMOptions -XX:OnSpinWaitInst=sb ProducerConsumerLoops
Error occurred during initialization of VM
OnSpinWaitInst is SB but current CPU does not support SB instruction
$ $JDKTOTEST/bin/java -Xcomp -XX:-TieredCompilation -XX:+UnlockDiagnosticVMOptions -XX:OnSpinWaitInst=nop -XX:OnSpinWaitInstCount=5 ProducerConsumerLoops
The snippet below shows the 4 instructions in the spin_wait stub when the isb instruction is selected with a count of 3 (after copying the Linux SpinPause implementation). Whether or not this is a good idea is not the point; the goal is to show what the flags do.
000001800B9D0700 isb sy
000001800B9D0704 isb sy
000001800B9D0708 isb sy
000001800B9D070C ret
extern "C" {
int SpinPause() {
00007FFAB97A5CF8 stp fp,lr,[sp,#-0x20]!
00007FFAB97A5CFC mov fp,sp
using spin_wait_func_ptr_t = void (*)();
spin_wait_func_ptr_t func = CAST_TO_FN_PTR(spin_wait_func_ptr_t, StubRoutines::aarch64::spin_wait());
00007FFAB97A5D00 bl StubRoutines::aarch64::spin_wait (07FFAB97A63B8h)+#0xFFFF8005DA859DF6
00007FFAB97A5D04 mov x8,x0
00007FFAB97A5D08 str x8,[sp,#0x10]
assert(func != nullptr, "StubRoutines::aarch64::spin_wait must not be null.");
00007FFAB97A5D0C mov w8,#0
00007FFAB97A5D10 cmp w8,#0
00007FFAB97A5D14 bne SpinPause+34h (07FFAB97A5D2Ch)
00007FFAB97A5D18 bl DebuggingContext::is_enabled (07FFAB8619D18h)+#0xFFFF8005DF5832E8
00007FFAB97A5D1C uxtb w8,w0
00007FFAB97A5D20 mov w8,w8
00007FFAB97A5D24 cmp w8,#0
00007FFAB97A5D28 bne SpinPause+74h (07FFAB97A5D6Ch)
00007FFAB97A5D2C ldr x8,[sp,#0x10]
00007FFAB97A5D30 cmp x8,#0
00007FFAB97A5D34 bne SpinPause+74h (07FFAB97A5D6Ch)
00007FFAB97A5D38 adrp x8,g_assert_poison (07FFABAA80F88h)+#0xFFFF800635588740
00007FFAB97A5D3C ldr x9,[x8,g_assert_poison (07FFABAA80F88h)+#0xFFFF80063E9FB581]
00007FFAB97A5D40 mov w8,#0x58
00007FFAB97A5D44 strb w8,[x9]
00007FFAB97A5D48 adrp x8,siglabels+690h (07FFABA5C5000h)
00007FFAB97A5D4C add x3,x8,#0x450
00007FFAB97A5D50 adrp x8,siglabels+690h (07FFABA5C5000h)
00007FFAB97A5D54 add x2,x8,#0x488
00007FFAB97A5D58 mov w1,#0x128
00007FFAB97A5D5C adrp x8,siglabels+690h (07FFABA5C5000h)
00007FFAB97A5D60 add x0,x8,#0x4B0
00007FFAB97A5D64 bl report_vm_error (07FFAB8D15210h)+#0xFFFF8005DF046B1B
00007FFAB97A5D68 nop
00007FFAB97A5D6C mov w8,#0
00007FFAB97A5D70 cmp w8,#0
00007FFAB97A5D74 bne SpinPause+14h (07FFAB97A5D0Ch)
(*func)();
00007FFAB97A5D78 ldr x8,[sp,#0x10]
00007FFAB97A5D7C blr x8
// If StubRoutines::aarch64::spin_wait consists of only a RET,
// SpinPause can be considered implemented. There will be a sequence
// of instructions for:
// - call of SpinPause
// - load of StubRoutines::aarch64::spin_wait stub pointer
// - indirect call of the stub
// - return from the stub
// - return from SpinPause
// So '1' always is returned.
return 1;
00007FFAB97A5D80 mov w0,#1
00007FFAB97A5D84 ldp fp,lr,[sp],#0x20
00007FFAB97A5D88 ret
00007FFAB97A5D8C ?? ??????
}
SpinPause is also used by the G1 collector as shown in the callstack below:
I recently wanted an LLVM build for Windows ARM64, so I followed the instructions from my post on Building LLVM for Windows ARM64 – Saint’s Log. Unfortunately, I didn’t indicate which commits I built when I wrote that post, so this time I started at commit [clang-tools-extra] Update Maintainers for Clang-Doc (#175822) · llvm/llvm-project@feeb934a3c4d825cd673f01416c39ca73ede170f. I briefly considered using the release branch instead but plowed ahead on this commit. My host platform was a Windows 11 x64 machine. I had no trouble building the native x64 LLVM configuration. The build command ended with this line, which showed that the install step succeeded: -- Installing: D:/repos/llvm/llvm-project/build_llvm/install_local/cmake/llvm/LLVMConfigExtensions.cmake. Unfortunately, I had a few hiccups with the Windows AArch64 cross-compilation.
Windows AArch64 Build Installation Error
The build failed and this was the tail of the build output:
This was strange. Why was the build trying to write into C:/Program Files (x86)/LLVM for an AArch64 build, especially since the install location was specified on the command line? The head of build_llvm/cmake/modules/cmake_install.cmake at that point is shown below:
# Install script for directory: D:/repos/llvm/llvm-project/llvm/cmake/modules
# Set the install prefix
if(NOT DEFINED CMAKE_INSTALL_PREFIX)
set(CMAKE_INSTALL_PREFIX "D:/repos/llvm/llvm-project/build_llvm/install_local")
endif()
string(REGEX REPLACE "/$" "" CMAKE_INSTALL_PREFIX "${CMAKE_INSTALL_PREFIX}")
The corresponding head of build_llvm_AArch64/cmake/modules/cmake_install.cmake set CMAKE_INSTALL_PREFIX to the problematic path!
# Install script for directory: D:/repos/llvm/llvm-project/llvm/cmake/modules
# Set the install prefix
if(NOT DEFINED CMAKE_INSTALL_PREFIX)
set(CMAKE_INSTALL_PREFIX "C:/Program Files (x86)/LLVM")
endif()
string(REGEX REPLACE "/$" "" CMAKE_INSTALL_PREFIX "${CMAKE_INSTALL_PREFIX}")
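A quick way to compare the prefixes the two build trees will use, without opening each file, is to search the generated install scripts (paths are from my setup):

```shell
:: Print the default install prefix from each generated install script
findstr /C:"set(CMAKE_INSTALL_PREFIX" build_llvm\cmake\modules\cmake_install.cmake build_llvm_AArch64\cmake\modules\cmake_install.cmake
```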
build_llvm/cmake/modules/CMakeFiles/LLVMConfig.cmake contained (among other things) these 3 lines:
The missing LLVM_TARGET_TRIPLE value prompted me to replace -DLLVM_DEFAULT_TARGET_TRIPLE=aarch64-win32-msvc with -DLLVM_TARGET_TRIPLE=aarch64-win32-msvc as follows:
cd llvm-project
mkdir build_llvm_AArch64
cd build_llvm_AArch64
cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 \
-DCMAKE_BUILD_TYPE:STRING=Release \
-DCMAKE_INSTALL_PREFIX=install_local \
-DCMAKE_CROSSCOMPILING=True \
-DLLVM_TARGET_ARCH=AArch64 \
-DLLVM_NM=D:/repos/llvm-project/build_llvm/install_local/bin/llvm-nm.exe \
-DLLVM_TABLEGEN=D:/repos/llvm-project/build_llvm/install_local/bin/llvm-tblgen.exe \
-DLLVM_TARGET_TRIPLE=aarch64-win32-msvc \
-A ARM64 \
-T host=x64
The triple was still empty! That was my clue that something else was wrong and that I needed to take a step back. It turns out I was pasting the bash command into the Developer Command Prompt, where the trailing backslashes are not line continuations, so only the first line of the command actually ran. Running it as a single line was the solution!
cd llvm-project
mkdir build_llvm_AArch64
cd build_llvm_AArch64
cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_INSTALL_PREFIX=install_local -DCMAKE_CROSSCOMPILING=True -DLLVM_TARGET_ARCH=AArch64 -DLLVM_NM=D:/repos/llvm-project/build_llvm/install_local/bin/llvm-nm.exe -DLLVM_TABLEGEN=D:/repos/llvm-project/build_llvm/install_local/bin/llvm-tblgen.exe -DLLVM_DEFAULT_TARGET_TRIPLE=aarch64-win32-msvc -A ARM64 -T host=x64
It was at this point that I noticed a warning I had ignored:
D:\repos\llvm\llvm-project\build_llvm_AArch64> cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 \
CMake Warning:
Ignoring extra path from command line:
"\"
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.26100.0 to target Windows 10.0.26200.
-- The C compiler identification is MSVC 19.44.35222.0
-- The CXX compiler identification is MSVC 19.44.35222.0
-- The ASM compiler identification is MSVC
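The "\" in that warning is the tell: cmd.exe does not treat a trailing backslash as a line continuation the way bash does, so the backslash was passed to CMake as an extra path argument. If a multi-line command is preferred in the Developer Command Prompt, cmd's continuation character is the caret. A sketch (with the flags abbreviated for illustration):

```shell
:: cmd.exe line continuations use ^ where bash uses \ (flags abbreviated here)
cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 ^
    -DCMAKE_BUILD_TYPE:STRING=Release ^
    -A ARM64 -T host=x64
```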
Invalid llvm-tblgen Paths
I could now build with cmake --build . --config Release --target install, but the build failed with lots of MSB8066 errors.
...
2>Building ARMTargetParserDef.inc...
The system cannot find the path specified.
4>Building RISCVTargetParserDef.inc...
The system cannot find the path specified.
The system cannot find the batch label specified - VCEnd
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(237,5): error MSB8066: Custom build for 'D:\repos\llvm\llvm-project\build3_llvm_AArch64\CMakeFiles\4a6077c6d000b51855bb580d5619439e\ARMTargetParserDef.inc.rule' exited with code 1. [D:\repos\llvm\llvm-project\build3_llvm_AArch64\include\llvm\TargetParser\target_parser_gen.vcxproj]
The system cannot find the batch label specified - VCEnd
3>Building PPCGenTargetFeatures.inc...
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(237,5): error MSB8066: Custom build for 'D:\repos\llvm\llvm-project\build3_llvm_AArch64\CMakeFiles\4a6077c6d000b51855bb580d5619439e\RISCVTargetParserDef.inc.rule' exited with code 1. [D:\repos\llvm\llvm-project\build3_llvm_AArch64\include\llvm\TargetParser\target_parser_gen.vcxproj]
The system cannot find the path specified.
The system cannot find the batch label specified - VCEnd
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(237,5): error MSB8066: Custom build for 'D:\repos\llvm\llvm-project\build3_llvm_AArch64\CMakeFiles\4a6077c6d000b51855bb580d5619439e\PPCGenTargetFeatures.inc.rule' exited with code 1. [D:\repos\llvm\llvm-project\build3_llvm_AArch64\include\llvm\TargetParser\target_parser_gen.vcxproj]
...
I tried building the release/13.x branch since LLVM was at major version 13 when I wrote the post on how to do this build. Building that release still failed with the same error, but at least I was now using a tag that’s easy to refer to instead of a random commit.
git checkout release/13.x
After getting the same error, I opened the project in Visual Studio 2022 and built it there to see if I could get a hint about which files were missing:
Seeing where the error was happening (above) made it easier to diagnose. This explanation from Copilot in VS Code was invaluable because it got me to verify whether the table-gen executable path was valid (it wasn’t)!
I updated the build instructions to check the two EXE paths before calling cmake as follows:
mkdir build_llvm_AArch64
cd build_llvm_AArch64
set LLVM_NM=D:\repos\llvm\llvm-project\build_llvm\install_local\bin\llvm-nm.exe
set LLVM_TABLEGEN=D:\repos\llvm\llvm-project\build_llvm\install_local\bin\llvm-tblgen.exe
dir %LLVM_NM%
dir %LLVM_TABLEGEN%
cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_INSTALL_PREFIX=install_local -DCMAKE_CROSSCOMPILING=True -DLLVM_TARGET_ARCH=AArch64 -DLLVM_NM=%LLVM_NM% -DLLVM_TABLEGEN=%LLVM_TABLEGEN% -DLLVM_DEFAULT_TARGET_TRIPLE=aarch64-win32-msvc -A ARM64 -T host=x64
cmake --build . --config Release --target install
I asked Copilot why this linker error was happening. The Claude Opus 4.5 agent stated that “There is no ARM64 version of diaguids.lib provided by Microsoft”.
The DIA SDK aarch64 libraries – Search AI summary also claimed that “Microsoft does not ship prebuilt DIA SDK libraries for AArch64 (ARM64) targets” but this result, Getting Started (Debug Interface Access SDK) – Visual Studio (Windows) | Microsoft Learn, indicated otherwise. I verified that there was an arm64 equivalent path to the DIA SDK on disk and continued with the prompt: “That’s not correct. “C:\Program Files\Microsoft Visual Studio\2022\Enterprise\DIA SDK\lib\arm64\diaguids.lib” exists on disk. Why doesn’t the build use it?”
This explanation made sense. Setting the environment variable before running the first cmake command resolved this issue:
set VSCMD_ARG_TGT_ARCH=arm64
The build then concluded successfully with this line: -- Installing: D:/repos/llvm/llvm-project/build_llvm_AArch64/install_local/lib/cmake/llvm/LLVMConfigExtensions.cmake
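Putting the fixes together, the sequence below is what finally succeeded for me, run from the repository root in the x64 Developer Command Prompt (paths are from my setup; the native x64 build must already be installed so that llvm-nm.exe and llvm-tblgen.exe exist):

```shell
:: Windows AArch64 cross-compilation sequence that worked (Developer Command Prompt)
set VSCMD_ARG_TGT_ARCH=arm64
mkdir build_llvm_AArch64
cd build_llvm_AArch64
set LLVM_NM=D:\repos\llvm\llvm-project\build_llvm\install_local\bin\llvm-nm.exe
set LLVM_TABLEGEN=D:\repos\llvm\llvm-project\build_llvm\install_local\bin\llvm-tblgen.exe
dir %LLVM_NM%
dir %LLVM_TABLEGEN%
cmake ../llvm -DLLVM_TARGETS_TO_BUILD:STRING=AArch64 -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_INSTALL_PREFIX=install_local -DCMAKE_CROSSCOMPILING=True -DLLVM_TARGET_ARCH=AArch64 -DLLVM_NM=%LLVM_NM% -DLLVM_TABLEGEN=%LLVM_TABLEGEN% -DLLVM_DEFAULT_TARGET_TRIPLE=aarch64-win32-msvc -A ARM64 -T host=x64
cmake --build . --config Release --target install
```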
I am investigating a (Windows AArch64) test that passes on jdk17u but fails on jdk21u. I need to bisect to the first commit with a release build that fails this test. Unfortunately, Latest Releases | Adoptium doesn’t have Windows AArch64 builds between jdk17u and jdk21u so I had to build them myself. To find the commits for the intermediate releases, I used this command:
This let me identify the tag I needed on GitHub: openjdk/jdk at jdk-18-ga. I set up the jdk18u build option in my personal build script: Add config options for more jdk versions · swesonga/scratchpad@98ab982 and set up google/googletest at v1.14.0. Unfortunately, I got compilation errors stating that ‘FLAGS_gtest_internal_run_death_test’: is not a member of ‘testing::internal’. There were also errors about an identifier that could not be found in winnt.h. Here’s how I addressed them.
Identifier not found in winnt.h
There were errors in “C:\Program Files (x86)\Windows Kits\10\Include\10.0.26100.0\um\winnt.h” stating that the _CountOneBits64 identifier could not be found:
ERROR: Build failed for target 'images' in configuration 'windows-aarch64-server-release' (exit code 2)
Stopping sjavac server
=== Output from failing command(s) repeated here ===
* For target hotspot_variant-server_libjvm_gtest_launcher-objs_gtestLauncher.obj:
gtestLauncher.cpp
c:\progra~2\wi3cf2~1\10\include\100261~1.0\um\winnt.h(6343): error C3861: '_CountOneBits64': identifier not found
... (rest of output omitted)
* For target hotspot_variant-server_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_pch.obj:
BUILD_GTEST_LIBJVM_pch.cpp
c:\progra~2\wi3cf2~1\10\include\100261~1.0\um\winnt.h(6343): error C3861: '_CountOneBits64': identifier not found
... (rest of output omitted)
* For target hotspot_variant-server_libjvm_libgtest_objs_gtest-all.obj:
gtest-all.cc
c:\progra~2\wi3cf2~1\10\include\100261~1.0\um\winnt.h(6343): error C3861: '_CountOneBits64': identifier not found
... (rest of output omitted)
The Visual Studio 2019 installer showed only the Windows 10 SDK installed:
I set off on a journey of discovery, seeking to learn how the SDK is selected for the build.
VS_ENV_CMD is set to the 64-bit path in the TOOLCHAIN_CHECK_POSSIBLE_VISUAL_STUDIO_ROOT macro in toolchain_microsoft.m4. This path on my machine is “C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsamd64_arm64.bat“. That script contains this line: @call "%~dp0vcvarsall.bat" x64_arm64 %*, which calls the vcvarsall.bat file in the same directory. (“build\windows-aarch64-server-release\configure-support\config.log” doesn’t seem to have any SDK-related output.) vcvarsall.bat in turn calls “C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\VsDevCmd.bat“, and I don’t think it passes the -winsdk argument. VsDevCmd.bat then calls “C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\vsdevcmd\core\winsdk.bat“, which has a GetWin10SdkDir function that is called if the VSCMD_ARG_WINSDK environment variable has not been set. Its GetWin10SdkDirHelper function queries these locations:
HKLM\SOFTWARE\Wow6432Node
HKCU\SOFTWARE\Wow6432Node
HKLM\SOFTWARE
HKCU\SOFTWARE
More specifically, it searches in the Microsoft\Microsoft SDKs\Windows\v10.0 subkey of each of them for the InstallationFolder value. The WindowsSdkDir environment variable is set to the value found here. It would be set to “C:\Program Files (x86)\Windows Kits\10\” on my machine (shown below).
I tried setting the environment variable before configuring the build but that didn’t work:
AC_MSG_NOTICE([Trying to extract Visual Studio environment variables for $TARGET_CPU])
AC_MSG_NOTICE([using $VS_ENV_CMD $VS_ENV_ARGS])
configure: Using default toolchain microsoft (Microsoft Visual Studio)
configure: Found Visual Studio installation at /cygdrive/c/progra~2/micros~3/2019/Enterprise using well-known name
configure: Found Microsoft Visual Studio 2019
configure: Trying to extract Visual Studio environment variables for aarch64
configure: using /cygdrive/c/progra~2/micros~3/2019/Enterprise/vc/auxiliary/build/vcvarsamd64_arm64.bat
configure: Setting extracted environment variables for aarch64
This shows that no arguments are passed to vcvarsamd64_arm64.bat (and therefore to vcvarsall.bat as well). Since vcvarsall.bat has logic that parses 10.* strings into the __VCVARSALL_WINSDK variable (to pass on to VsDevCmd.bat), I realized that I could just specify the SDK version when calling vcvarsamd64_arm64.bat. I used this diff (on commit 0f2113cee79):
diff --git a/make/autoconf/toolchain_microsoft.m4 b/make/autoconf/toolchain_microsoft.m4
index 2600b431cfb..a7d6aaae250 100644
--- a/make/autoconf/toolchain_microsoft.m4
+++ b/make/autoconf/toolchain_microsoft.m4
@@ -349,7 +349,7 @@ AC_DEFUN([TOOLCHAIN_EXTRACT_VISUAL_STUDIO_ENV],
# We can't pass -vcvars_ver=$VCVARS_VER here because cmd.exe eats all '='
# in bat file arguments. :-(
$FIXPATH $CMD /c "$TOPDIR/make/scripts/extract-vs-env.cmd" "$VS_ENV_CMD" \
- "$VS_ENV_TMP_DIR/set-vs-env.sh" $VCVARS_VER $VS_ENV_ARGS \
+ "$VS_ENV_TMP_DIR/set-vs-env.sh" $VCVARS_VER $VS_ENV_ARGS 10.0.22621.0 \
> $VS_ENV_TMP_DIR/extract-vs-env.log | $CAT 2>&1
PATH="$OLDPATH"
This enabled the build to use the SDK version I specified.
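The same trick should work outside the configure script: since vcvarsall.bat parses a bare 10.* argument as the Windows SDK version, the SDK can be pinned when initializing a developer environment by hand. A sketch, with the path from my machine:

```shell
:: Pin the Windows SDK version when setting up the x64-to-ARM64 environment manually
"C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsamd64_arm64.bat" 10.0.22621.0
```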
gtest undeclared identifier Error
The remaining build failures were related to gtests:
ERROR: Build failed for target 'images' in configuration 'windows-aarch64-server-release' (exit code 2)
=== Output from failing command(s) repeated here ===
* For target buildjdk_hotspot_variant-server_libjvm_gtest_objs_gtestMain.obj:
gtestMain.cpp
d:\java\forks\openjdk\jdk\test\hotspot\gtest\gtestMain.cpp(233): error C2039: 'FLAGS_gtest_internal_run_death_test': is not a member of 'testing::internal'
d:\repos\googletest\googlemock\include\gmock/gmock-nice-strict.h(80): note: see declaration of 'testing::internal'
d:\java\forks\openjdk\jdk\test\hotspot\gtest\gtestMain.cpp(233): error C2065: 'FLAGS_gtest_internal_run_death_test': undeclared identifier
... (rest of output omitted)
* For target hotspot_variant-server_libjvm_gtest_objs_gtestMain.obj:
gtestMain.cpp
d:\java\forks\openjdk\jdk\test\hotspot\gtest\gtestMain.cpp(233): error C2039: 'FLAGS_gtest_internal_run_death_test': is not a member of 'testing::internal'
d:\repos\googletest\googlemock\include\gmock/gmock-nice-strict.h(80): note: see declaration of 'testing::internal'
d:\java\forks\openjdk\jdk\test\hotspot\gtest\gtestMain.cpp(233): error C2065: 'FLAGS_gtest_internal_run_death_test': undeclared identifier
... (rest of output omitted)
* All command lines available in /cygdrive/d/java/forks/openjdk/jdk/build/windows-aarch64-server-release/make-support/failure-logs.
=== End of repeated output ===
Before my changes, safefetch.hpp included safefetch_windows.hpp, which uses structured exception handling: the read is done in a __try { } __except block. However, the Windows AArch64 port uses vectored exception handling, so this is not the right approach. I added the !defined(_M_ARM64) check to ensure that safefetch_static.hpp is included instead. This requires us to implement SafeFetch32_impl and SafeFetchN_impl, the same way the Linux and macOS AArch64 implementations do. These functions are declared extern "C" because they are implemented in assembly, specifically in safefetch_windows_aarch64.S. Here’s the implementation of SafeFetchN_impl (written to match the other two AArch64 platforms):
; Support for intptr_t SafeFetchN(intptr_t* address, intptr_t defaultval);
;
; x0 : address
; x1 : defaultval
AREA |.text|, CODE, READONLY ; name is arbitrary; CODE marks the section executable
ALIGN 4
EXPORT _SafeFetchN_fault
EXPORT _SafeFetchN_continuation
EXPORT SafeFetchN_impl
SafeFetchN_impl
_SafeFetchN_fault
ldr x0, [x0]
ret
_SafeFetchN_continuation
mov x0, x1
ret
END
Notice that the function is only four assembly instructions. The ldr instruction tries to dereference the pointer in x0. If the memory access succeeds, the function returns the loaded value. Otherwise, the exception handler is invoked. The exception handling logic checks whether the exception being handled was caused by the safefetch load; this is where the _SafeFetchN_fault label comes into play. If the exception is an EXCEPTION_ACCESS_VIOLATION, we can check whether the PC was at the _SafeFetchN_fault (the ldr) instruction. If so, the exception handler sets the PC in the OS CONTEXT structure to the _SafeFetchN_continuation instruction, then returns EXCEPTION_CONTINUE_EXECUTION to allow execution to resume at the mov instruction, which simply loads x0 with the default value that was passed in x1. The 32-bit safefetch function has an identical structure.
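The handler logic described above can be sketched as pseudocode (this is illustrative, not the actual HotSpot code; only the two labels are from the assembly listing):

```
// Inside the vectored exception handler (pseudocode)
if (exception_code == EXCEPTION_ACCESS_VIOLATION &&
    context->Pc == (uintptr_t)&_SafeFetchN_fault) {
  // Redirect execution to the continuation, which returns the default value
  context->Pc = (uintptr_t)&_SafeFetchN_continuation;
  return EXCEPTION_CONTINUE_EXECUTION;   // resume at "mov x0, x1"
}
```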
A few months ago, I was investigating some exception handling OpenJDK bugs on Windows AArch64. One of the bugs was in the safefetch implementation. I needed to switch part of the implementation to assembly language (similar to the Linux and macosx aarch64 safefetch implementations). Compilation failed after I added the new safefetch_windows_aarch64.S assembly source file. The failing command line was in the .cmdline file when the build terminated:
Command from build\windows-x86_64-server-slowdebug\make-support\failure-logs\support_native_jdk.incubator.vector_libjsvml_jsvml_d_acos_windows_x86.obj.cmdline
/cygdrive/d/java/ms/dups/openjdk-jdk/build/windows-x86_64-server-slowdebug/fixpath exec /cygdrive/c/progra~1/mib055~1/2022/enterprise/vc/tools/msvc/14.44.35207/bin/hostx64/x64/ml64.exe -nologo -c -Ta -Fo/cygdrive/d/java/ms/dups/openjdk-jdk/build/windows-x86_64-server-slowdebug/support/native/jdk.incubator.vector/libjsvml/jsvml_d_acos_windows_x86.obj /cygdrive/d/java/ms/dups/openjdk-jdk/src/jdk.incubator.vector/windows/native/libjsvml/jsvml_d_acos_windows_x86.S
From build\windows-x86_64-server-slowdebug\make-support\failure-logs\support_native_jdk.incubator.vector_libjsvml_jsvml_d_acos_windows_x86.obj.log
Assembling: -Fod:\java\ms\dups\openjdk-jdk\build\windows-x86_64-server-slowdebug\support\native\jdk.incubator.vector\libjsvml\jsvml_d_acos_windows_x86.obj
MASM : fatal error A1000:cannot open file : -Fod:\java\ms\dups\openjdk-jdk\build\windows-x86_64-server-slowdebug\support\native\jdk.incubator.vector\libjsvml\jsvml_d_acos_windows_x86.obj
I just needed a separate else branch to handle setting up armasm64.exe, to avoid passing ml64.exe flags to armasm64.exe. This successfully assembled my AArch64 assembly source file. However, the JVM would terminate with an access violation, which clearly isn’t supposed to happen because the fetch is supposed to be safe, by definition! I asked Copilot when the program counter pointing at the AArch64 instruction mov x0, x1 would result in an access violation. One scenario from its answer:
The Program Counter (PC) is pointing to an invalid address
If the PC is pointing to a location that is not mapped in the process’s address space (e.g., due to corruption, jumping to unmapped memory, or executing data as code), then fetching the instruction itself could trigger an access violation.
Example: If the PC points to a region of memory that has been freed or is protected (e.g., read-only or non-executable), the CPU will raise a fault when trying to fetch or decode the instruction.
This gave me a hint that my assembly instructions were probably not in an executable page! I found the AREA directive details at ARM Compiler armasm Reference Guide Version 6.01. It was tricky that the first AREA argument is a name and could therefore be anything. If I recall correctly, I think the access violation was because I didn’t have the CODE attribute on the AREA. With that fixed, I was able to successfully execute the compiled JVM.
This resulted in this error, which confirmed that it was a valid place to set the flag:
=== Output from failing command(s) repeated here ===
* For target support_native_jdk.incubator.vector_libjsvml_BUILD_LIBJSVML_run_ld:
LINK : fatal error LNK1181: cannot open input file 'd:\java\forks\dups12\openjdk\jdk\build\windows-x86_64-server-slowdebug\support\native\jdk.incubator.vector\libjsvml\jsvml_d_acos_windows_x86.obj'
* For target support_native_jdk.incubator.vector_libjsvml_jsvml_d_acos_windows_x86.obj:
Assembling: sdf
MASM : fatal error A1000:cannot open file : sdf
* For target support_native_jdk.incubator.vector_libjsvml_jsvml_d_asin_windows_x86.obj:
Assembling: sdf
MASM : fatal error A1000:cannot open file : sdf
After Magnus’s feedback on 8/23, I reverted that change and tried this instead:
diff --git a/make/autoconf/flags.m4 b/make/autoconf/flags.m4
index d50538108a4..8ba1a313cb2 100644
--- a/make/autoconf/flags.m4
+++ b/make/autoconf/flags.m4
@@ -320,6 +320,11 @@ AC_DEFUN([FLAGS_SETUP_TOOLCHAIN_CONTROL],
[
if test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
CC_OUT_OPTION=-Fo
+ if test "x$OPENJDK_TARGET_CPU" = xaarch64; then
+ AS_NON_ASM_EXTENSION_FLAG=
+ else
+ AS_NON_ASM_EXTENSION_FLAG=-Tazzz
+ endif
else
# The option used to specify the target .o,.a or .so file.
# When compiling, how to specify the to be created object file.
diff --git a/make/common/native/CompileFile.gmk b/make/common/native/CompileFile.gmk
index 26472da6d02..7f8e8ffeddc 100644
--- a/make/common/native/CompileFile.gmk
+++ b/make/common/native/CompileFile.gmk
@@ -236,7 +236,7 @@ define CreateCompiledNativeFileBody
# For assembler calls just create empty dependency lists
$$(call ExecuteWithLog, $$@, $$(call MakeCommandRelative, \
$$($1_COMPILER) $$($1_FLAGS) \
- $(CC_OUT_OPTION)$$($1_OBJ) $$($1_SRC_FILE))) \
+ $(CC_OUT_OPTION)$$($1_OBJ) $(AS_NON_ASM_EXTENSION_FLAG) $$($1_SRC_FILE))) \
| $(TR) -d '\r' | $(GREP) -v -e "Assembling:" || test "$$$$?" = "1" ; \
$(ECHO) > $$($1_DEPS_FILE) ; \
$(ECHO) > $$($1_DEPS_TARGETS_FILE)
The configure script failed:
Runnable configure script is not present
Generating runnable configure script at /cygdrive/d/java/forks/dups12/openjdk/jdk/build/.configure-support/generated-configure.sh
Using autoconf at /usr/bin/autoconf [autoconf (GNU Autoconf) 2.72]
-:166141: error: possibly undefined macro: AS_NON_ASM_EXTENSION_FLAG
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure: Configuration created at Sat Aug 23 15:11:36 MDT 2025.
On the prompt “recipe commences before first target”, Copilot says:
The error message “recipe commences before first target” in GNU Make typically means that there’s a line in your Makefile that starts with a tab (indicating a recipe), but it appears before any target has been defined. In Makefiles, recipes (commands to execute) must follow a target and its dependencies.
I moved the check further up, which fixed the build. After the PR was closed, I got a comment about the quotes I introduced! They shouldn’t be there :(.
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
# in lieu of restarting the shell
\. "$HOME/.nvm/nvm.sh"
# Download and install Node.js:
nvm install 22
# Verify the Node.js version:
node -v # Should print "v22.20.0".
# Verify npm version:
npm -v # Should print "10.9.3".
Installing Node.js on Windows
Writing my Introduction to Tauri post required me to install Node.js on my Windows (x64) desktop. Interestingly, the node and npm commands could not be found even though "C:\Program Files\nodejs\node" -v output v22.16.0. Installing the latest MSI fixed this: running node -v from any directory now outputs v22.20.0.
npm warn deprecated npmlog@2.0.4: This package is no longer supported.
npm warn deprecated gauge@1.2.7: This package is no longer supported.
npm warn deprecated are-we-there-yet@1.1.7: This package is no longer supported.
npm warn deprecated boolean@3.2.0: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
added 941 packages in 24s
151 packages are looking for funding
run `npm fund` for details
npm notice
npm notice New major version of npm available! 10.9.3 -> 11.6.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v11.6.2
npm notice To update run: npm install -g npm@11.6.2
npm notice
I then updated npm before proceeding:
npm install -g npm@11.6.2
added 1 package in 4s
28 packages are looking for funding
run `npm fund` for details
Creating my First VS Code Extension
My initial idea was an extension that creates a named branch and a file in a specific subdirectory of the repo. I usually need to do this in OpenJDK development, so it seemed like a good idea for a time-saving extension. Naming is hard though, so I went with the name “createfileinbranch” for my extension.
C:\repos\createfileinbranch>yo code
_-----_ ╭──────────────────────────╮
| | │ Welcome to the Visual │
|--(o)--| │ Studio Code Extension │
`---------´ │ generator! │
( _´U`_ ) ╰──────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
? What type of extension do you want to create? (Use arrow keys)
> New Extension (TypeScript)
New Extension (JavaScript)
New Color Theme
New Language Support
New Code Snippets
New Keymap
New Extension Pack
New Language Pack (Localization)
New Web Extension (TypeScript)
New Notebook Renderer (TypeScript)
? What type of extension do you want to create? New Extension (TypeScript)
? What's the name of your extension? fileforbranch
? What's the identifier of your extension? fileforbranch
? What's the description of your extension? Create a file from a template in a given branch
? Initialize a git repository? Yes
? Which bundler to use? (Use arrow keys)
> unbundled
webpack
esbuild
The package manager choices were npm, yarn, or pnpm. I selected npm.
C:\repos\createfileinbranch>yo code
_-----_ ╭──────────────────────────╮
| | │ Welcome to the Visual │
|--(o)--| │ Studio Code Extension │
`---------´ │ generator! │
( _´U`_ ) ╰──────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
? What type of extension do you want to create? New Extension (TypeScript)
? What's the name of your extension? fileforbranch
? What's the identifier of your extension? fileforbranch
? What's the description of your extension? Create a file from a template in a given branch
? Initialize a git repository? Yes
? Which bundler to use? unbundled
? Which package manager to use? npm
Writing in C:\repos\createfileinbranch\fileforbranch...
create fileforbranch\.vscode\extensions.json
create fileforbranch\.vscode\launch.json
create fileforbranch\.vscode\settings.json
create fileforbranch\.vscode\tasks.json
create fileforbranch\package.json
create fileforbranch\tsconfig.json
create fileforbranch\.vscodeignore
create fileforbranch\vsc-extension-quickstart.md
create fileforbranch\.gitignore
create fileforbranch\README.md
create fileforbranch\CHANGELOG.md
create fileforbranch\src\extension.ts
create fileforbranch\src\test\extension.test.ts
create fileforbranch\.vscode-test.mjs
create fileforbranch\eslint.config.mjs
Changes to package.json were detected.
Running npm install for you to install the required dependencies.
npm warn deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
added 260 packages, and audited 261 packages in 9s
74 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Your extension fileforbranch has been created!
To start editing with Visual Studio Code, use the following commands:
code fileforbranch
Open vsc-extension-quickstart.md inside the new extension for further instructions
on how to modify, test and publish your extension.
For more information, also visit http://code.visualstudio.com and follow us @code.
? Do you want to open the new folder with Visual Studio Code? (Use arrow keys)
> Open with `code`
Skip
VS Code asked if I wanted to install the recommended extensions for this project. I agreed, and the suggestions were ESLint (the pluggable JavaScript linter) and the Extension Test Runner. They were shown in the Extensions tab with “@recommended” in the search box. I could successfully launch the extension with F5 and see the message it displayed in VS Code.
The Open in GitHub Extension
I noticed, though, that something I do a lot is find files on GitHub by copying their relative path and pasting it into the search box on the GitHub website. Why can’t I just open that URL automatically? This seemed like a more impactful extension to build! I initially used the fileforbranch extension to experiment with this idea.
I decided to use Copilot to see if I could move things along. I was quite surprised to find that the agent implemented a fully functional extension! Granted, it’s not complex, but it was still surprising that it just worked! The first prompt I used when I opened the extension folder in VS Code was “How to detect if opened folder is in a Git repo”.
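For comparison, here is my own sketch of one way to answer that question (not the agent's response): shell out to git and check its answer. The function name and error handling below are mine.

```typescript
import { execFileSync } from "node:child_process";

// Sketch: true when `folder` is inside a Git working tree.
// `git rev-parse --is-inside-work-tree` prints "true" inside a repo
// and exits nonzero (throwing here) outside one.
export function isInGitRepo(folder: string): boolean {
  try {
    const out = execFileSync(
      "git",
      ["rev-parse", "--is-inside-work-tree"],
      { cwd: folder, stdio: ["ignore", "pipe", "ignore"] }
    );
    return out.toString().trim() === "true";
  } catch {
    return false; // not a repo, or git is not installed
  }
}
```

An extension would call this with the workspace folder path before enabling any Git-dependent commands.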
I returned to this task a few days later and told the agent that I would like to “Add this function directly into my workspace”.
Scrolling through the HTML revealed the paragraphs explaining that no server is available. Seems like the agent should handle this type of failure more gracefully.
The operation succeeded after I retried (I must have been logged out):
It was at this point that I decided to ask the agent to implement the whole thing using this prompt:
Add menu command called "Open in GitHub" when user right clicks on a file
I ran the extension and was pleased to see that it worked perfectly! It even included the current commit to give a stable link!
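It's worth sketching what such an implementation boils down to. The following is my approximation of the two core steps (convert the Git remote to a web URL, then build the blob URL for the file), not the agent's actual code; the function names are mine.

```typescript
// Convert a Git remote (SSH or HTTPS form) into a GitHub web URL, e.g.
//   git@github.com:owner/repo.git     -> https://github.com/owner/repo
//   https://github.com/owner/repo.git -> https://github.com/owner/repo
export function remoteToWebUrl(remote: string): string {
  const ssh = remote.match(/^git@([^:]+):(.+?)(\.git)?$/);
  if (ssh) {
    return `https://${ssh[1]}/${ssh[2]}`;
  }
  return remote.replace(/\.git$/, "");
}

// Build the blob URL for a file at a given ref (branch name or commit SHA).
// Windows path separators are normalized; each segment is URL-encoded.
// (Branch names containing '/' are not special-cased in this sketch.)
export function fileUrl(webUrl: string, ref: string, relPath: string): string {
  const urlPath = relPath.split(/[\\/]/).map(encodeURIComponent).join("/");
  return `${webUrl}/blob/${ref}/${urlPath}`;
}
```

Opening the result from an extension is then a single call: `vscode.env.openExternal(vscode.Uri.parse(url))`.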
Creating the Extension in a New Repo
I had used the agent in the first extension I created, but I wanted a separate repo to house my (potential) extension collection. I created my swesonga/vscode-extensions repo for this using the GitHub UI, then cloned it and created a local folder for the extension.
I initialized the extension using the steps at the beginning of this post. This is the log from the initialization of the new extension. Notice that I selected “No” when asked to initialize a Git repository, since I was already inside the repo I had created on GitHub. The GitHub UI didn’t offer a TypeScript or JavaScript .gitignore template, though, so the repo had no .gitignore file. The first thing I did after the extension was created was git add . to add all the files to my repo. Thousands of files were added, however, which led me to this question: node.js – Should the “node_modules” folder be included in the Git repository? – Stack Overflow. I decided to leave them in the repo for now.
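In hindsight, even a minimal .gitignore would have avoided the noise. A sketch (the yo code template ships its own, more complete version):

```
node_modules/
out/
*.vsix
```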
I then used the same prompt to create the extension: Add a menu command called “Open in GitHub” when user right clicks on a file. The image below shows the various steps the agent took.
Claude Sonnet 4’s Summary
I’ve successfully added an “Open in GitHub” context menu command to your VS Code extension. Here’s what was implemented:
Changes Made:
Updated package.json:
Added a new command openingithub.openInGitHub with the title “Open in GitHub”
Added menu contributions for the explorer context menu that appears when right-clicking on files (not folders)
Enhanced extension.ts:
Added the command handler for the “Open in GitHub” functionality
Implemented getGitHubUrl() function that:
Gets the Git remote URL for the current workspace
Converts Git URLs (both HTTPS and SSH) to GitHub web URLs
Gets the current branch name
Constructs the full GitHub URL for the specific file
Added error handling for various scenarios
Made the command work both from explorer context menu and when no URI is provided
Features:
Right-click context menu: Users can right-click on any file in the Explorer and select “Open in GitHub”
Git integration: Automatically detects the GitHub repository URL from Git remotes
Branch awareness: Opens the file on the current Git branch
Error handling: Provides helpful error messages when GitHub URL cannot be determined
Cross-platform: Works with both SSH and HTTPS Git remotes
How it works:
When a user right-clicks on a file and selects “Open in GitHub”
The extension determines the workspace folder and Git repository
It executes Git commands to get the remote URL and current branch
Converts the Git URL to a GitHub web URL
Constructs the full GitHub file URL with the correct branch and file path
Opens the URL in the default browser
The extension will now show “Open in GitHub” in the context menu when right-clicking on files in the VS Code Explorer, and it will open the corresponding GitHub page for that file in the browser.
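The package.json changes in that summary correspond to the standard contribution shape for explorer context menus. This fragment is my reconstruction of that shape rather than the generated file; the when clause is what keeps the command off folders:

```json
{
  "contributes": {
    "commands": [
      {
        "command": "openingithub.openInGitHub",
        "title": "Open in GitHub"
      }
    ],
    "menus": {
      "explorer/context": [
        {
          "command": "openingithub.openInGitHub",
          "when": "!explorerResourceIsFolder"
        }
      ]
    }
  }
}
```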
Committing Claude Sonnet 4’s Changes
Looking at the changed files in the Source Control view, I noticed files in the out/ subdirectory that I hadn’t seen in the fileforbranch extension. This was when I noticed that the fileforbranch extension had a .gitignore file. I therefore decided to remove the node_modules folder from the repo: I ran git rm -r openingithub/node_modules and then committed that change. Next, I just needed to rebase with a fixup to get those files out of the first commit. I tried rebasing with HEAD~5 because there were now 5 commits in the repo.
$ git rebase -i HEAD~5
fatal: invalid upstream 'HEAD~5'
Of course, with only 5 commits in the repo, HEAD~5 is the (nonexistent) parent of the root commit, hence the error; git rebase -i --root is the way to include the first commit. With the node_modules folder deleted, I needed to run npm install again before I could run the extension.
PS C:\repos\vscode-extensions\openingithub> npm install
npm warn deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
added 260 packages, and audited 261 packages in 2s
74 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Opening URLs at the Current Commit
The first iteration of this implementation (in the fileforbranch extension) had the handy feature of including the current commit in the URL that was opened. I like this feature so I added it using this prompt:
Add another context menu command to open the URL at the current commit
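A commit-pinned URL reuses the same scheme with the SHA in place of the branch name. A minimal sketch (mine, not the agent's code); the caller would obtain the SHA by running git rev-parse HEAD in the repo:

```typescript
// Sketch: a blob URL pinned to a commit stays stable even after the branch
// moves or is deleted. `sha` would come from `git rev-parse HEAD`.
export function fileUrlAtCommit(webUrl: string, sha: string, relPath: string): string {
  const urlPath = relPath.split(/[\\/]/).map(encodeURIComponent).join("/");
  return `${webUrl}/blob/${sha}/${urlPath}`;
}
```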
The final question was how to install this extension on another computer. The Publishing Extensions | Visual Studio Code Extension API page explains how to package the extension into a .vsix file and install it on another computer.
$ npm install -g @vscode/vsce
added 319 packages in 13s
88 packages are looking for funding
run `npm fund` for details
$ which vsce
/c/Users/saint/AppData/Roaming/npm/vsce
$ vsce package
Executing prepublish script 'npm run vscode:prepublish'...
> openingithub@0.0.1 vscode:prepublish
> npm run compile
> openingithub@0.0.1 compile
> tsc -p ./
ERROR It seems the README.md still contains template text. Make sure to edit the README.md file before you package or publish your extension.
I was surprised that I couldn’t package the extension because the README file still contained template text. Neat check! After stripping the README down, I was able to create a .vsix file and install it on another computer.
Executing prepublish script 'npm run vscode:prepublish'...
> openingithub@0.0.1 vscode:prepublish
> npm run compile
> openingithub@0.0.1 compile
> tsc -p ./
WARNING A 'repository' field is missing from the 'package.json' manifest file.
Use --allow-missing-repository to bypass.
Do you want to continue? [y/N] y
WARNING LICENSE, LICENSE.md, or LICENSE.txt not found
Do you want to continue? [y/N] y
INFO Files included in the VSIX:
openingithub-0.0.1.vsix
├─ [Content_Types].xml
├─ extension.vsixmanifest
└─ extension/
├─ changelog.md [0.23 KB]
├─ initial-setup-output.txt [2.48 KB]
├─ package.json [1.54 KB]
├─ readme.md [0.25 KB]
└─ out/
├─ extension.js [8.22 KB]
└─ test/
└─ extension.test.js [1.94 KB]
DONE Packaged: C:\repos\vscode-extensions\openingithub\openingithub-0.0.1.vsix (8 files, 6.67 KB)
Key Takeaways
These AI agents are fantastic! I went from never having written an extension to having a practical “Hello World” extension for opening files in their GitHub repos! Reading the agent’s code gave me some insight into the structure of an extension, and using the agent drastically cut down the time it would have taken me to write this extension.
The test fails with the output below. The key line is Native frames: <unavailable>.
STDOUT:
Command line: [C:\java\forks\openjdk\jdk\build\windows-aarch64-server-slowdebug\images\jdk\bin\java.exe -cp C:\java\forks\openjdk\jdk\JTwork\classes\runtime\jni\nativeStack\TestNativeStack.d;C:\java\forks\openjdk\jdk\test\hotspot\jtreg\runtime\jni\nativeStack;C:\java\forks\openjdk\jdk\JTwork\classes\test\lib;C:\java\binaries\jtreg\jtreg-7.5.2\lib\javatest.jar;C:\java\binaries\jtreg\jtreg-7.5.2\lib\jtreg.jar;C:\java\binaries\jtreg\jtreg-7.5.2\lib\junit-platform-console-standalone-1.11.0.jar;C:\java\binaries\jtreg\jtreg-7.5.2\lib\testng-7.3.0.jar;C:\java\binaries\jtreg\jtreg-7.5.2\lib\guice-5.1.0.jar;C:\java\binaries\jtreg\jtreg-7.5.2\lib\jcommander-1.82.jar -Xmx512m -Xcheck:jni -Djava.library.path=C:\java\forks\openjdk\jdk\build\windows-aarch64-server-slowdebug\support\test\hotspot\jtreg\native\lib TestNativeStack$Main ]
[2025-09-04T15:54:18.312130500Z] Gathering output for process 22380
[2025-09-04T15:54:18.753851700Z] Waiting for completion for process 22380
[2025-09-04T15:54:18.756861300Z] Waiting for completion finished for process 22380
Output and diagnostic info for process 22380 was saved into 'pid-22380-output.log'
STDERR:
stdout: [Triggering a JNI warning
WARNING in native method: JNI call made without checking exceptions when required to from CallStaticObjectMethod
Native frames: <unavailable>
];
stderr: [WARNING: A restricted method in java.lang.System has been called
WARNING: java.lang.System::loadLibrary has been called by TestNativeStack in an unnamed module (file:/C:/java/forks/openjdk/jdk/JTwork/classes/runtime/jni/nativeStack/TestNativeStack.d/)
WARNING: Use --enable-native-access=ALL-UNNAMED to avoid a warning for callers in this module
WARNING: Restricted methods will be blocked in a future release unless native access is enabled
]
exitValue = -2147483645
java.lang.RuntimeException: Expected to get exit value of [0], exit value is: [-2147483645]
at jdk.test.lib.process.OutputAnalyzer.shouldHaveExitValue(OutputAnalyzer.java:522)
at TestNativeStack.main(TestNativeStack.java:57)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:565)
at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:335)
at java.base/java.lang.Thread.run(Thread.java:1474)
I added a DebugBreak call to the line outputting that message to see how we got there:
KernelBase.dll!...DebugBreak() Line 2582 C++
jvm.dll!NativeStackPrinter::print_stack_from_frame(outputStream * st, frame fr, char * buf, int buf_size, bool print_source_info, int max_frames) Line 80 C++
jvm.dll!NativeStackPrinter::print_stack_from_frame(outputStream * st, char * buf, int buf_size, bool print_source_info, int max_frames) Line 104 C++
jvm.dll!NativeStackPrinter::print_stack(outputStream * st, char * buf, int buf_size, unsigned char * & lastpc, bool print_source_info, int max_frames) Line 40 C++
jvm.dll!JavaThread::print_jni_stack() Line 1775 C++
jvm.dll!check_pending_exception(JavaThread * thr) Line 192 C++
jvm.dll!functionEnter(JavaThread * thr) Line 218 C++
jvm.dll!checked_jni_CallStaticObjectMethod(JNIEnv_ * env, _jclass * clazz, _jmethodID * methodID, ...) Line 1321 C++
nativeStack.dll!generateWarning(const JNINativeInterface_ * * env) Line 66 C
nativeStack.dll!thread_start(void * unused) Line 92 C
ucrtbase.dll!00007ffb0970b028() Unknown
kernel32.dll!00007ffb0bbd8740() Unknown
ntdll.dll!RtlUserThreadStart(long(*)(void *) StartAddress, void * Argument) Line 1184 C
Triggering a JNI warning
WARNING: A restricted method in java.lang.System has been called
WARNING: java.lang.System::loadLibrary has been called by TestNativeStack in an unnamed module (file:/C:/java/forks/openjdk/jdk/JTwork/classes/runtime/jni/nativeStack/TestNativeStack.d/)
WARNING: Use --enable-native-access=ALL-UNNAMED to avoid a warning for callers in this module
WARNING: Restricted methods will be blocked in a future release unless native access is enabled
Native thread is running and attaching as daemon ...
About to trigger JNI Warning
WARNING in native method: JNI call made without checking exceptions when required to from CallStaticObjectMethod
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [jvm.dll+0x10e8aa8] os::win32::platform_print_native_stack+0x58 (os_windows_aarch64.cpp:143)
V [jvm.dll+0x10598ac] os::platform_print_native_stack+0x34 (os_windows_aarch64.inline.hpp:38)
V [jvm.dll+0x1059588] NativeStackPrinter::print_stack+0x48 (nativeStackPrinter.cpp:35)
V [jvm.dll+0xba74a0] JavaThread::print_jni_stack+0x120 (javaThread.cpp:1775)
V [jvm.dll+0xcb0e84] check_pending_exception+0x84 (jniCheck.cpp:192)
V [jvm.dll+0xcb0f24] functionEnter+0x4c (jniCheck.cpp:218)
V [jvm.dll+0xcbbb70] checked_jni_CallStaticObjectMethod+0xf0 (jniCheck.cpp:1321)
C [nativeStack.dll+0x1264] generateWarning+0x13c (libnativeStack.c:66)
C [nativeStack.dll+0x1364] thread_start+0xa4 (libnativeStack.c:92)
C [ucrtbase.dll+0x2b028] (no source info available)
C [KERNEL32.DLL+0x8740] (no source info available)
C [ntdll.dll+0xd47a4] (no source info available)
Native thread terminating
C:\java\forks\openjdk\jdk\build\windows-aarch64-server-slowdebug\images\jdk\bin\java.exe (process 30308) exited with code 0 (0x0).
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .
Last month, I investigated OpenJDK gtest failures on Windows. The error message was that the gtests “Caught std::exception-derived exception escaping the death test statement”. I tracked the commit responsible for the failures down to 8343756: CAN_SHOW_REGISTERS_ON_ASSERT for Windows · openjdk/jdk@0054bbe.
// Asserts that a given `statement` causes the program to exit, with an
// integer exit status that satisfies `predicate`, and emitting error output
// that matches `matcher`.
#define ASSERT_EXIT(statement, predicate, matcher) \
GTEST_DEATH_TEST_(statement, predicate, matcher, GTEST_FATAL_FAILURE_)
I was trying to confirm my understanding that the exit code must be an exact match. The line EXPECT_EXIT(_exit(1), testing::ExitedWithCode(1), ""); from googletest/googletest/test/googletest-death-test-test.cc at v1.14.0 · google/googletest supports this hypothesis. The EXPECT_EXIT macro comment (below) left me wondering how ASSERT_EXIT avoids continuing on to successive tests. The difference between the two macros is in the final parameter, which is GTEST_NONFATAL_FAILURE_ for the EXPECT_EXIT macro.
// Like `ASSERT_EXIT`, but continues on to successive tests in the
// test suite, if any:
#define EXPECT_EXIT(statement, predicate, matcher) \
GTEST_DEATH_TEST_(statement, predicate, matcher, GTEST_NONFATAL_FAILURE_)
The death test machinery creates a child process with the same executable as the current process to run the death test. The child process is given the --gtest_filter and --gtest_internal_run_death_test flags so that it knows to run only the current death test.
The question now became: why were we catching this std::exception? I asked Copilot “how does a windows access violation turn into a std::exception?” Part of its answer mentioned the /EHsc compiler flag, so I decided to examine the flags used to compile the JVM binaries. I searched the build logs for the regex out:[^\s]+jvm.dll and found the jvm.dll linker command. Note that two separate jvm.dll files get built: one for the product and another for the gtests. The /IMPLIB (Name Import Library) | Microsoft Learn flag was present but didn’t look relevant.
checking for apk... [not found]
checking for pandoc... [not found]
/cygdrive/d/java/forks/openjdk/jdk/build/.configure-support/generated-configure.sh: line 64028: syntax error: unexpected end of file
configure exiting with result code 2
That file appeared to be truncated! VS Code was doing something related to building the Java projects in the repo, and it is possible that it interrupted the configure step, but I just removed the build folder and then reexamined the change.
My include path didn’t appear in the include paths for gtestLauncher.obj! I searched the repo for googlemock and the only place that path could be coming from was CompileGtest.gmk. However, I then noticed that the gtest launcher has its own configuration section. Sheesh. Here is the diff that I used to definitively see how these includes work:
gtestLauncher.exe was now being compiled with -EHsc but the gtests still failed. Since jvm.dll is compiled without -EHsc, I added it to see if the test behavior would change. I started by searching for libjvm in the codebase. This is the additional change I made:
At this point, I looked at the exception handler, and it appeared that returning EXCEPTION_CONTINUE_EXECUTION let the SEH handler in the gtests continue instead of running the code down the report_vm_error path! I decided to create my own handler but needed to look up the syntax, for which I used Structured Exception Handling (C/C++) | Microsoft Learn.
diff --git a/src/hotspot/share/utilities/debug.hpp b/src/hotspot/share/utilities/debug.hpp
index 12724153659..e40c16c1c59 100644
--- a/src/hotspot/share/utilities/debug.hpp
+++ b/src/hotspot/share/utilities/debug.hpp
@@ -39,7 +39,21 @@ class oopDesc;
#define CAN_SHOW_REGISTERS_ON_ASSERT
extern char* g_assert_poison;
extern const char* g_assert_poison_read_only;
+#if (defined(_WINDOWS))
+// We use structured exception handling when writing to the poison variable.
+// This allows us to continue execution and perform error reporting instead of
+// bailing out to other SEH handlers such as those in the googletest code.
+#include <excpt.h>
+#define TOUCH_ASSERT_POISON \
+do { \
+ __try { \
+ (*g_assert_poison) = 'X'; \
+ } __except (EXCEPTION_CONTINUE_EXECUTION) { \
+ } \
+} while (0)
+#else
#define TOUCH_ASSERT_POISON (*g_assert_poison) = 'X';
+#endif // _WINDOWS
void initialize_assert_poison();
void disarm_assert_poison();
bool handle_assert_poison_fault(const void* ucVoid);
...\jdk\src\hotspot\share\utilities/growableArray.hpp(81): error C2712: Cannot use __try in functions that require object unwinding
...\jdk\src\hotspot\share\classfile/vmClassID.hpp(41): error C3615: constexpr function 'EnumeratorRangeImpl::end_value' cannot result in a constant expression
...\jdk\src\hotspot\share\utilities/enumIterator.hpp(97): note: failure was caused by a statement or an expression that is not valid in a constexpr context
...\jdk\src\hotspot\share\utilities/unsigned5.hpp(190): error C3615: constexpr function 'UNSIGNED5::max_encoded_in_length' cannot result in a constant expression
...\jdk\src\hotspot\share\utilities/unsigned5.hpp(191): note: failure was caused by a statement or an expression that is not valid in a constexpr context
...\jdk\src\hotspot\cpu\x86\register_x86.hpp(61): error C3615: constexpr function 'Register::RegisterImpl::encoding' cannot result in a constant expression
...\jdk\src\hotspot\cpu\x86\register_x86.hpp(61): note: failure was caused by a statement or an expression that is not valid in a constexpr context
...\jdk\src\hotspot\cpu\x86\register_x86.hpp(233): error C3615: constexpr function 'XMMRegister::XMMRegisterImpl::encoding' cannot result in a constant expression
...\jdk\src\hotspot\cpu\x86\register_x86.hpp(233): note: failure was caused by a statement or an expression that is not valid in a constexpr context
...\jdk\src\hotspot\share\runtime/park.hpp(131): error C2712: Cannot use __try in functions that require object unwinding
...\jdk\src\hotspot\share\runtime/mutexLocker.hpp(235): error C2712: Cannot use __try in functions that require object unwinding
...\jdk\src\hotspot\share\runtime/mutexLocker.hpp(240): error C2712: Cannot use __try in functions that require object unwinding
...\jdk\src\hotspot\share\runtime/mutexLocker.hpp(250): error C2712: Cannot use __try in functions that require object unwinding
...\jdk\src\hotspot\share\runtime/mutexLocker.hpp(255): error C2712: Cannot use __try in functions that require object unwinding
At this point, I realized that I needed to disable SEH at the gtest level. I turned off GTEST_HAS_SEH with this change and finally got the gtests to pass!
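The diff itself isn't reproduced here, but the mechanism is simple: googletest's port header only defines GTEST_HAS_SEH itself when the build hasn't already supplied a value, so passing a definition on the compiler command line for the gtest objects disables the SEH wrappers. A sketch of the idea (the exact .gmk location is the part my change had to get right):

```
-DGTEST_HAS_SEH=0
```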
What I was not sure of was whether the JVM reporting code was running (vs. the JVM just exiting), and whether there was a narrower way to pass the GTEST_HAS_SEH define. I noticed it on thousands of lines in the compilation log, which might also explain why I got error C2712: Cannot use __try in functions that require object unwinding in many more places than I expected when I added the -EHsc flag when compiling jvm.obj. It therefore seemed logical to find the minimal diff that would fix the gtests. Here’s one I tried:
This manual approach of finding the minimal change needed was tedious so I decided to add my own defines to see which portions of the gmk files are used and for which compile/link commands:
For location 2, there were 214 lines matching the regex DMYTEST_LOCATION2.+.cpp and 213 lines matching the regex DMYTEST_LOCATION2.+/test/hotspot/gtest/.+.cpp. The location 2 define was therefore correctly scoped to the gtests only. These 213 lines compiled files like test_blocktree.cpp and test_vmerror.cpp. The line that differed between the two regexes was the one compiling build/windows-x86_64-server-slowdebug/hotspot/variant-server/libjvm/gtest/objs/BUILD_GTEST_LIBJVM_pch.cpp. Location 3 was only used for compiling test/hotspot/gtest/gtestLauncher.cpp. The challenging case was location 0, which seemed to appear for far more files than it should have. Was it really necessary? No, it wasn’t! That made life much easier for me.
Inspecting gtest Code Coverage
In the course of this investigation, I considered using time travel debugging to see which code paths were executed. An alternative was to see whether the exception filter code was covered at the end of the gtest execution! The path to the code coverage tool is C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\IDE\Extensions\Microsoft\CodeCoverage.Console\Microsoft.CodeCoverage.Console.exe – it should be in the path by default in the Developer Command Prompt. I kicked it off with these commands:
Search for “exceptionfilter” in the Code Coverage Results pane to view the code coverage for the exception filter.
Verifying Execution Path
The first time I paused execution of the gtests in the debugger, execution stopped in jdk/src/hotspot/share/utilities/bitMap.cpp, so I set a breakpoint there. I liked this location because I could move execution into the failure path (in the assembly view). This was how I saw the gtest structured exception handler kicking in without the JVM’s failure reporting code executing. With the tests now passing, I found the write to the poison location just going through without interruption. Did this mean the test was broken? Or did it mean that the exception filter ran and successfully said to continue execution? I think it has to be the latter, but I’ll need time travel debugging to verify this. In the meantime, I sought to at least ensure there were multiple test processes involved.
Verifying Multiple Processes are Started
I started Process Monitor and added a filter for paths containing “slowdebug”. Notice the many PIDs for gtestLauncher.exe in the Process Monitor screenshot below (as expected).
I could successfully execute the error handling path by manually moving the program counter (RIP) after skipping into the failure path of BitMap::verify_range. Why didn’t the PID change in the debugger? Oh wait, was I still stepping, thereby causing recursion? This call stack did not support that hypothesis. It looks like it was just error reporting continuing to execute.
> jvm.dll!BitMap::verify_range(unsigned __int64 beg, unsigned __int64 end) Line 212 C++
jvm.dll!BitMap::clear_range(unsigned __int64 beg, unsigned __int64 end) Line 280 C++
jvm.dll!JVMFlag::printFlags(outputStream * out, bool withComments, bool printRanges, bool skipDefaults) Line 706 C++
jvm.dll!VMError::report(outputStream * st, bool _verbose) Line 1260 C++
jvm.dll!VMError::report_and_die(int id, const char * message, const char * detail_fmt, char * detail_args, Thread * thread, unsigned char * pc, const void * siginfo, const void * context, const char * filename, int lineno, unsigned __int64 size) Line 1847 C++
jvm.dll!report_vm_error(const char * file, int line, const char * error_msg, const char * detail_fmt, ...) Line 195 C++
jvm.dll!CompressedKlassPointers::check_init<int>(int var) Line 154 C++
jvm.dll!CompressedKlassPointers::shift() Line 218 C++
jvm.dll!CompressedKlassPointers::print_mode(outputStream * st) Line 301 C++
jvm.dll!VMError::report(outputStream * st, bool _verbose) Line 1196 C++
jvm.dll!VMError::report_and_die(int id, const char * message, const char * detail_fmt, char * detail_args, Thread * thread, unsigned char * pc, const void * siginfo, const void * context, const char * filename, int lineno, unsigned __int64 size) Line 1847 C++
jvm.dll!report_vm_error(const char * file, int line, const char * error_msg, const char * detail_fmt, ...) Line 195 C++
jvm.dll!BitMap::verify_limit(unsigned __int64 bit) Line 206 C++
jvm.dll!BitMap::to_words_align_down(unsigned __int64 bit) Line 94 C++
jvm.dll!BitMap::word_addr(unsigned __int64 bit) Line 144 C++
jvm.dll!BitMap::set_bit(unsigned __int64 bit) Line 37 C++
jvm.dll!JfrEventVerifier::set_field_bit(unsigned __int64 field_idx) Line 41 C++
jvm.dll!JfrEvent<EventTenuringDistribution>::set_field_bit(unsigned __int64 field_idx) Line 267 C++
jvm.dll!EventObjectAllocationOutsideTLAB::set_objectClass(const Klass * new_value) Line 7304 C++
jvm.dll!trace_flag_changed<bool,EventBooleanFlagChanged>(JVMFlag * flag, const bool old_value, const bool new_value, const JVMFlagOrigin origin) Line 39 C++
jvm.dll!TypedFlagAccessImpl<bool,EventBooleanFlagChanged>::check_constraint_and_set(JVMFlag * flag, void * value_addr, JVMFlagOrigin origin, bool verbose) Line 78 C++
jvm.dll!FlagAccessImpl_bool::set_impl(JVMFlag * flag, void * value_addr, JVMFlagOrigin origin) Line 98 C++
jvm.dll!FlagAccessImpl::set(JVMFlag * flag, void * value, JVMFlagOrigin origin) Line 49 C++
jvm.dll!JVMFlagAccess::set_impl(JVMFlag * flag, void * value, JVMFlagOrigin origin) Line 307 C++
jvm.dll!JVMFlagAccess::set_or_assert(JVMFlagsEnum flag_enum, int type_enum, void * value, JVMFlagOrigin origin) Line 353 C++
jvm.dll!JVMFlagAccess::set<bool,0>(JVMFlagsEnum flag_enum, bool value, JVMFlagOrigin origin) Line 101 C++
jvm.dll!Flag_UseLargePagesIndividualAllocation_set(bool value, JVMFlagOrigin origin) Line 69 C++
jvm.dll!os::init() Line 4436 C++
jvm.dll!Threads::create_vm(JavaVMInitArgs * args, bool * canTryAgain) Line 463 C++
jvm.dll!JNI_CreateJavaVM_inner(JavaVM_ * * vm, void * * penv, void * args) Line 3589 C++
jvm.dll!JNI_CreateJavaVM(JavaVM_ * * vm, void * * penv, void * args) Line 3680 C++
jvm.dll!init_jvm(int argc, char * * argv, bool disable_error_handling, JavaVM_ * * jvm_ptr) Line 94 C++
jvm.dll!JVMInitializerListener::OnTestStart(const testing::TestInfo & test_info) Line 124 C++
jvm.dll!testing::internal::TestEventRepeater::OnTestStart(const testing::TestInfo & parameter) Line 3858 C++
jvm.dll!testing::TestInfo::Run() Line 2821 C++
jvm.dll!testing::TestSuite::Run() Line 3015 C++
jvm.dll!testing::internal::UnitTestImpl::RunAllTests() Line 5920 C++
jvm.dll!testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl,bool>(testing::internal::UnitTestImpl * object, bool(testing::internal::UnitTestImpl::*)() method, const char * location) Line 2614 C++
jvm.dll!testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl,bool>(testing::internal::UnitTestImpl * object, bool(testing::internal::UnitTestImpl::*)() method, const char * location) Line 2648 C++
jvm.dll!testing::UnitTest::Run() Line 5484 C++
jvm.dll!RUN_ALL_TESTS() Line 2317 C++
jvm.dll!runUnitTestsInner(int argc, char * * argv) Line 290 C++
jvm.dll!runUnitTests(int argc, char * * argv) Line 371 C++
gtestLauncher.exe!main(int argc, char * * argv) Line 40 C++
[Inline Frame] gtestLauncher.exe!invoke_main() Line 78 C++
gtestLauncher.exe!__scrt_common_main_seh() Line 288 C++
kernel32.dll!00007ffdcbdce8d7() Unknown
ntdll.dll!00007ffdcc97c34c() Unknown
One advantage of the stack above is that it showed how the os::init code gets executed (which I was curious about when wondering whether the exception filter was being set up). Disabling the breakpoint just before skipping into the failure path and resuming execution now led to the JVM dying:
[==========] Running 1197 tests from 205 test suites.
[----------] Global test environment set-up.
[----------] 3 tests from AltHashingTest
[ RUN ] AltHashingTest.halfsiphash_test_ByteArray
[ OK ] AltHashingTest.halfsiphash_test_ByteArray (0 ms)
[ RUN ] AltHashingTest.halfsiphash_test_CharArray
[ OK ] AltHashingTest.halfsiphash_test_CharArray (0 ms)
[ RUN ] AltHashingTest.halfsiphash_test_FromReference
[ OK ] AltHashingTest.halfsiphash_test_FromReference (0 ms)
[----------] 3 tests from AltHashingTest (2 ms total)
[----------] 1 test from ThreadsListHandle
[ RUN ] ThreadsListHandle.sanity_vm
#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (d:\java\forks\openjdk\jdk\src\hotspot\share\utilities\bitMap.cpp:208), pid=39132, tid=109872
# assert(bit <= _size) failed: BitMap limit out of bounds: 0 > 64
#
# JRE version: ((uninitialized)) (slowdebug build )
# Java VM: OpenJDK 64-Bit Server VM (slowdebug 26-internal-adhoc.saint.jdk, mixed mode, sharing, tiered, compressed class ptrs, unknown gc, windows-amd64)
# Core dump will be written. Default location: D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\images\test\hotspot\gtest\server\hs_err_pid39132.mdmp
#
# An error report file with more information is saved as:
# D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\images\test\hotspot\gtest\server\hs_err_pid39132.log
#
#
D:\java\forks\openjdk\jdk\build\windows-x86_64-server-slowdebug\images\test\hotspot\gtest\server\gtestLauncher.exe (process 39132) exited with code 1 (0x1).
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .
c:\progra~1\mib055~1\2022\enterprise\vc\tools\msvc\14.44.35207\include\__msvc_ostream.hpp(781): error C2220: the following warning is treated as an error
c:\progra~1\mib055~1\2022\enterprise\vc\tools\msvc\14.44.35207\include\__msvc_ostream.hpp(781): warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc
c:\progra~1\mib055~1\2022\enterprise\vc\tools\msvc\14.44.35207\include\__msvc_ostream.hpp(781): note: the template instantiation context (the oldest one first) is
c:\repos\googletest\googletest\include\gtest/gtest-message.h(118): note: see reference to function template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::operator <<<std::char_traits<char>>(std::basic_ostream<char,std::char_traits<char>> &,const char *)' being compiled