Building libffi for Windows ARM64 with Visual C++

The previous post covered Building libffi for Windows x64 with Visual C++. In this post, I detail the steps needed to build for the ARM64 platform (my overall objective was building the zero variant of the HotSpot JVM for Windows ARM64). I used the same Windows x64 machine for this build. As in the previous post, Visual C++ and MSYS2 are prerequisites. Get the sources from GitHub:

cd /c/repos
git clone https://github.com/libffi/libffi.git
cd libffi
git checkout v3.4.8

MSYS2 Prerequisites

Launch MSYS2 and install automake and libtool using these commands:

pacman -S automake
pacman -S libtool

The Visual C++ compiler needs to be available in the path as well. Run cl without any parameters to ensure the compiler is available. If it is, verify that it is the ARM64 compiler so that we actually cross-compile! It most likely won't be by default. If it isn't, add it to the path as follows:

export PATH="$PATH:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/bin/Hostx64/arm64/"
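Since an x64 cl.exe on the PATH would silently produce x64 objects, it can be worth checking the compiler banner before going any further. This is a hedged sketch; is_arm64_banner is my own helper, not part of any toolchain:

```shell
# Helper (my own name): report whether a cl.exe banner line indicates an
# ARM64-targeting compiler. cl prints its banner with no arguments, so a
# real check would be:  cl 2>&1 | is_arm64_banner
is_arm64_banner() {
  if grep -q "for ARM64"; then
    echo "ARM64 cross compiler"
  else
    echo "WRONG target; fix PATH"
  fi
}

echo "Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35207.1 for ARM64" \
  | is_arm64_banner   # prints: ARM64 cross compiler
```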

Generating the configure file

With the MSYS prerequisites installed, run the autogen.sh script:

user@machine /d/repos/libffi
$ ./autogen.sh

This creates a configure script in the root of the repository. Run it using bash. This command is the main difference between the ARM64 and x86_64 builds. Notice that I needed to specify various include and library paths for the ARM64 compiler and linker that were not required in the x86_64 case.

export INCLUDE_PATH_ucrt=/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt
export INCLUDE_PATH_um=/c/progra~2/wi3cf2~1/10/include/100226~1.0/um
export INCLUDE_PATH_shared=/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared
export INCLUDE_PATH_MSVC=/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include
time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I $INCLUDE_PATH_MSVC -I $INCLUDE_PATH_ucrt -I $INCLUDE_PATH_um -I $INCLUDE_PATH_shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -I $INCLUDE_PATH_MSVC -I $INCLUDE_PATH_ucrt -I $INCLUDE_PATH_um -I $INCLUDE_PATH_shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   LD=link \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   CPP="cl -nologo -EP -I $INCLUDE_PATH_MSVC -I $INCLUDE_PATH_ucrt -I $INCLUDE_PATH_shared" \
   CXXCPP="cl -nologo -EP -I $INCLUDE_PATH_MSVC -I $INCLUDE_PATH_ucrt -I $INCLUDE_PATH_shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32
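A typo in any of those 8.3-style paths otherwise surfaces deep into the configure run, so a quick sanity check that each one resolves to a real directory can save a round trip. This is a sketch; check_dirs is my own helper:

```shell
# Fail fast if any of the include/lib directories passed to configure
# does not exist.
check_dirs() {
  rc=0
  for d in "$@"; do
    [ -d "$d" ] || { echo "missing: $d" >&2; rc=1; }
  done
  return $rc
}

# Example with the variables exported above:
# check_dirs "$INCLUDE_PATH_ucrt" "$INCLUDE_PATH_um" \
#   "$INCLUDE_PATH_shared" "$INCLUDE_PATH_MSVC" || exit 1
```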

Building the Source Code

Run make in the root of the repo. The generated LIB and DLL files should be in the aarch64-w64-mingw32/.libs/ subdirectory of the repo root, and the ffi.h and ffitarget.h include files in the aarch64-w64-mingw32/include/ subdirectory. These four files are typically what other projects with a libffi dependency (like OpenJDK) will require.

$ ls -1 aarch64-w64-mingw32/.libs/
libffi.la
libffi.lai
libffi_convenience.la
libffi_convenience.lib
libffi-8.dll*
libffi-8.exp
libffi-8.lib

$ ls -1 aarch64-w64-mingw32/include/
ffi.h
ffitarget.h
Makefile
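Those four artifacts can be staged for a consuming project with a small helper. This is a sketch; stage_artifacts and the lib/include destination layout are my own convention, not anything libffi prescribes:

```shell
# Copy the four files a consuming project (such as OpenJDK) typically
# needs from a libffi build tree into a lib/ + include/ layout.
stage_artifacts() {
  src=$1; dest=$2
  mkdir -p "$dest/lib" "$dest/include" &&
  cp "$src/.libs/libffi-8.lib" "$src/.libs/libffi-8.dll" "$dest/lib/" &&
  cp "$src/include/ffi.h" "$src/include/ffitarget.h" "$dest/include/"
}

# e.g. stage_artifacts aarch64-w64-mingw32 /c/temp/libffi-staging
```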

Background Investigation Details

Investigating Configure Errors

My initial attempt at building libffi for Windows ARM64 started down the wrong path, based on this quote from libffi/libffi at v3.4.8:

To build static library for ARM64 with MSVC using visual studio solution, msvc_build folder have aarch64/Ffi_staticLib.sln required header files in aarch64/aarch64_include/

I thought this meant a much faster libffi build for me, since I wouldn't need all this bash configure machinery. Opening the solution informed me that I needed to upgrade the toolset; this was the resulting change:

diff --git a/msvc_build/aarch64/Ffi_staticLib.vcxproj b/msvc_build/aarch64/Ffi_staticLib.vcxproj
index 3187699..8e0353f 100644
--- a/msvc_build/aarch64/Ffi_staticLib.vcxproj
+++ b/msvc_build/aarch64/Ffi_staticLib.vcxproj
@@ -15,20 +15,20 @@
     <ProjectGuid>{115502C0-BE05-4767-BF19-5C87D805FAD6}</ProjectGuid>
     <Keyword>Win32Proj</Keyword>
     <RootNamespace>FfistaticLib</RootNamespace>
-    <WindowsTargetPlatformVersion>10.0.17763.0</WindowsTargetPlatformVersion>
+    <WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
     <ProjectName>Ffi_staticLib_arm64</ProjectName>
   </PropertyGroup>
   <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
   <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|ARM64'" Label="Configuration">
     <ConfigurationType>StaticLibrary</ConfigurationType>
     <UseDebugLibraries>true</UseDebugLibraries>
-    <PlatformToolset>v141</PlatformToolset>
+    <PlatformToolset>v143</PlatformToolset>
     <CharacterSet>Unicode</CharacterSet>
   </PropertyGroup>
   <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|ARM64'" Label="Configuration">
     <ConfigurationType>StaticLibrary</ConfigurationType>
     <UseDebugLibraries>false</UseDebugLibraries>
-    <PlatformToolset>v141</PlatformToolset>
+    <PlatformToolset>v143</PlatformToolset>
     <WholeProgramOptimization>true</WholeProgramOptimization>
     <CharacterSet>Unicode</CharacterSet>
   </PropertyGroup>

I then changed the architecture (in the Configuration Manager dropdown on the standard VS toolbar) from x64 to ARM64. This produced a bunch of compiler errors!

1>D:\repos\libffi\src\closures.c(1015,30): error C2039: 'ftramp': is not a member of 'ffi_closure'
1>    D:\repos\libffi\msvc_build\aarch64\aarch64_include\ffi.h(306,16):
1>    see declaration of 'ffi_closure'
...
1>D:\repos\libffi\src\prep_cif.c(248,16): error C2065: 'FFI_BAD_ARGTYPE': undeclared identifier

How could a required field be missing?! I tried replacing ffi.h with the one from the x64 build, but that was clearly wrong because it had architecture-specific code like this:

/* Specify which architecture libffi is configured for. */
#ifndef X86_WIN64
#define X86_WIN64
#endif

I then checked out the commit that added support for Windows AArch64.

git checkout d856743e6b02fcb5911491204131e277a7a4e10b

This allowed VS to build that solution! I set up the repo for OpenJDK by copying the .lib and .h files.

mkdir lib
cp msvc_build/aarch64/ARM64/Debug/Ffi_staticLib_arm64.lib lib/libffi.lib
cp msvc_build/aarch64/aarch64_include/ffi.h include/
cp src/aarch64/ffitarget.h include/

I then tried to configure OpenJDK using this command but the configure script failed!

date; time bash configure --with-jvm-variants=zero --with-libffi=/cygdrive/c/repos/libffi --openjdk-target=aarch64-unknown-cygwin --with-debug-level=slowdebug --with-jtreg=/cygdrive/c/java/binaries/jtreg/jtreg-7.5.1+1 --with-gtest=/cygdrive/c/repos/googletest --with-extra-ldflags=-profile --with-boot-jdk=/cygdrive/c/java/binaries/jdk/x64/jdk-24+36; time /cygdrive/c/repos/scratchpad/scripts/java/cygwin/build-jdk.sh windows aarch64 slowdebug

At this point, I had the Build Tools installed, with the C++ compiler at C:\progra~2\micros~3\2022\buildt~1\vc\tools\msvc\1443~1.348\bin\hostx64\arm64\cl.exe. I opened the VS Installer and installed the ARM64 compiler tools, which was necessary because this script was not present on my machine:

"C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsamd64_arm64.bat"

Running vcvarsamd64_arm64.bat initialized the environment for ‘x64_arm64’ (cross-compilation targeting ARM64). I then ran dumpbin to see which symbols were in the .lib file VS generated.

cd /d C:\repos\libffi
dumpbin /all /out:ffi-arm64.txt libffi.lib

cd /d D:\repos\libffi
dumpbin /all /out:ffi-x64.txt libffi.lib

The symbols were very different, which was my sign that I just needed to try building for ARM64 in MSYS2. I also upgraded VS because some of the paths used 14.44 while others used 14.43. I started MSYS2 and added the ARM64 compiler to the PATH. I tried long-name paths first, but only the 8.3 filename format worked.

export PATH="/c/Program\ Files/Microsoft\ Visual\ Studio/2022/Enterprise/VC/Tools/MSVC/14.44.35207/bin/Hostx64/arm64/:$PATH"

export PATH="$PATH:/c/Program\ Files/Microsoft\ Visual\ Studio/2022/Enterprise/VC/Tools/MSVC/14.44.35207/bin/Hostx64/arm64/"

# Only this one works.
$ export PATH="$PATH:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/bin/Hostx64/arm64/"

$ where cl.exe
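The escaped-space variants fail for a simple reason: inside double quotes, a backslash before a space is kept literally, so those PATH entries name directories that contain a real backslash and never match anything on disk. A minimal demonstration:

```shell
# Inside double quotes, "\ " keeps the backslash literally, so this PATH
# component names a directory with an actual backslash in it, which never
# matches the on-disk "Program Files" directory.
p="/c/Program\ Files/Microsoft Visual Studio"
printf '%s\n' "$p"   # prints: /c/Program\ Files/Microsoft Visual Studio
case "$p" in
  *\\*) echo "contains a literal backslash" ;;
esac
```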

I then switched the repo back to v3.4.8 and ran autogen.sh. This time I specified the --target option to request an AArch64 build. See Specifying Target Triplets (Autoconf) for an overview of target triplets.

git co v3.4.8
ls -l configure
mkdir -p /c/temp/libffi

./autogen.sh

bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --target=aarch64-w64-mingw32

The above configure command failed with this error:

$ bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --target=aarch64-w64-mingw32
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... aarch64-w64-mingw32
continue configure in default builddir "./aarch64-w64-mingw32"
....exec /bin/sh ../configure "--srcdir=.." "--enable-builddir=aarch64-w64-mingw32" "mingw32"
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... aarch64-w64-mingw32
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether sleep supports fractional seconds... yes
checking filesystem timestamp resolution... 0.01
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking xargs -n works... yes
checking for gcc... /c/repos/libffi/msvcc.sh -marm64
checking whether the C compiler works... no
configure: error: in '/c/repos/libffi/aarch64-w64-mingw32':
configure: error: C compiler cannot create executables
See 'config.log' for more details

Snippet from aarch64-w64-mingw32/config.log:

configure:4726: checking for C compiler version
configure:4735: /c/repos/libffi/msvcc.sh -marm64 --version >&5
/Wall enable all warnings               /w   disable all warnings
/W<n> set warning level (default n=1)   
/Wv:xx[.yy[.zzzzz]] disable warnings introduced after version xx.yy.zzzzz
/WX treat warnings as errors            /WL enable one line diagnostics
/wd<n> disable warning n                /we<n> treat warning n as an error
/wo<n> issue warning n once             /w<l><n> set warning level 1-4 for n
/external:W<n>          - warning level for external headers
/external:templates[-]  - evaluate warning level across template instantiation chain
/sdl enable additional security features and warnings
/options:strict unrecognized compiler options are an error
Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35207.1 for ARM64
Copyright (C) Microsoft Corporation.  All rights reserved.

configure:4746: $? = 0
configure:4735: /c/repos/libffi/msvcc.sh -marm64 -v >&5
cl : Command line warning D9002 : ignoring unknown option '-v'
cl : Command line error D8003 : missing source filename
configure:4746: $? = 0
configure:4735: /c/repos/libffi/msvcc.sh -marm64 -V >&5
cl : Command line error D8004 : '/V' requires an argument
configure:4746: $? = 0
configure:4735: /c/repos/libffi/msvcc.sh -marm64 -qversion >&5
cl : Command line warning D9002 : ignoring unknown option '-qversion'
cl : Command line error D8003 : missing source filename
configure:4746: $? = 0
configure:4735: /c/repos/libffi/msvcc.sh -marm64 -version >&5
cl : Command line warning D9002 : ignoring unknown option '-version'
cl : Command line error D8003 : missing source filename
configure:4746: $? = 0
configure:4766: checking whether the C compiler works
configure:4788: /c/repos/libffi/msvcc.sh -marm64  -DFFI_BUILDING_DLL  conftest.c  >&5
LINK : fatal error LNK1104: cannot open file 'MSVCRT.lib'
configure:4792: $? = 0
configure:4833: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "libffi"
| #define PACKAGE_TARNAME "libffi"
| #define PACKAGE_VERSION "3.3-rc0"
| #define PACKAGE_STRING "libffi 3.3-rc0"
| #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
| #define PACKAGE_URL ""
| #define PACKAGE "libffi"
| #define VERSION "3.3-rc0"
| /* end confdefs.h.  */
| 
| int
| main (void)
| {
| 
|   ;
|   return 0;
| }
configure:4838: error: in '/c/repos/libffi/aarch64-w64-mingw32':
configure:4840: error: C compiler cannot create executables
See 'config.log' for more details

I noticed there is a --verbose option in the msvcc.sh script. Comparing the two architectures revealed that the compiler was being invoked the same way.

$ /c/repos/libffi/msvcc.sh -m64 --verbose
cl -MD -nologo -W3
cl : Command line error D8003 : missing source filename

$ /c/repos/libffi/msvcc.sh -marm64 --verbose
cl -MD -nologo -W3
cl : Command line error D8003 : missing source filename

I asked Copilot which autoconf macro outputs “checking whether the C compiler works” and it said that is the AC_PROG_CC macro. That string showed up in three spots in the codebase, but they weren't what I was looking for. The string “checking for C compiler version” was in the generated configure script, though.

# Provide some information about the compiler.
printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5
set X $ac_compile
ac_compiler=$2
for ac_option in --version -v -V -qversion -version; do
  { { ac_try="$ac_compiler $ac_option >&5"
case "(($ac_try" in
  *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
  *) ac_try_echo=$ac_try;;
esac

That loop explained where the odd arguments in the config.log snippet were coming from. The question was now how this differed from the x64 case, where it just worked. Diffing the logs showed that I was actually still on 3.3-rc0 (note the PACKAGE_VERSION in the failed program above), so I needed to rerun autogen.sh on v3.4.8. I didn't think I needed the --target option since the correct compiler was selected (as far as I could tell from the --verbose output above).

bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi

The configure files were identical in both scenarios. However, there was a key difference in the config logs! Here are the corresponding snippets: first from the working x64 build's config.log, then from the failing ARM64 one. Notice that the version detection errors were present in the working case too, so those were a red herring!

configure:4679: /d/repos/libffi/msvcc.sh -m64 -version >&5
cl : Command line warning D9002 : ignoring unknown option '-version'
cl : Command line error D8003 : missing source filename
configure:4690: $? = 0
configure:4710: checking whether the C compiler works
configure:4732: /d/repos/libffi/msvcc.sh -m64  -DFFI_BUILDING_DLL  conftest.c  >&5
configure:4736: $? = 0
configure:4787: result: yes
configure:4679: /c/repos/libffi/msvcc.sh -marm64 -version >&5
cl : Command line warning D9002 : ignoring unknown option '-version'
cl : Command line error D8003 : missing source filename
configure:4690: $? = 0
configure:4710: checking whether the C compiler works
configure:4732: /c/repos/libffi/msvcc.sh -marm64  -DFFI_BUILDING_DLL  conftest.c  >&5
LINK : fatal error LNK1104: cannot open file 'MSVCRT.lib'
configure:4736: $? = 0
configure:4777: result: no

The linker error was what I really needed to address. I created this conftest.c file to reproduce the issue from the command line:

int main (void)
{
  return 0;
}

$ cl -MD -W3 conftest.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35207.1 for ARM64
Copyright (C) Microsoft Corporation.  All rights reserved.

conftest.c
Microsoft (R) Incremental Linker Version 14.44.35207.1
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:conftest.exe
conftest.obj
LINK : fatal error LNK1104: cannot open file 'MSVCRT.lib'

How does OpenJDK get around this? Interestingly, this was when I noticed that the OpenJDK log also had all the version checking errors (-v, -V, --version, etc.). This is the snippet from OpenJDK's config.log (notice the -libpath arguments):

configure:105502: checking whether the C compiler works
configure:105524: /cygdrive/d/java/forks/TheShermanTanker/jdk/build/windows-aarch64-zero-slowdebug/fixpath exec /cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/bin/hostx64/arm64/cl.exe   -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/include -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/atlmfc/include -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/auxili~1/vs/include -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/winrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/cppwinrt -I/cygdrive/c/progra~2/wi3cf2~1/netfxsdk/4.8/include/um   -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/include -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/atlmfc/include -I/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/auxili~1/vs/include -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/winrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/cppwinrt -I/cygdrive/c/progra~2/wi3cf2~1/netfxsdk/4.8/include/um  conftest.c  -link   -libpath:/cygdrive/c/progra~1/mib055~1/2022/enterp~1/vc/tools/msvc/1443~1.348/lib/arm64 -libpath:/cygdrive/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -libpath:/cygdrive/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 -profile >&5
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34810 for ARM64
Copyright (C) Microsoft Corporation.  All rights reserved.

conftest.c
Microsoft (R) Incremental Linker Version 14.43.34810.0
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:conftest.exe 
-libpath:c:\progra~1\mib055~1\2022\enterp~1\vc\tools\msvc\1443~1.348\lib\arm64 
-libpath:c:\progra~2\wi3cf2~1\10\lib\100226~1.0\ucrt\arm64 
-libpath:c:\progra~2\wi3cf2~1\10\lib\100226~1.0\um\arm64 
-profile 
conftest.obj 
configure:105528: $? = 0
configure:105579: result: yes

Searching that codebase for libpath led to the location where the -libpath arguments are built: jdk/make/autoconf/toolchain_microsoft.m4. I needed to do the same thing and set LDFLAGS.
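The m4 logic boils down to joining a list of library directories into -libpath: arguments for the MSVC linker. A hedged shell sketch of the same idea (make_libpaths is my own name, not from OpenJDK):

```shell
# Turn a list of library directories into MSVC linker -libpath: arguments,
# the same shape OpenJDK's toolchain_microsoft.m4 produces.
make_libpaths() {
  out=""
  for d in "$@"; do
    out="$out -libpath:$d"
  done
  printf '%s\n' "${out# }"
}

make_libpaths /c/vc/lib/arm64 /c/sdk/ucrt/arm64
# prints: -libpath:/c/vc/lib/arm64 -libpath:/c/sdk/ucrt/arm64
```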

$ cl -MD -W3 conftest.c -link -libpath:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64
Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35207.1 for ARM64
Copyright (C) Microsoft Corporation.  All rights reserved.

conftest.c
Microsoft (R) Incremental Linker Version 14.44.35207.1
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:conftest.exe
-libpath:C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64
-libpath:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64
-libpath:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64
conftest.obj

That succeeded, so I tried setting LDFLAGS for libffi.

bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LDFLAGS="-link -libpath:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi

The error was now about a missing kernel32.lib:

configure:4710: checking whether the C compiler works
configure:4732: /c/repos/libffi/msvcc.sh -marm64  -DFFI_BUILDING_DLL -link -libpath:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 conftest.c  >&5
LINK : fatal error LNK1104: cannot open file 'kernel32.lib'
configure:4736: $? = 0
configure:4777: result: no

I verified that kernel32.lib exists in C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\arm64\, which is the path /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64. A working answer to “How can I convert a Windows short name path into long names within a batch script” on Stack Overflow would have been nice here, but oh well.

$ /c/repos/libffi/msvcc.sh -marm64 --verbose -DFFI_BUILDING_DLL -link -libpath:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -libpath:/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 conftest.c
cl -MD -nologo -W3 -DFFI_BUILDING_DLL C:/repos/libffi/conftest.c -link  -libpath:/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64
LINK : fatal error LNK1104: cannot open file 'kernel32.lib'

It looked like the other -libpath arguments were being dropped by the script. Further inspection revealed that msvcc.sh has its own -L option for library paths. I tried the -link option but something wasn't working, so I moved on to -L. These are the library paths I needed:

bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi

With the above command, the next issue was around cross compiling:

configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
continue configure in default builddir "./x86_64-w64-mingw32"
....exec /bin/sh ../configure "--srcdir=.." "--enable-builddir=x86_64-w64-mingw32" "mingw32"
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether sleep supports fractional seconds... yes
checking filesystem timestamp resolution... 0.01
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking xargs -n works... yes
checking for gcc... /c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64
checking whether the C compiler works... yes
checking for C compiler default output file name... conftest.exe
checking for suffix of executables... .exe
checking whether we are cross compiling... configure: error: in '/c/repos/libffi/x86_64-w64-mingw32':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use '--host'.
See 'config.log' for more details

At least this error message told me exactly what I needed to do.

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

The next error after that change was in the “checking how to run the C++ preprocessor” step, specifically error: C++ preprocessor "cl -nologo -EP" fails sanity check.

configure:14431: checking how to run the C++ preprocessor
configure:14498: result: cl -nologo -EP
configure:14512: cl -nologo -EP -DFFI_BUILDING_DLL conftest.cpp
conftest.cpp
conftest.cpp(12): fatal error C1034: limits.h: no include path set
configure:14512: $? = 2
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "libffi"
| #define PACKAGE_TARNAME "libffi"
| #define PACKAGE_VERSION "3.4.8"
| #define PACKAGE_STRING "libffi 3.4.8"
| #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
| #define PACKAGE_URL ""
| #define PACKAGE "libffi"
| #define VERSION "3.4.8"
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h.  */
| #include <limits.h>
| 		     Syntax error
configure:14512: cl -nologo -EP -DFFI_BUILDING_DLL conftest.cpp
conftest.cpp
conftest.cpp(12): fatal error C1034: limits.h: no include path set
configure:14512: $? = 2
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "libffi"
| #define PACKAGE_TARNAME "libffi"
| #define PACKAGE_VERSION "3.4.8"
| #define PACKAGE_STRING "libffi 3.4.8"
| #define PACKAGE_BUGREPORT "http://github.com/libffi/libffi/issues"
| #define PACKAGE_URL ""
| #define PACKAGE "libffi"
| #define VERSION "3.4.8"
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h.  */
| #include <limits.h>
| 		     Syntax error
configure:14547: error: in '/c/repos/libffi/aarch64-w64-mingw32':
configure:14549: error: C++ preprocessor "cl -nologo -EP" fails sanity check
See 'config.log' for more details

My fix was to provide the MSVC include path:

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

I used this new conftest.c to ensure that the compiler would succeed.

#include <limits.h>

int main (void)
{
  return 0;
}

/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 conftest.c

The include path was respected with this manually executed command so I ran msvcc.sh in verbose mode to be sure it was picking up all my arguments:

time bash configure \
   CC="/c/repos/libffi/msvcc.sh --verbose -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

The above command failed, but I noticed that the failing check was for the C++ preprocessor. I needed the same arguments on the CXX environment variable.

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

I saved this test program as conftest.cpp this time (notice the extension).

#include <limits.h>

int main (void)
{
  return 0;
}

The test below showed that providing the include path lets cl.exe complete successfully.

$ cl -nologo -EP -DFFI_BUILDING_DLL conftest.cpp
conftest.cpp

conftest.cpp(1): fatal error C1034: limits.h: no include path set

$ cl -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -nologo -EP -DFFI_BUILDING_DLL conftest.cpp
conftest.cpp









#pragma once

...

The issue must have been in the CPP command, so I updated it (and CXXCPP to match):

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

The configure script now completed! I had a feeling I would need to keep adding paths like this during the build process.

...
checking size of long double... 0
checking whether byte ordering is bigendian... no
checking assembler .cfi pseudo-op support... no
checking whether compiler supports pointer authentication... no
checking for _ prefix in compiled symbols... no
configure: versioning on shared library symbols is no
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating include/Makefile
config.status: creating include/ffi.h
config.status: creating Makefile
config.status: creating testsuite/Makefile
config.status: creating man/Makefile
config.status: creating doc/Makefile
config.status: creating libffi.pc
config.status: creating fficonfig.h
config.status: executing buildir commands
config.status: create top_srcdir/Makefile guessed from local Makefile
config.status: build in aarch64-w64-mingw32 (HOST=)
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing include commands
config.status: executing src commands

real    1m29.429s
user    0m32.473s
sys     0m35.396s

Investigating Build Errors

Just as I suspected, there were build errors when I ran make: specifically, eight of these C1083 errors:

libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -O2 -c ../src/prep_cif.c  -DDLL_EXPORT -DPIC -o src/.libs/prep_cif.obj
C:/repos/libffi/include\ffi.h(66): fatal error C1083: Cannot open include file: 'stddef.h': No such file or directory

That file lives in C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt. The OpenJDK build includes these 5 paths (among many others), but I didn’t think I’d need the WinRT-related paths, so I added the other 3 to the configure command and ran make again.

-I/cygdrive/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/winrt -I/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/cppwinrt
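
The progra~2 and wi3cf2~1 segments in these paths are Windows 8.3 short names (cmd.exe shows them with dir /x; cygpath -d converts to them). Here is a simplified sketch of how a short name is formed; note that it deliberately ignores collision numbering (which is why “Program Files (x86)” is really PROGRA~2, not PROGRA~1) and the hash-based form NTFS sometimes uses, which is how “Windows Kits” becomes wi3cf2~1 (“Wi” + a 4-character hash + “~1”).

```shell
# Simplified 8.3 short-name generation (assumption: ignores collision
# numbering and NTFS's hash-based variant, so real short names can differ)
short_name() {
  local n="$1"
  n="${n// /}"          # spaces are dropped
  n="${n//[()]/}"       # so are characters like parentheses
  n="${n^^}"            # short names are uppercase
  if [ "${#n}" -gt 8 ]; then
    printf '%s~1\n' "${n:0:6}"
  else
    printf '%s\n' "$n"
  fi
}

short_name "Program Files"   # PROGRA~1
short_name "Windows Kits"    # WINDOW~1 (NTFS actually assigns the hashed wi3cf2~1)
```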

The more critical error was this one:

libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -I. -I../include -Iinclude -I../src -c ../src/aarch64/win64_armasm.S  -DDLL_EXPORT -DPIC -o src/aarch64/.libs/win64_armasm.obj
win64_armasm.S
C:/repos/libffi/src/aarch64/win64_armasm.S(27): fatal error C1083: Cannot open include file: 'ksarm64.h': No such file or directory

Where did that file go? Given that I had also been unable to build the VS solution in my first attempt, I searched for the commit that could have deleted this ksarm64.h file. I used the suggestion at git – How to find a deleted file in the project commit history? – Stack Overflow:

git log --diff-filter=D --summary | grep delete
git log --diff-filter=D --summary | grep delete | grep ks
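
For illustration, here is what that search looks like against a throwaway repository where a file actually was deleted (file and commit names are hypothetical):

```shell
# Throwaway-repo demo of finding deleted files with git log --diff-filter=D
tmp="$(mktemp -d)" && cd "$tmp"
git init -q .
touch ksarm64.h
git add ksarm64.h
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add header"
git rm -q ksarm64.h
git -c user.name=demo -c user.email=demo@example.com commit -q -m "remove header"

git log --diff-filter=D --summary | grep delete
# delete mode 100644 ksarm64.h
```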

This search for commits did not yield anything, but a web search for ksarm64.h led me to the [Arm64/Windows] Missing ksarm64.h ? · Issue #7409 · dotnet/runtime GitHub issue, which said that ksarm64.h is part of the Windows SDK. ksarm64.h isn’t include in Windows SDK – Developer Community was the pointer to where it lives: /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared. I had excluded this path because I wanted a minimal set of include paths. This was the next command I tried. I should have exported these paths to environment variables as I did at the top, but I just kept moving forward.

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link \
   CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

I was still seeing errors on a new clone of the repo (under the dups subdirectory):

Making all in man
make[3]: Entering directory '/d/repos/dups/libffi/aarch64-w64-mingw32/man'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/d/repos/dups/libffi/aarch64-w64-mingw32/man'
make[3]: Entering directory '/d/repos/dups/libffi/aarch64-w64-mingw32'
source='../src/prep_cif.c' object='src/prep_cif.lo' libtool=yes \
DEPDIR=.deps depmode=none /bin/sh ../depcomp \
/bin/sh ./libtool  --tag=CC   --mode=compile /c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 -DHAVE_CONFIG_H -I. -I..  -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL  -O2 -c -o src/prep_cif.lo ../src/prep_cif.c
libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -O2 -c ../src/prep_cif.c  -DDLL_EXPORT -DPIC -o src/.libs/prep_cif.obj
D:/repos/dups/libffi/aarch64-w64-mingw32/include\ffi.h(105): fatal error C1083: Cannot open include file: 'stddef.h': No such file or directory

I could reproduce these errors as follows:

mkdir src/.libs/
cd aarch64-w64-mingw32/

/c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -O2 -c ../src/raw_api.c  -DDLL_EXPORT -DPIC -o src/.libs/raw_api.obj

D:/repos/dups/libffi/aarch64-w64-mingw32/include\ffi.h(105): fatal error C1083: Cannot open include file: 'stddef.h': No such file or directory

Adding the --verbose flag to the last command above showed me the problem: the -I paths were broken!

cl -MD -nologo -W3 -I"C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I"C:/software/msys64/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I"C:/software/msys64/cygdrive/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -DHAVE_CONFIG_H -I"D:/repos/dups/libffi/aarch64-w64-mingw32" -I"D:/repos/dups/libffi" -I"D:/repos/dups/libffi/aarch64-w64-mingw32" -I"D:/repos/dups/libffi/include" -I"D:/repos/dups/libffi/aarch64-w64-mingw32/include" -I"D:/repos/dups/libffi/src" -DFFI_BUILDING_DLL -O2 -c D:/repos/dups/libffi/src/raw_api.c -DDLL_EXPORT -DPIC -Fosrc/.libs/raw_api.obj -Fdsrc/.libs/raw_api -Fpsrc/.libs/raw_api -Fasrc/.libs/raw_api -link  -LIBPATH:C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -LIBPATH:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -LIBPATH:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 -OPT:REF -OPT:ICF -INCREMENTAL:NO
D:/repos/dups/libffi/aarch64-w64-mingw32/include\ffi.h(105): fatal error C1083: Cannot open include file: 'stddef.h': No such file or directory

This was my solution to these path issues:

/c/repos/libffi/msvcc.sh --verbose -marm64 -I "C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "C:/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "C:/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -O2 -c ../src/raw_api.c  -DDLL_EXPORT -DPIC -o src/.libs/raw_api.obj

Now the cl.exe command looked like this:

cl -MD -nologo -W3 -I"C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I"C:/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I"C:/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -DHAVE_CONFIG_H -I"D:/repos/dups/libffi/aarch64-w64-mingw32" -I"D:/repos/dups/libffi" -I"D:/repos/dups/libffi/aarch64-w64-mingw32" -I"D:/repos/dups/libffi/include" -I"D:/repos/dups/libffi/aarch64-w64-mingw32/include" -I"D:/repos/dups/libffi/src" -DFFI_BUILDING_DLL -O2 -c D:/repos/dups/libffi/src/raw_api.c -DDLL_EXPORT -DPIC -Fosrc/.libs/raw_api.obj -Fdsrc/.libs/raw_api -Fpsrc/.libs/raw_api -Fasrc/.libs/raw_api -link  -LIBPATH:C:/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -LIBPATH:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -LIBPATH:C:/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64 -OPT:REF -OPT:ICF -INCREMENTAL:NO
D:/repos/dups/libffi/src/raw_api.c(188): warning C4013: 'bcopy' undefined; assuming extern returning int
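
The root cause: MSYS2 mounts drives at /c rather than /cygdrive/c, so the /cygdrive-prefixed arguments were not recognized as drive paths and were resolved relative to the MSYS installation root instead (hence the C:/software/msys64/cygdrive/... prefixes in the broken cl command above). A rough sketch of the drive-path conversion, with a hypothetical to_mixed helper standing in for cygpath -ma:

```shell
# Hypothetical stand-in for `cygpath -ma`, handling only the /<drive>/... form
to_mixed() {
  local p="$1"
  if [[ "$p" =~ ^/([a-zA-Z])(/.*)?$ ]]; then
    # /c/foo -> C:/foo
    printf '%s:%s\n' "${BASH_REMATCH[1]^^}" "${BASH_REMATCH[2]}"
  else
    # anything else (e.g. /cygdrive/c/foo) is not a drive mount in MSYS2
    # and would be resolved under the MSYS root instead
    printf '(no drive mount) %s\n' "$p"
  fi
}

to_mixed /c/progra~2/wi3cf2~1/10/include   # C:/progra~2/wi3cf2~1/10/include
to_mixed /cygdrive/c/progra~2              # (no drive mount) /cygdrive/c/progra~2
```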

libffi/msvcc.sh at v3.4.8 · libffi/libffi uses cygpath -ma, which outputs mixed absolute paths (Windows form with forward slashes). Here is the corrected configure command (without the /cygdrive path prefixes):

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link \
   CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

The build now failed with this error:

$ /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL "-I. -I../include -Iinclude -I../src" -c ../src/aarch64/win64_armasm.S -o src/aarch64/win64_armasm.obj >/dev/null 2>&1
/bin/sh ./libtool  --tag=CC   --mode=link /c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L /c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64  -O2   -o libffi_convenience.la  src/prep_cif.lo src/types.lo src/raw_api.lo src/java_raw_api.lo src/closures.lo src/tramp.lo   src/aarch64/ffi.lo src/aarch64/win64_armasm.lo
libtool:   error: require no space between '-L' and '/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64'
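
libtool insists on the single-token -L&lt;path&gt; form. Removing the space after each -L is mechanical; purely for illustration (with shortened hypothetical paths):

```shell
# Illustration only: collapse "-L <path>" into "-L<path>"
echo '-L /c/msvc/lib/arm64 -L /c/sdk/ucrt/arm64' | sed 's/-L /-L/g'
# -L/c/msvc/lib/arm64 -L/c/sdk/ucrt/arm64
```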

I tried the same command without the spaces:

/c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" -L "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" -L "/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL "-I. -I../include -Iinclude -I../src" -c ../src/aarch64/win64_armasm.S -o src/aarch64/win64_armasm.obj >/dev/null 2>&1
/bin/sh ./libtool  --tag=CC   --mode=link /c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64  -O2   -o libffi_convenience.la  src/prep_cif.lo src/types.lo src/raw_api.lo src/java_raw_api.lo src/closures.lo src/tramp.lo   src/aarch64/ffi.lo src/aarch64/win64_armasm.lo

This resolved the error about the spaces but then failed with:

Microsoft (R) Library Manager Version 14.44.35207.1
Copyright (C) Microsoft Corporation.  All rights reserved.

LINK : fatal error LNK1181: cannot open input file 'src\.libs\prep_cif.obj'

Here’s the next iteration of the configure script:

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link \
   CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

The build now progressed to this error:

libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" "-L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -DFFI_BUILDING_DLL -O2 -c ../src/closures.c  -DDLL_EXPORT -DPIC -o src/.libs/closures.obj
D:\repos\dups\libffi\src\dlmalloc.c(453): fatal error C1083: Cannot open include file: 'windows.h': No such file or directory

This is where the /c/progra~2/wi3cf2~1/10/include/100226~1.0/um include path was required.

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   LD=link \
   CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32

This led me to a new error from make:

...
libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/um" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" "-L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -O2 -c ../src/prep_cif.c  -DDLL_EXPORT -DPIC -o src/.libs/prep_cif.obj
D:/repos/dups/libffi/src/prep_cif.c(219): warning C4273: 'ffi_prep_cif': inconsistent dll linkage
D:/repos/dups/libffi/src/prep_cif.c(225): warning C4273: 'ffi_prep_cif_var': inconsistent dll linkage
D:/repos/dups/libffi/src/prep_cif.c(257): warning C4273: 'ffi_prep_closure': inconsistent dll linkage
D:/repos/dups/libffi/src/prep_cif.c(268): warning C4273: 'ffi_get_struct_offsets': inconsistent dll linkage
...
libtool: compile:  /c/repos/libffi/msvcc.sh -marm64 -I "/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/um" -I "/c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" "-L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64" "-L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -O2 -c ../src/types.c  -DDLL_EXPORT -DPIC -o src/.libs/types.obj
D:/repos/dups/libffi/src/types.c(77): error C2491: 'ffi_type_void': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(81): error C2491: 'ffi_type_uint8': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(82): error C2491: 'ffi_type_sint8': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(83): error C2491: 'ffi_type_uint16': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(84): error C2491: 'ffi_type_sint16': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(85): error C2491: 'ffi_type_uint32': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(86): error C2491: 'ffi_type_sint32': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(87): error C2491: 'ffi_type_uint64': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(88): error C2491: 'ffi_type_sint64': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(90): error C2491: 'ffi_type_pointer': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(92): error C2491: 'ffi_type_float': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(93): error C2491: 'ffi_type_double': definition of dllimport data not allowed
D:/repos/dups/libffi/src/types.c(111): error C2491: 'ffi_type_longdouble': definition of dllimport data not allowed

This seemed pretty odd, considering these errors didn’t show up for x64, and I didn’t see any DLL-related defines in the compiler command. Upon further inspection, I realized that I had dropped the CPPFLAGS variable (with its -DFFI_BUILDING_DLL define) somewhere along the way! Restoring it finally got the job done: no make errors at all, phew!
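
The C2491 errors make sense in hindsight: without -DFFI_BUILDING_DLL, ffi.h decorates the ffi_type_* symbols as dllimport, and MSVC refuses to compile a definition of dllimport data. A shell sketch of the decoration logic (a simplified model, assumed from the error behavior rather than quoted from ffi.h):

```shell
# Simplified model (assumption) of how the header decorates exported data:
# building the DLL needs dllexport; consumers get dllimport by default.
ffi_api() {
  if [ "${FFI_BUILDING_DLL:-0}" = "1" ]; then
    echo '__declspec(dllexport)'
  else
    echo '__declspec(dllimport)'
  fi
}

ffi_api                        # dllimport: defining the data fails with C2491
FFI_BUILDING_DLL=1 ffi_api     # dllexport: definitions compile
```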

time bash configure \
   CC="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   CXX="/c/repos/libffi/msvcc.sh -marm64 -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/um -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared -L/c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/lib/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/ucrt/arm64 -L/c/progra~2/wi3cf2~1/10/lib/100226~1.0/um/arm64" \
   LD=link \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   CPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   CXXCPP="cl -nologo -EP -I /c/Progra~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/include -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/ucrt -I /c/progra~2/wi3cf2~1/10/include/100226~1.0/shared" \
   --disable-docs \
   --prefix=/c/temp/libffi \
   --host=aarch64-w64-mingw32


Building libffi for Windows x64 with Visual C++

I needed to build the zero variant of the HotSpot JVM for the Windows platform recently. libffi is one of the prerequisites for the zero variant. It provides “a portable, high level programming interface to various calling conventions.” I decided to build libffi/libffi at v3.4.8 since it looks like the latest version. I used a Windows x64 machine for this entire process. Visual C++ and MSYS need to be installed to do this. Launch MSYS2 and get the sources from GitHub:

cd /c/repos
git clone https://github.com/libffi/libffi.git
cd libffi
git checkout v3.4.8

MSYS Prerequisites

Install automake and libtool using these commands:

pacman -S automake
pacman -S libtool

The Visual C++ compiler needs to be available in the path as well. Run cl without any parameters to ensure the compiler is available. It most likely won’t be by default. If it isn’t, add it to the path as follows:

export PATH="/c/PROGRA~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/:$PATH"

Note that the name of the Visual C++ linker is link.exe, which clashes with the built-in “link” command. Prepending the C++ compiler path means that the built-in link command will not be available; appending it means that the linker cannot be invoked without specifying its full path.
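
To see the trade-off concretely, here is a self-contained demo of PATH precedence using two stand-in link commands (the directories and scripts are hypothetical):

```shell
# Demo of the PATH precedence trade-off with two stand-in `link` commands
tmp="$(mktemp -d)"
mkdir "$tmp/msvc" "$tmp/coreutils"
printf '#!/bin/sh\necho MSVC-linker\n' > "$tmp/msvc/link"
printf '#!/bin/sh\necho builtin-link\n' > "$tmp/coreutils/link"
chmod +x "$tmp/msvc/link" "$tmp/coreutils/link"

env PATH="$tmp/msvc:$tmp/coreutils" link   # MSVC-linker (prepended path wins)
env PATH="$tmp/coreutils:$tmp/msvc" link   # builtin-link (appending hides the linker)
```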

Generating the configure file

With the MSYS prerequisites installed, run the autogen.sh script:

user@machine /d/repos/libffi
$ ./autogen.sh

This creates a configure script in the root of the repository. Run it using bash:

$ bash configure \
   CC="/d/repos/libffi/msvcc.sh -m64" \
   CXX="/d/repos/libffi/msvcc.sh -m64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/d/temp/libffi

Running configure takes about a minute and a half on my 24-core (32 logical processor) machine with 128GB RAM.

Building the Source Code

Simply run make in the root of the repo. The generated LIB and DLL files should be in the x86_64-w64-mingw32/.libs/ subdirectory of the repo root. There will also be ffi.h and ffitarget.h include files in the x86_64-w64-mingw32/include/ subdirectory of the repo root. These 4 files are typically what will be required by other projects with a libffi dependency (like OpenJDK).

$ ls -1 x86_64-w64-mingw32/.libs/
libffi.la
libffi.lai
libffi_convenience.la
libffi_convenience.lib
libffi-8.dll*
libffi-8.exp
libffi-8.lib

$ ls -1 x86_64-w64-mingw32/include/
ffi.h
ffitarget.h
Makefile

My Motivation for Building libffi

I was trying to configure an OpenJDK build (at commit c3de94cee12471) using this command line:

bash configure --with-jvm-variants=zero --with-debug-level=slowdebug --with-jtreg=/cygdrive/c/java/binaries/jtreg/jtreg-7.5.1+1 --with-gtest=/cygdrive/c/repos/googletest --with-extra-ldflags=-profile --with-boot-jdk=/cygdrive/c/java/binaries/jdk/x64/jdk-24+36
...
checking if hsdis should be bundled... no
checking for --enable-libffi-bundling... disabled, default
checking for LIBFFI... checking for ffi.h... no
configure: error: Could not find libffi!
configure exiting with result code 1

That’s when I found the --with-libffi option in jdk/doc/building.md and cloned the libffi repo.

bash configure --with-jvm-variants=zero --with-debug-level=slowdebug --with-jtreg=/cygdrive/c/java/binaries/jtreg/jtreg-7.5.1+1 --with-gtest=/cygdrive/c/repos/googletest --with-extra-ldflags=-profile --with-boot-jdk=/cygdrive/c/java/binaries/jdk/x64/jdk-24+36 --with-libffi=/cygdrive/d/repos/libffi

configure then failed with this error:

...
checking for --enable-libffi-bundling... disabled, default
checking if libffi works... no
configure: error: Found libffi but could not link and compile with it.
configure exiting with result code 1

This was my hint that I probably needed to build libffi first. libffi/README.md at v3.4.8 · libffi/libffi explains that the configure script can be generated by running autogen.sh. I first needed to fix the line endings. This Copilot prompt, “convert all existing files in a repo from windows to unix line endings”, got me the solution:

# Tells Git to convert CRLF to LF on commit
# but not the other way around on checkout.
git config core.autocrlf input

# resets the working directory and re-checks
# out the files using the current core.autocrlf setting
git reset --hard
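
A quick way to convince yourself that this setting behaves as described is a throwaway repository (all names here are hypothetical):

```shell
# Throwaway-repo check that core.autocrlf=input strips CRLF at commit time
tmp="$(mktemp -d)" && cd "$tmp"
git init -q .
git config core.autocrlf input
printf 'line one\r\nline two\r\n' > crlf.txt
git add crlf.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "commit with LF"

# the committed blob contains no carriage returns
if git show HEAD:crlf.txt | grep -q $'\r'; then echo "CRLF survived"; else echo "LF only"; fi
# LF only
```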

autogen.sh could now be executed. I hadn’t read the instructions all the way through to see which prerequisites were required; even so, which ones could I get away without?

user@machine /cygdrive/d/repos/libffi
$ ./autogen.sh
autoreconf-2.71: export WARNINGS=
autoreconf-2.71: Entering directory '.'
autoreconf-2.71: configure.ac: not using Gettext
autoreconf-2.71: running: aclocal -I m4
Can't exec "aclocal": No such file or directory at /usr/share/autoconf2.7/Autom4te/FileUtils.pm line 274.
autoreconf-2.71: error: aclocal failed with exit status: 1

A web search for Can’t exec “aclocal” led to macos – Error ‘Can’t exec “aclocal”‘ with homebrew installed autoreconf on mac – Stack Overflow, which suggests the solution (see Windows OpenJDK Development Environment Setup – Saint’s Log for the command line I used to set up my development environment). I installed the two prerequisites in Cygwin:

setup-x86_64.exe -q -P automake
setup-x86_64.exe -q -P libtool

For some reason, autogen.sh still didn’t work in Cygwin after this: absolutely nothing happened, no error messages and no configure file created, as though I had just pressed ENTER. At this point, I went back to the History for make/autoconf/lib-ffi.m4 – openjdk/jdk. The instructions in 8309880: Add support for linking libffi on Windows and Mac · openjdk/jdk@4c18b9e (using MSYS2) were promising: I launched a VS 2022 Developer Command Prompt, then ran

C:\software\msys64\ucrt64.exe

Running autogen.sh then reminded me to install the prerequisites in MSYS:

$ ./autogen.sh
autoreconf-2.72: export WARNINGS=
autoreconf-2.72: Entering directory '.'
autoreconf-2.72: configure.ac: not using Gettext
autoreconf-2.72: running: aclocal -I m4
Can't exec "aclocal": No such file or directory at /usr/share/autoconf-2.72/Autom4te/FileUtils.pm line 299.
autoreconf-2.72: error: aclocal failed with exit status: 1

$ pacman -S automake
resolving dependencies...
looking for conflicting packages...

Packages (8) automake1.11-1.11.6-6  automake1.12-1.12.6-6  automake1.13-1.13.4-7  automake1.14-1.14.1-6
             automake1.15-1.15.1-4  automake1.16-1.16.5-1  automake1.17-1.17-1  automake-wrapper-20240607-1

Total Download Size:    3.49 MiB
Total Installed Size:  10.25 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 automake1.17-1.17-1-any              535.0 KiB   444 KiB/s 00:01 [###################################] 100%
 automake1.14-1.14.1-6-any            503.1 KiB   391 KiB/s 00:01 [###################################] 100%
 automake1.15-1.15.1-4-any            513.4 KiB   390 KiB/s 00:01 [###################################] 100%
 automake1.12-1.12.6-6-any            503.1 KiB   363 KiB/s 00:01 [###################################] 100%
 automake1.11-1.11.6-6-any            490.2 KiB  1656 KiB/s 00:00 [###################################] 100%
 automake1.13-1.13.4-7-any            501.5 KiB   881 KiB/s 00:01 [###################################] 100%
 automake-wrapper-20240607-1-any        4.3 KiB  8.89 KiB/s 00:00 [###################################] 100%
 automake1.16-1.16.5-1-any            526.3 KiB   223 KiB/s 00:02 [###################################] 100%
 Total (8/8)                            3.5 MiB  1360 KiB/s 00:03 [###################################] 100%
(8/8) checking keys in keyring                                    [###################################] 100%
(8/8) checking package integrity                                  [###################################] 100%
(8/8) loading package files                                       [###################################] 100%
(8/8) checking for file conflicts                                 [###################################] 100%
(8/8) checking available disk space                               [###################################] 100%
:: Processing package changes...
(1/8) installing automake1.11                                     [###################################] 100%
(2/8) installing automake1.12                                     [###################################] 100%
(3/8) installing automake1.13                                     [###################################] 100%
(4/8) installing automake1.14                                     [###################################] 100%
(5/8) installing automake1.15                                     [###################################] 100%
(6/8) installing automake1.16                                     [###################################] 100%
(7/8) installing automake1.17                                     [###################################] 100%
(8/8) installing automake-wrapper                                 [###################################] 100%
:: Running post-transaction hooks...
(1/1) Updating the info directory file...

$ ./autogen.sh
autoreconf-2.72: export WARNINGS=
autoreconf-2.72: Entering directory '.'
autoreconf-2.72: configure.ac: not using Gettext
autoreconf-2.72: running: aclocal -I m4
autoreconf-2.72: configure.ac: tracing
autoreconf-2.72: configure.ac: not using Libtool
autoreconf-2.72: configure.ac: not using Intltool
autoreconf-2.72: configure.ac: not using Gtkdoc
autoreconf-2.72: running: /usr/bin/autoconf-2.72
configure.ac:88: warning: The preprocessor macro `STDC_HEADERS' is obsolete.
configure.ac:88:   Except in unusual embedded environments, you can safely include all
configure.ac:88:   ISO C90 headers unconditionally.
configure.ac:123: warning: The macro 'AC_TRY_COMPILE' is obsolete.
configure.ac:123: You should run autoupdate.
../autoconf-2.72/lib/autoconf/general.m4:2845: AC_TRY_COMPILE is expanded from...
../autoconf-2.72/lib/m4sugar/m4sh.m4:690: _AS_IF_ELSE is expanded from...
../autoconf-2.72/lib/m4sugar/m4sh.m4:697: AS_IF is expanded from...
../autoconf-2.72/lib/autoconf/general.m4:2249: AC_CACHE_VAL is expanded from...
../autoconf-2.72/lib/autoconf/general.m4:2270: AC_CACHE_CHECK is expanded from...
m4/asmcfi.m4:1: GCC_AS_CFI_PSEUDO_OP is expanded from...
configure.ac:123: the top level
configure.ac:438: warning: LT_PATH_LD is m4_require'd but not m4_defun'd
acinclude.m4:149: LIBFFI_CHECK_LINKER_FEATURES is expanded from...
acinclude.m4:255: LIBFFI_ENABLE_SYMVERS is expanded from...
configure.ac:438: the top level
autoreconf-2.72: running: /usr/bin/autoheader-2.72
autoreconf-2.72: running: automake --add-missing --copy --no-force
configure.ac:31: installing './compile'
configure.ac:19: installing './install-sh'
configure.ac:19: installing './missing'
Makefile.am:39: error: Libtool library used but 'LIBTOOL' is undefined
Makefile.am:39:   The usual way to define 'LIBTOOL' is to add 'LT_INIT'
Makefile.am:39:   to 'configure.ac' and run 'aclocal' and 'autoconf' again.
Makefile.am:39:
Makefile.am:39:   If 'LT_INIT' is in 'configure.ac', make sure
Makefile.am:39:   its definition is in aclocal's search path.
Makefile.am:39:
Makefile.am:39:   If you install Automake in its own prefix,
Makefile.am:39:   you'll need to arrange for the Libtool m4 files
Makefile.am:39:   to be found by aclocal.  For info on this, see:
Makefile.am:39:     https://gnu.org/s/automake/manual/automake.html#Libtool-library-used-but-LIBTOOL-is-undefined
Makefile.am: installing './depcomp'
doc/Makefile.am:3: installing 'doc/mdate-sh'
doc/Makefile.am:3: installing 'doc/texinfo.tex'
autoreconf-2.72: error: automake failed with exit status: 1

$ pacman -S libtool
resolving dependencies...
looking for conflicting packages...

Packages (2) libltdl-2.5.3-1  libtool-2.5.3-1

Total Download Size:   0.43 MiB
Total Installed Size:  2.37 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 libltdl-2.5.3-1-x86_64                40.6 KiB  45.4 KiB/s 00:01 [###################################] 100%
 libtool-2.5.3-1-x86_64               403.4 KiB   382 KiB/s 00:01 [###################################] 100%
 Total (2/2)                          444.0 KiB   389 KiB/s 00:01 [###################################] 100%
(2/2) checking keys in keyring                                    [###################################] 100%
(2/2) checking package integrity                                  [###################################] 100%
(2/2) loading package files                                       [###################################] 100%
(2/2) checking for file conflicts                                 [###################################] 100%
(2/2) checking available disk space                               [###################################] 100%
:: Processing package changes...
(1/2) installing libltdl                                          [###################################] 100%
(2/2) installing libtool                                          [###################################] 100%
:: Running post-transaction hooks...
(1/1) Updating the info directory file...

$ ./autogen.sh
autoreconf-2.72: export WARNINGS=
autoreconf-2.72: Entering directory '.'
autoreconf-2.72: configure.ac: not using Gettext
autoreconf-2.72: running: aclocal -I m4
autoreconf-2.72: configure.ac: tracing
autoreconf-2.72: running: libtoolize --copy
libtoolize: putting auxiliary files in '.'.
libtoolize: copying file './ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
autoreconf-2.72: configure.ac: not using Intltool
autoreconf-2.72: configure.ac: not using Gtkdoc
autoreconf-2.72: running: aclocal -I m4
autoreconf-2.72: running: /usr/bin/autoconf-2.72
configure.ac:88: warning: The preprocessor macro `STDC_HEADERS' is obsolete.
configure.ac:88:   Except in unusual embedded environments, you can safely include all
configure.ac:88:   ISO C90 headers unconditionally.
configure.ac:123: warning: The macro 'AC_TRY_COMPILE' is obsolete.
configure.ac:123: You should run autoupdate.
../autoconf-2.72/lib/autoconf/general.m4:2845: AC_TRY_COMPILE is expanded from...
../autoconf-2.72/lib/m4sugar/m4sh.m4:690: _AS_IF_ELSE is expanded from...
../autoconf-2.72/lib/m4sugar/m4sh.m4:697: AS_IF is expanded from...
../autoconf-2.72/lib/autoconf/general.m4:2249: AC_CACHE_VAL is expanded from...
../autoconf-2.72/lib/autoconf/general.m4:2270: AC_CACHE_CHECK is expanded from...
m4/asmcfi.m4:1: GCC_AS_CFI_PSEUDO_OP is expanded from...
configure.ac:123: the top level
autoreconf-2.72: running: /usr/bin/autoheader-2.72
autoreconf-2.72: running: automake --add-missing --copy --no-force
autoreconf-2.72: Leaving directory '.'

$ ls configure
configure

The configure file was created successfully. Now I could run bash configure per the libffi instructions.

mkdir -p /d/temp/libffi

bash configure \
   CC="/d/repos/libffi/msvcc.sh -m64" \
   CXX="/d/repos/libffi/msvcc.sh -m64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   --disable-docs \
   --prefix=/d/temp/libffi

Of course this wasn’t going to just work:

configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
continue configure in default builddir "./x86_64-w64-mingw32"
....exec /bin/sh ../configure "--srcdir=.." "--enable-builddir=x86_64-w64-mingw32" "mingw32"
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether sleep supports fractional seconds... yes
checking filesystem timestamp resolution... 0.01
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking xargs -n works... yes
checking for gcc... /d/repos/libffi/msvcc.sh -m64
checking whether the C compiler works... no
configure: error: in '/d/repos/libffi/x86_64-w64-mingw32':
configure: error: C compiler cannot create executables
See 'config.log' for more details

The instructions at jdk/make/devkit/createLibffiBundle.sh at c3de94cee12471a11c457c11dd55c547633de5cb · openjdk/jdk look incomplete compared to those at libffi/libffi at c6f1610509d3d146017d6cc30020ce334bde8425. I added the LD, CPP, and CXXCPP values below but got the same error.

bash configure \
   CC="/d/repos/libffi/msvcc.sh -m64" \
   CXX="/d/repos/libffi/msvcc.sh -m64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/d/temp/libffi

I tried that command in Cygwin now that a configure file was present:

bash configure \
   CC="/cygdrive/d/repos/libffi/msvcc.sh -m64" \
   CXX="/cygdrive/d/repos/libffi/msvcc.sh -m64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/cygdrive/d/temp/libffi

In Cygwin, that command failed with “configure: error: cannot run /bin/sh ./config.sub“. What could be going wrong in the configure script? The M365 Copilot prompt “change build system type in msys2” pointed me to gcc – Configuration x86_64-pc-msys not supported – Stack Overflow, but those flags seem unnecessary given my platform. I tried removing some of the compiler setting flags to no avail:

$ time bash configure CPPFLAGS="-DFFI_BUILDING_DLL" CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP"    --disable-docs    --prefix=/d/temp/libffi
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
continue configure in default builddir "./x86_64-w64-mingw32"
....exec /bin/sh ../configure "--srcdir=.." "--enable-builddir=x86_64-w64-mingw32" "mingw32"
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether sleep supports fractional seconds... yes
checking filesystem timestamp resolution... 0.01
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking xargs -n works... yes
checking for gcc... no
checking for cc... no
checking for cl.exe... no
checking for clang... no
configure: error: in '/d/repos/libffi/x86_64-w64-mingw32':
configure: error: no acceptable C compiler found in $PATH
See 'config.log' for more details

The config.log file is in the x86_64-w64-mingw32 folder in the repo root. What I should have verified before trying any of this was that I could run cl.exe in MSYS. That was the primary reason for launching ucrt64.exe from a developer command prompt. Unfortunately, that didn’t work for whatever reason.

user@machine UCRT64 /d/repos/libffi
$ cl
-bash: cl: command not found

user@machine UCRT64 /d/repos/libffi
$ echo $PATH
/ucrt64/bin:/usr/local/bin:/usr/bin:/bin:/c/Windows/System32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0/:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl

I tried manually fixing the path as follows but this didn’t work (cl.exe could still not be found):

export PATH="$PATH:/c/Program\ Files/Microsoft\ Visual\ Studio/2022/Enterprise/VC/Tools/MSVC/14.43.34808/bin/Hostx64/x64/"
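My guess at why this failed: inside double quotes, bash keeps the backslash escapes literally, so the appended PATH entry contains `\ ` sequences that do not match the real directory name. A quick way to see it:

```shell
# Inside double quotes, backslash-escaped spaces stay literal, so the
# appended component is "Program\ Files", not "Program Files".
p="/c/Program\ Files/Microsoft\ Visual\ Studio"
echo "$p"
if printf '%s' "$p" | grep -q '\\'; then
  echo "contains literal backslashes"
fi
```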

The dir command can show the short name equivalents of file names, e.g. dir /x "C:\Program Files".

dir /x C:\
...
05/24/2025  11:42 AM    <DIR>          PROGRA~1     Program Files
04/09/2025  01:31 AM    <DIR>          PROGRA~2     Program Files (x86)
...

dir /x "C:\Program Files"
11/30/2023  04:40 PM    <DIR>          MIB055~1     Microsoft Visual Studio

Maybe the path will work better with this format?

export PATH="$PATH:/c/PROGRA~1/MIB055~1/2022/Enterprise/VC/Tools/MSVC/14.43.34808/bin/Hostx64/x64/"

Sure enough, I could now find cl.exe and the configure script worked!

$ where cl.exe
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.43.34808\bin\Hostx64\x64\cl.exe
$ bash configure \
   CC="/d/repos/libffi/msvcc.sh -m64" \
   CXX="/d/repos/libffi/msvcc.sh -m64" \
   CPPFLAGS="-DFFI_BUILDING_DLL" \
   LD=link CPP="cl -nologo -EP" CXXCPP="cl -nologo -EP" \
   --disable-docs \
   --prefix=/d/temp/libffi
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
continue configure in default builddir "./x86_64-w64-mingw32"
....exec /bin/sh ../configure "--srcdir=.." "--enable-builddir=x86_64-w64-mingw32" "mingw32"
configure: loading site script /etc/config.site
checking build system type... x86_64-w64-mingw32
checking host system type... x86_64-w64-mingw32
checking target system type... x86_64-w64-mingw32
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether sleep supports fractional seconds... yes
checking filesystem timestamp resolution... 0.01
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking xargs -n works... yes
checking for gcc... /d/repos/libffi/msvcc.sh -m64
checking whether the C compiler works... yes
checking for C compiler default output file name... conftest.exe
checking for suffix of executables... .exe
checking whether we are cross compiling... no
checking for suffix of object files... obj
checking whether the compiler supports GNU C... no
checking whether /d/repos/libffi/msvcc.sh -m64 accepts -g... yes
checking for /d/repos/libffi/msvcc.sh -m64 option to enable C11 features... unsupported
checking for /d/repos/libffi/msvcc.sh -m64 option to enable C99 features... unsupported
checking for /d/repos/libffi/msvcc.sh -m64 option to enable C89 features... unsupported
checking whether /d/repos/libffi/msvcc.sh -m64 understands -c and -o together... yes
checking whether make supports the include directive... yes (GNU style)
checking dependency style of /d/repos/libffi/msvcc.sh -m64... none
checking whether the compiler supports GNU C++... no
checking whether /d/repos/libffi/msvcc.sh -m64 accepts -g... yes
checking for /d/repos/libffi/msvcc.sh -m64 option to enable C++11 features... unsupported
checking for /d/repos/libffi/msvcc.sh -m64 option to enable C++98 features... unsupported
checking dependency style of /d/repos/libffi/msvcc.sh -m64... none
checking dependency style of /d/repos/libffi/msvcc.sh -m64... none
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking how to print strings... printf
checking for a sed that does not truncate output... /usr/bin/sed
checking for fgrep... /usr/bin/grep -F
checking for non-GNU ld... link
checking if the linker (link) is GNU ld... no
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... no, using cp -pR
checking the maximum length of command line arguments... 8192
checking how to convert x86_64-w64-mingw32 file names to x86_64-w64-mingw32 format... func_convert_file_msys_to_w32
checking how to convert x86_64-w64-mingw32 file names to toolchain format... func_convert_file_msys_to_w32
checking for link option to reload object files... -r
checking for file... file
checking for objdump... objdump
checking how to recognize dependent libraries... file_magic ^x86 archive import|^x86 DLL
checking for dlltool... dlltool
checking how to associate runtime and link libraries... func_cygming_dll_for_implib
checking for ranlib... ranlib
checking for ar... ar
checking for archiver @FILE support... @
checking for strip... strip
checking command to parse /usr/bin/nm -B output from /d/repos/libffi/msvcc.sh -m64 object... ok
checking for sysroot... no
checking for a working dd... /usr/bin/dd
checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1
checking for mt... no
checking if : is a manifest tool... no
checking for stdio.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for strings.h... no
checking for sys/stat.h... yes
checking for sys/types.h... yes
checking for unistd.h... no
checking for dlfcn.h... no
checking for objdir... .libs
checking for /d/repos/libffi/msvcc.sh -m64 option to produce PIC... -DDLL_EXPORT -DPIC
checking if /d/repos/libffi/msvcc.sh -m64 PIC flag -DDLL_EXPORT -DPIC works... yes
checking if /d/repos/libffi/msvcc.sh -m64 static flag  works... yes
checking if /d/repos/libffi/msvcc.sh -m64 supports -c -o file.obj... yes
checking if /d/repos/libffi/msvcc.sh -m64 supports -c -o file.obj... (cached) yes
checking whether the /d/repos/libffi/msvcc.sh -m64 linker (link) supports shared libraries... yes
checking dynamic linker characteristics... Win32 ld.exe
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking how to run the C++ preprocessor... cl -nologo -EP
checking whether the /d/repos/libffi/msvcc.sh -m64 linker (link) supports shared libraries... no
checking for /d/repos/libffi/msvcc.sh -m64 option to produce PIC... -DDLL_EXPORT -DPIC
checking if /d/repos/libffi/msvcc.sh -m64 PIC flag -DDLL_EXPORT -DPIC works... yes
checking if /d/repos/libffi/msvcc.sh -m64 static flag  works... yes
checking if /d/repos/libffi/msvcc.sh -m64 supports -c -o file.obj... yes
checking if /d/repos/libffi/msvcc.sh -m64 supports -c -o file.obj... (cached) yes
checking whether the /d/repos/libffi/msvcc.sh -m64 linker (link) supports shared libraries... no
checking dynamic linker characteristics... Win32 ld.exe
checking how to hardcode library paths into programs... immediate
checking for readelf... readelf
checking size of size_t... 8
checking for C compiler vendor... microsoft
checking whether C compiler accepts  -O2... yes
checking CFLAGS for most reasonable warnings...
checking whether to enable maintainer-specific portions of Makefiles... no
checking for sys/memfd.h... no
checking for memfd_create... no
checking for egrep... (cached) /usr/bin/grep -E
checking for memcpy... no
checking for alloca.h... no
checking size of double... 8
checking size of long double... 8
checking whether byte ordering is bigendian... no
checking assembler .cfi pseudo-op support... no
checking assembler supports pc related relocs... yes
checking whether compiler supports pointer authentication... no
checking for _ prefix in compiled symbols... no
configure: versioning on shared library symbols is no
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating include/Makefile
config.status: creating include/ffi.h
config.status: creating Makefile
config.status: creating testsuite/Makefile
config.status: creating man/Makefile
config.status: creating doc/Makefile
config.status: creating libffi.pc
config.status: creating fficonfig.h
config.status: executing buildir commands
config.status: create top_srcdir/Makefile guessed from local Makefile
config.status: build in x86_64-w64-mingw32 (HOST=)
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing include commands
config.status: executing src commands

I could now run make as instructed by the readme. Here is the tail of the resulting output:

...
libtool: link: /d/repos/libffi/msvcc.sh -m64 -o .libs/libffi-8.dll  src/.libs/prep_cif.obj src/.libs/types.obj src/.libs/raw_api.obj src/.libs/java_raw_api.obj src/.libs/closures.obj src/.libs/tramp.obj src/x86/.libs/ffiw64.obj src/x86/.libs/win64_intel.obj   -m64 -O2   `func_echo_all "" | /usr/bin/sed 's/ -lc$//'` -link -dll
libtool: link: linknames=
libtool: link: true
libtool: link: ( cd ".libs" && rm -f "libffi.la" && cp -pR "../libffi.la" "libffi.la" )
make[3]: Leaving directory '/d/repos/libffi/x86_64-w64-mingw32'
make[2]: Leaving directory '/d/repos/libffi/x86_64-w64-mingw32'
make[1]: Leaving directory '/d/repos/libffi/x86_64-w64-mingw32'
MAKE x86_64-pc-mingw64 : 0 * all-configured
make[1]: Entering directory '/d/repos/libffi/x86_64-w64-mingw32'
make[1]: *** No rule to make target 'all-configured'.  Stop.
make[1]: Leaving directory '/d/repos/libffi/x86_64-w64-mingw32'
make: *** [Makefile:3782: all-configured] Error 2

Although the build appeared to have failed, the .DLL, .LIB, and .h files I needed had been generated!

$ ls -1 x86_64-w64-mingw32/.libs/
libffi-8.dll
libffi-8.exp
libffi-8.lib
libffi.la
libffi.lai
libffi_convenience.la
libffi_convenience.lib

I manually copied these files to set up the libffi repo for building OpenJDK (the expected LIB filename does not have the -8 suffix by default). I’m guessing make install or something like that is the proper way to do this but I had what I needed so this was good enough for me.

$ cp -r x86_64-w64-mingw32/.libs lib/
$ cp -r x86_64-w64-mingw32/include/ include/
$ cp lib/libffi-8.lib lib/libffi.lib
$ cp lib/libffi-8.dll lib/libffi.dll

This is the test function OpenJDK’s configure uses to validate the libffi installation.

#include <ffi.h>

int main() {
  ffi_call(NULL, NULL, NULL, NULL);
  return 0;
}

I tested this file to ensure I could compile it. Compiling alone (with -c) worked, but compiling and linking in one step (cl.exe without the -c option) failed.

$ cl -c -I include ffi_test.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34810 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

ffi_test.c

$ cl -I include ffi_test.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34810 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

ffi_test.c
Microsoft (R) Incremental Linker Version 14.43.34810.0
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:ffi_test.exe
ffi_test.obj
ffi_test.obj : error LNK2019: unresolved external symbol __imp_ffi_call referenced in function main
ffi_test.exe : fatal error LNK1120: 1 unresolved externals

I tried manually running link.exe but this failed because the wrong link.exe was being called.

$ where link.exe
C:\software\msys64\usr\bin\link.exe
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.43.34808\bin\Hostx64\x64\link.exe

Prepending the compiler path to $PATH resolved this.
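PATH lookup is first-match-wins, which is why appending left MSYS’s link.exe winning and prepending fixed it. A portable sketch of that behavior (using hypothetical stub scripts, not the real tools):

```shell
# Two stub "link" commands standing in for MSYS's and MSVC's link.exe.
d=$(mktemp -d)
mkdir -p "$d/msys" "$d/msvc"
printf '#!/bin/sh\necho "MSYS link"\n' > "$d/msys/link"
printf '#!/bin/sh\necho "MSVC link"\n' > "$d/msvc/link"
chmod +x "$d/msys/link" "$d/msvc/link"

# env searches the PATH it is given, left to right:
env PATH="$d/msys:$d/msvc" link   # prints: MSYS link
env PATH="$d/msvc:$d/msys" link   # prints: MSVC link
```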

$ cl -I include ffi_test.c -link -libpath:lib libffi.lib
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34810 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

ffi_test.c
Microsoft (R) Incremental Linker Version 14.43.34810.0
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:ffi_test.exe
-libpath:lib
libffi.lib
ffi_test.obj

$ ./ffi_test.exe

At this point, things were in good enough shape to build OpenJDK. However, I could no longer successfully run bash configure ... in Cygwin to build OpenJDK. Perhaps it’s because I had been mucking around with the Cygwin setup. I tried removing automake and libtool but that didn’t fix the problem.

setup-x86_64.exe -?
setup-x86_64.exe -q -x automake
setup-x86_64.exe -q -x libtool

This was when I uninstalled and reinstalled Cygwin to get everything to work again.


Categories: Games

Using Stockfish with En Croissant

It has been at least a decade since I analyzed chess positions using software. Someone mentioned Stockfish as the latest hotness in this space so I have downloaded Stockfish 17.1. I was pleased to learn that it is open source! The GitHub – official-stockfish/Stockfish: A free and strong UCI chess engine docs state that it does not include a GUI but the Download and usage wiki has a list of free chess GUIs. I’m drawn to En Croissant – The Ultimate Chess Toolkit because it is also open source.

Main En Croissant Window

With both installed, I just need to recall the details of FEN (Forsyth-Edwards Notation). The only file that En Croissant can open though is a PGN file (briefly described at Representations of Chess: FEN, PGN, and Bitboards – Chess.com). I created a FEN for the position I wanted to explore:

KR1r/n7/k/8/8/8/8/8 w - - 0 1

The “Analysis Board” command looks like the place to enter it. Click on the “Edit Position” button.

En Croissant Analysis Board Window

A FEN section appears with a textbox for your FEN. You can click on the EMPTY button to clear the board.

En Croissant Edit Position Window

I pasted my FEN into the FEN position text box, then tabbed out of it, and it just reverted to the empty board. Very unintuitive behavior: I accidentally discovered that I need to press ENTER in that text box to accept the new position. An error then appeared about this being an invalid position. It would have been nice for that validation to happen instead of my entry being deleted on TAB.

Invalid board

This is the corrected FEN:

KR1r4/n7/k7/8/8/8/8/8 w - - 0 1
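The fix was padding each rank out to eight squares. A tiny validator (my own sketch, checking only the board field) makes the difference between the two FENs obvious:

```shell
# Minimal FEN board-field validator (sketch): checks only that there
# are 8 ranks and that each rank expands to exactly 8 squares.
validate_fen_board() {
  board=${1%% *}   # keep just the piece-placement field
  ok=yes
  n_ranks=0
  old_ifs=$IFS; IFS=/
  for rank in $board; do
    n_ranks=$((n_ranks + 1))
    squares=0
    i=0
    while [ "$i" -lt "${#rank}" ]; do
      c=${rank:$i:1}
      case $c in
        [1-8]) squares=$((squares + c)) ;;
        [pnbrqkPNBRQK]) squares=$((squares + 1)) ;;
        *) echo "bad character '$c'"; ok=no ;;
      esac
      i=$((i + 1))
    done
    if [ "$squares" -ne 8 ]; then
      echo "rank '$rank' has $squares squares"
      ok=no
    fi
  done
  IFS=$old_ifs
  if [ "$n_ranks" -ne 8 ]; then
    echo "expected 8 ranks, got $n_ranks"
    ok=no
  fi
  if [ "$ok" = yes ]; then echo ok; else echo invalid; fi
}

validate_fen_board "KR1r/n7/k/8/8/8/8/8 w - - 0 1"    # two ranks are short
validate_fen_board "KR1r4/n7/k7/8/8/8/8/8 w - - 0 1"  # prints: ok
```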

This left me confident that I could get the computer to help me analyze this position.

“Play from Here” Button
Play from Here Screen

I want to have an engine play the position but none are available. The Download and usage · official-stockfish/Stockfish Wiki · GitHub says to go to the Engines tab to select an engine.

Your Engines

I select “C:\software\stockfish-windows-x86-64-avx2\stockfish\stockfish-windows-x86-64-avx2.exe” as per the wiki.

Add Engine
Stockfish 17.1 Engine Added

Now that I’ve set the engine to play, why aren’t the arrows labeled? I actually switched to Arena (see below) before returning with a renewed determination to get this to work. On the “Play from Here” screen, notice that the Engine button now displays Stockfish 17.1. Perhaps it’s the UI that’s confusing; it looks like a bunch of controls just dumped into a panel. Switch both controls to Engine and adjust the time as desired. I’m using a 3s limit to keep things moving. Finally, click on “Start Game” and watch the engine battle itself.

Here are a few other FEN positions (from various Facebook posts) to play with:

KR1r4/n7/k7/8/8/8/8/8 w - - 0 0

kr6/1r1N4/2Q5/8/8/8/8/K7 w - - 0 0

8/7p/1k4p1/4K3/6P1/8/8/8 w - - 0 0

k7/b7/2K5/4B3/8/8/8/Q7 w - - 0 0

Arena (3.5.1)

I had actually installed Arena before En Croissant but didn’t like that it wasn’t open source. I initially struggled getting the engine to play the game in En Croissant and went back to Arena.

Arena 3.5.1 Setup Wizard

I was pleasantly surprised that I could just load the FEN by going to Position > Load FEN from clipboard then have the computer play by going to Game > Move Now.

However, I didn’t like how long it took to make a move. After some exploration, I found that I could go to Levels > Adjust and change from Tournament mode to Time per move mode. It was ironic that I was using Arena without Stockfish for my analysis but still wanted to document my exploration of En Croissant. Perhaps it was this experience with the time setting in Arena that allowed me to adjust the time settings for the engine in En Croissant and actually see the Stockfish engine in action!

Improvements?

I wonder if I could add this Load FEN from clipboard option to En Croissant since it is open source. Such a command should automatically open the Analysis Board window. As I’m wrapping up this post, I just noticed the error below when I switched back to the En Croissant window.

Minified React Error

Making that UI change could be a good entry for me into the React world.


Categories: Networks

Introduction to Networks – Part IV

Cellular networks are a ubiquitous part of modern daily life. The history of cellular technology is definitely worth knowing, even if only at the high level presented in this video on the Evolution of Mobile Standards (1G, 2G, 3G, 4G, and 5G).

Evolution of Mobile Standards [1G, 2G, 3G, 4G, 5G]

2G introduced digital modulation and came in variants like TDMA, CDMA, and GSM. 3G uses spread spectrum in the radio portion of the network whereas 4G uses Orthogonal Frequency Division Multiplexing (OFDM). 4G also separates the user and control planes whereas they were on the same hardware in 3G (and therefore couldn’t scale independently). The following videos from Sunny Classroom are brief but helpful explanations of these communications concepts.

FHSS – Frequency Hopping Spread Spectrum
DSSS – Direct Sequence Spread Spectrum
OFDM – Orthogonal Frequency Division Multiplexing

5G offers lower end-to-end latency and higher uplink and downlink throughput than 4G because it has more bands (low, mid, and high) vs just low and mid with 4G. It is also a programmable network, which lets developers access network stats via APIs.

Mobility

The discussion in the class proceeded to mobility, introducing the concept of cellular handoff, which can broadly be classified into mobile assisted and mobile controlled handover. See Handoff in Wireless Mobile Networks for more details. Another classification of types of handover is based on when the UE disconnects from one cell: hard handoff vs soft handoff. Soft handoff ensures that calls are not dropped. The professor was drawing hexagonal cells when illustrating these and I realized I had no idea why they are hexagonal. Here’s why:

antennas in a coverage area are in a hexagonal pattern… because it requires fewer cells to represent a hexagon compared to triangle or square – meaning network carriers can cover a wider area with less base stations. – The Fundamentals of Cellular System Design
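A quick sanity check of that claim: take the antenna’s range as a circle of radius r and inscribe each candidate cell shape in it. The hexagon covers the most area per cell, so it needs the fewest cells for a given total area:

```shell
# Areas of regular shapes inscribed in a circle of radius r = 1.
awk 'BEGIN {
  r = 1
  tri = 3 * sqrt(3) / 4 * r ^ 2   # equilateral triangle, ~1.299
  sq  = 2 * r ^ 2                 # square, 2.000
  hex = 3 * sqrt(3) / 2 * r ^ 2   # regular hexagon, ~2.598
  printf "triangle=%.3f square=%.3f hexagon=%.3f\n", tri, sq, hex
}'
```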

The transmissions to and from the base stations can be done via frequency division duplexing or time division duplexing. See Frequency Division vs. Time Division Duplexing in Wireless Communications or the video below for more details.

2.1 – TDD VS FDD IN LTE 4G Updated

LTE

Next, we took a closer look at the architectural details of 4G. The key components in the LTE packet core are the Serving Gateway (SGW), PDN Gateway (PGW), Mobility Management Entity (MME), Policy and Charging Rules Function (PCRF), and Home Subscriber Server (HSS). Their relationship is explained on this LTE (4G) Network Architecture page. See this 4G Architecture: LTE Network Elements and Interfaces page as well. These videos also cover the basics of LTE:

3.1 – LTE 4G ARCHITECTURE BASICS – INTRODUCTION
4G LTE Technology Overview

An interesting aspect of the LTE packet core is that a 5G base station can be attached to it. Contrast this mode, known as 5G non-standalone to 5G standalone mode, where a 5G radio is connected to a 5G packet core. See this post on Non-standalone and Standalone: two standards-based paths to 5G for a detailed review of these modes. One advantage of the 5G packet core is that it allows for cloud-based implementations. The Boost Mobile Network, for example, is 100% implemented in the cloud. Is AWS set to flex cloud on telecom? has a discussion of such a transition (to the cloud) in the telecom space.

The antennas used on the base stations can be of multiple types, e.g. SISO and MIMO. MIMO Antennas Explained: An In-Depth Guide provides more details on the differences between these designs. A key benefit of MIMO is that it mitigates the performance degradation caused by multipath wave propagation.

We also dug into the LTE and 5G network evolution, from network deployment, to network growth, then finally coverage and capacity optimization. Deployment may involve dual-radio in UEs and EPC capabilities to support interoperability with earlier generations like 2G/3G. 5G is currently in the deployment phase since 5G SA has not yet been fully rolled out. Network growth may involve cell splitting in the RAN for capacity as well as expansion of the core network. Coverage and capacity optimization may involve spectrum aggregation, advanced network topologies, and advanced antenna techniques.

The continued growth in application and device diversity, RAN complexity, and QoS variance is making networks more complex and thus harder to optimize under the current network management paradigm. Self-organizing networks (SON) were designed to address this problem. Here is an overview of SON.

3GPP SON Series: An Introduction to Self-Organizing Networks (SON)

SON is used to set many required configuration parameters when introducing a new eNB or gNB to a network e.g. IP addresses from DHCP, transmit power, beam width, supported connections, connecting to neighboring base stations via the X2 (4G) or XN (5G) connection, etc. SON can also be used for driving energy savings by shutting down carriers when less capacity is required e.g. in the middle of the night (without dropping emergency calls). Another application is coverage and capacity optimization, which involves adjusting transmission power and continuously adjusting antenna tilt to increase capacity (thus decreasing coverage) or to increase coverage (thus decreasing capacity). Mobility handover optimization is also required to avoid too early/too late handover or a ping pong between base stations. The SON architecture can be centralized, distributed, or hybrid.

5G

Finally, we took a look at 5G technology, which has much lower latencies, much higher throughput, and high capacity. Some of the key technologies I learned about include Fixed Wireless Access, in which the UE does not move, and mmWave. More info on the latter is available at What is mmWave and how does it relate to 5G? 5G also supports modified air interfaces (modified OFDM), massive MIMO, device-to-device communication, separated user and control planes, and network virtualization.

An important capability that 5G introduced is positioning, which has many potential use cases e.g. industrial, automotive, and AR/VR. See 5G positioning: What you need to know for more details. In the industrial setting, for example, 5G all-in-one boxes are deployed in 5G private networks. They have a base station and a packet core in a single piece of hardware, e.g. RAK All-in-One 5G box (the first one in the search results).

The 5G core network architecture is significantly different from the LTE packet core (eNB, SGW, PGW, MME, HSS, and PCRF). It moved to a service-based architecture where microservices expose functionality via APIs. This makes the 5G network programmable and extensible. This 5G System Overview covers the overall 5G architecture. These are a few of the 5G components:

Summary

This is the final post in the Introduction to Networks series of posts. It has been an extremely enlightening course. I appreciated how much more extensive it was than I expected from an introductory course, especially as I try to stay on top of this fast-moving tech space.


Building Windows AArch64 OpenJDK using Visual Studio Build Tools

The 8353009: Improve documentation for Windows AArch64 builds PR has a comment stating that “the BuildTools distribution of Visual Studio do not include aarch64-hosted compilers, so to be able to run native builds without the Prism emulation, you need to install the full Visual Studio, including the IDE.” This post describes how I determined this to be false.

I started by looking up how to install Visual Studio Build Tools. The first result I examined was Install Visual Studio Build Tools into a container to support a consistent build system, which gave me the command line format for installing the build tools. These tools are available on the Visual Studio 2022 Release History page. I wanted a minimal set of components to install, so I started with the ARM64 tools and the Windows 11 SDK:

vs_buildtools.exe --quiet --wait --norestart --nocache --installPath "%ProgramFiles(x86)%\Microsoft Visual Studio\2022\BuildTools" --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.Windows11SDK.22621

Running bash configure --with-boot-jdk=<PATH> failed with the error that it could not find a C compiler:

...
checking for cacerts file... default
checking for cacerts source... default
checking for --enable-unlimited-crypto... enabled, default
checking for jni library path... default
configure: Using default toolchain microsoft (Microsoft Visual Studio)
configure: Found Visual Studio installation at /cygdrive/c/progra~2/micros~2/2022/BuildTools using well-known name
configure: Found Microsoft Visual Studio 2022
configure: Trying to extract Visual Studio environment variables for aarch64
configure: using /cygdrive/c/progra~2/micros~2/2022/BuildTools/vc/auxiliary/build/vcvarsarm64.bat
configure: Setting extracted environment variables for aarch64
checking that Visual Studio variables have been correctly extracted... ok
checking for cl... [not found]
configure: error: Could not find a C compiler.
configure exiting with result code 1

The “cl” compiler name it is searching for came from the TOOLCHAIN_DETERMINE_TOOLCHAIN_TYPE macro in jdk/make/autoconf/toolchain.m4. The actual error message (Could not find a C compiler) comes from the TOOLCHAIN_FIND_COMPILER macro, which is invoked by the TOOLCHAIN_DETECT_TOOLCHAIN_CORE macro. The $COMPILER_NAME variable in the error message is the second argument in the TOOLCHAIN_FIND_COMPILER([CC], [C], $TOOLCHAIN_CC_BINARY) call. These macros are invoked from the top level configure.ac file.

The TOOLCHAIN_FIND_COMPILER macro calls the UTIL_LOOKUP_TOOLCHAIN_PROGS macro to find the C compiler. I verified that the last argument is “cl” with an AC_MSG_NOTICE. At this point, I compared the TOOLCHAIN_PATH in config.log with that on a different ARM64 machine with a full VS install. Sure enough, it didn’t contain the bin/hostarm64/arm64 path with the buildtools setup, even though the path exists on disk. TOOLCHAIN_PATH is coming from VS_PATH in toolchain_microsoft.m4. Here is the build\windows-aarch64-server-slowdebug\configure-support\vs-env-aarch64\set-vs-env.sh file.

PATH_BEFORE=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'C:\windows\system32;C:\cygwin64\usr\local\bin;C:\cygwin64\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files\Git\cmd;C:\Users\USER\AppData\Local\Microsoft\WindowsApps ') 
PATH_AFTER=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'c:\PROGRA~2\MICROS~2\2022\BuildTools\MSBuild\Current\bin\Roslyn;C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\\arm64;C:\Program Files (x86)\Windows Kits\10\bin\\arm64;c:\PROGRA~2\MICROS~2\2022\BuildTools\\MSBuild\Current\Bin\amd64;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;c:\PROGRA~2\MICROS~2\2022\BuildTools\Common7\IDE\;c:\PROGRA~2\MICROS~2\2022\BuildTools\Common7\Tools\;C:\windows\system32;C:\cygwin64\usr\local\bin;C:\cygwin64\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files\Git\cmd;C:\Users\USER\AppData\Local\Microsoft\WindowsApps ') 
VS_INCLUDE=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt ') 
VS_LIB=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\arm64;C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\arm64 ') 
VCINSTALLDIR=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import ' ') 
VCToolsRedistDir=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import ' ') 
WindowsSdkDir=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'C:\Program Files (x86)\Windows Kits\10\ ') 
WINDOWSSDKDIR=$($BASH $TOPDIR/make/scripts/fixpath.sh -i import 'C:\Program Files (x86)\Windows Kits\10\ ') 

Notice that VS_PATH only has what VS_ENV_CMD added to the PATH! This was a clue that I needed to take another step back – I realized that I couldn’t even run cl.exe in the developer command prompt! Then again, the command line behind that terminal is:

cmd.exe /k "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\Tools\VsDevCmd.bat" -startdir=none -arch=arm64 -host_arch=x64

Changing the host architecture to arm64 did not help. I launched the VS installer and noticed that the “Desktop development with C++” workload was not installed so I must have been missing additional components.

Visual Studio Build Tools 2022 LTSC 17.12 Workloads

I didn’t want to install the whole workload though, just the necessary individual components. I noticed the C++ Build Tools core features component wasn’t installed so I selected it. The Windows Universal C Runtime component is automatically selected as well:

Visual Studio Build Tools 2022 LTSC 17.12 Individual Components

Once the installation completed, I could run cl.exe in the developer command prompt!

**********************************************************************
** Visual Studio 2022 Developer Command Prompt v17.12.7
** Copyright (c) 2022 Microsoft Corporation
**********************************************************************

C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>cl
Microsoft (R) C/C++ Optimizing Compiler Version 19.42.34441 for ARM64
Copyright (C) Microsoft Corporation.  All rights reserved.

usage: cl [ option... ] filename... [ /link linkoption... ]

C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>

The VS installer log in %TEMP% contained these components:

Microsoft.VisualStudio.Component.Roslyn.Compiler,Microsoft.Component.MSBuild,Microsoft.VisualStudio.Component.CoreBuildTools,Microsoft.VisualStudio.Component.Windows10SDK,Microsoft.VisualStudio.Component.VC.CoreBuildTools,Microsoft.VisualStudio.Component.Windows11SDK.22621,Microsoft.VisualStudio.Component.VC.Tools.ARM64EC,Microsoft.VisualStudio.Component.VC.Tools.ARM64

This led me to the minimal set of components that I needed to build OpenJDK on a Windows AArch64 machine with the Visual Studio Build Tools:

vs_buildtools.exe --quiet --wait --norestart --nocache ^
--installPath "%ProgramFiles(x86)%\Microsoft Visual Studio\2022\BuildTools" ^
--add Microsoft.VisualStudio.Component.VC.CoreBuildTools ^
--add Microsoft.VisualStudio.Component.VC.Tools.ARM64 ^
--add Microsoft.VisualStudio.Component.Windows11SDK.22621

Copilot informed me that the caret was the way to split a command across multiple lines in the Windows Command Prompt. This was the final command I used to complete the 8353009: Improve documentation for Windows AArch64 builds PR.


Loss Functions and Gradient Descent 101

While learning about large language models, the issue of vanishing gradients came up. What is a gradient? I attempted to describe it as the difference between where you are and where you want to be (the target), which is given by the loss function. This led to the question of what exactly a loss function is. The video below from IBM Technology explains loss functions: a loss function is an evaluation metric (how well is the model performing) and/or a guide that directs the model’s learning process.

What is a Loss Function? Understanding How AI Models Learn

The primary reason for calculating the loss function is to guide the model’s learning process. It provides a numeric value that indicates how far off the model’s predictions are from the actual results. By analyzing the loss, the model’s parameters can be adjusted (optimization) since the loss function is a feedback mechanism to the model, telling it how well it is performing and where it needs to improve. – What is a Loss Function? Understanding How AI Models Learn

A smaller value of the loss function indicates that the performance of the model has improved.

A loss function can also be used as input to an algorithm that influences the model parameters to minimize loss, e.g. gradient descent. – What is a Loss Function? Understanding How AI Models Learn

The gradient of the loss function is useful because it enables algorithms to determine which adjustments (e.g. to weights) will result in a smaller loss. The next video on Gradient descent, how neural networks learn is a helpful introduction to how loss functions are used to guide learning.

Gradient descent, how neural networks learn | DL2
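To make the idea of gradient descent concrete, here is a minimal sketch of my own (a toy 1-D example, not taken from the video): a squared-error loss, its derivative, and repeated steps opposite the gradient.

```python
# Minimal gradient descent on a 1-D squared-error loss: L(w) = (w*x - y)^2.
# Toy example only; real models have millions of parameters.
def loss(w, x, y):
    return (w * x - y) ** 2

def gradient(w, x, y):
    # dL/dw = 2 * (w*x - y) * x
    return 2 * (w * x - y) * x

w = 0.0          # initial parameter
x, y = 2.0, 6.0  # single training example (the optimal w here is 3)
lr = 0.05        # learning rate

for _ in range(100):
    w -= lr * gradient(w, x, y)  # step opposite the gradient to reduce loss

print(round(w, 3))  # converges toward 3.0
```

Each step shrinks the loss, which is exactly the "feedback mechanism" role of the loss function described above.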

Backpropagation is the algorithm used to compute the gradient. This video from 3Blue1Brown is a helpful explanation of what backpropagation is:

Backpropagation, intuitively | DL3

Two important phenomena in gradient descent are the problems of vanishing and exploding gradients. The Vanishing & Exploding Gradient explained | A problem resulting from backpropagation video describes these problems as follows: vanishing gradients mean that the weights earlier in the network barely change when updated (they are stuck), which means that the network cannot really minimize the loss function (i.e. learn). Exploding gradients mean that the earlier weights increase so much that the optimal value of the loss function will never be achieved because the weights become too big too quickly.

Vanishing & Exploding Gradient explained | A problem resulting from backpropagation
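The arithmetic behind both problems is simple to sketch (my own illustration with made-up per-layer derivatives): backpropagation multiplies one local derivative per layer, so the product shrinks geometrically when each factor is below 1 and blows up when each factor is above 1.

```python
# Backprop multiplies one local derivative per layer. If each factor is
# below 1 (e.g. sigmoid's maximum slope of 0.25), the gradient reaching
# early layers shrinks geometrically with depth; factors above 1 explode.
def chained_gradient(local_derivs):
    g = 1.0
    for d in local_derivs:
        g *= d
    return g

shallow = chained_gradient([0.25] * 3)     # 3 layers
deep = chained_gradient([0.25] * 30)       # 30 layers
exploding = chained_gradient([1.5] * 30)   # factors > 1 instead

print(shallow)    # 0.015625
print(deep)       # ~8.7e-19: early layers barely receive any signal
print(exploding)  # ~1.9e5: early weights change far too much per step
```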

Categories: Virtualization

Ubuntu Guest Screen Resolution on Hyper-V Host

In Ubuntu VM Setup for OpenJDK Development, I configured the Ubuntu guest to use a 1680×1050 resolution by editing the /etc/default/grub file, running update-grub, and then rebooting. 1680 horizontal pixels now seems restrictive given that I want to look at side-by-side diffs in VSCode. I updated the grub file to 1920×1440 and rebooted but ended up with a 1024×768 window (I think; it was small). I suspected that 1920×1440 was not a supported resolution. My search for “supported resolutions for hyperv ubuntu” led me to a great solution posted in How to adjust virtual machine display resolution to adapt to full screen – Microsoft Q&A using the Set-VMVideo command:

Set-VMVideo -VMName "vm1ubuntu" -HorizontalResolution 1920 -VerticalResolution 1440 -ResolutionType Single

This command needs to be executed as an administrator and the virtual machine needs to be turned off to avoid this error:

C:\> Set-VMVideo -VMName "vm1ubuntu" -HorizontalResolution 1920 -VerticalResolution 1080 -ResolutionType Single
Set-VMVideo : Failed to modify device 'Synthetic Display Controller'.
'vm1ubuntu' failed to modify device 'Synthetic Display Controller'. (Virtual machine ID
1CF95D1E-C608-4B94-BA26-C73110C2B107)
The vm1ubuntu virtual machine must be turned off when setting resolution type, horizontal resolution or vertical
resolution. (Virtual machine ID 1CF95D1E-C608-4B94-BA26-C73110C2B107)
At line:1 char:1
+ Set-VMVideo -VMName "vm1ubuntu" -HorizontalResolution 1920 -VerticalR ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Set-VMVideo], VirtualizationException
    + FullyQualifiedErrorId : InvalidState,Microsoft.HyperV.PowerShell.Commands.SetVMVideo

This is a much cleaner way to set the screen resolution, so I’m glad the poster went back and added this solution.


Categories: Networks

Introduction to Networks – Part III

The focus of part 3 of this series is on the different types of wired networks. A key aspect of many networks is that QoS is just one concern; another key issue is how to meet delivery guarantees for voice services. How is voice delivered? Integrated Services Digital Network (ISDN) is an international standard for voice, video, and data transmission over digital telephone or copper lines. It has two service levels. The first is the Basic Rate Interface (BRI), which supports 2 bearer channels at 64 kbps each and 1 D channel at 16 kbps. The second is the Primary Rate Interface (PRI), which supports 23 bearer channels (in the US) at 64 kbps each and 1 D channel at 64 kbps. The signaling/data (D) channel runs the ISDN signaling protocol based on Q.931. This video is a good high-level introduction to ISDN. T1 and ISDN are used in access networks, together with technologies like IP and MPLS.

ISDN – Integrated Services Digital Network
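The channel totals can be checked with quick arithmetic (my own sketch; note that the PRI D channel runs at 64 kbps, the same rate as a bearer channel):

```python
# ISDN aggregate rates in kbps. BRI: 2B + 1D; PRI (US): 23B + 1D.
B = 64             # bearer (B) channel rate, kbps
bri = 2 * B + 16   # BRI D channel is 16 kbps
pri = 23 * B + 64  # PRI D channel is 64 kbps

print(bri)  # 144 kbps
print(pri)  # 1536 kbps; with framing overhead this fills a 1.544 Mbps T1
```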

Optical Networks

In newer generations of networks, the core is fiber (instead of copper) because it can deliver terabits per second. Installation and management of fiber networks is also much easier than copper networks. Fiber optic signals are analog (in the infrared range).

What is the ELECTROMAGNETIC SPECTRUM

Light sources used for fiber optic communication include light-emitting diodes (LEDs), laser diodes, vertical cavity surface emitting lasers (VCSELs), Fabry-Perot lasers, and distributed feedback lasers.

How LED Works – Unravel the Mysteries of How LEDs Work!

The packet transport network is another key piece to understand. Customers send traffic to metro access, aggregation, and core portions of the network where voice and data are converged. In the packet core, wavelengths are being added and dropped by add-drop multiplexers. There are several types of ADMs with links to explanations about them from various vendors:

  1. Fixed OADM (FOADM)
  2. Reconfigurable OADM (ROADM)
  3. Flexible ROADM
  4. Open ROADM, which works to address the fact that optical systems have been proprietary (e.g. SD FEC algorithms on transponders are not interoperable, and there are proprietary control loops between transponders and other optical components).

The next video gave me a better understanding of customer concerns with ROADMs and FOADMs.

Tutorial: To ROADM or Not to ROADM: When does a FOADM make sense in your optical network?

Other major types of network components include amplifiers, regenerators, and equalization nodes. Transponders map client side signals to wavelengths for high speed transport. They can be contained in a field-replaceable unit (FRU). Common types of pluggable optics include SFP+ (Small Form-factor Pluggable), CFP4, and QSFP28. Amplification is an analog process of boosting signal strength and is done in the optical domain (no conversion to electrical). Any impairments in the signal are boosted as well. A single pump laser is used for this. Regeneration can reshape and retime the optical signal but requires conversion to the electrical domain then back to the optical domain, making it more expensive to implement.

How a Fiber Laser Works

Major types of amplifiers in optical networks include the EDFA (Erbium Doped Fiber Amplifier), the Raman amplifier, and the hybrid Raman-EDFA amplifier. These are great explanations of these amplifiers:

Working Principle of Erbium Doped Fiber Amplifier (EDFA)
The EDFA – how it was developed.

The Wavelength Selective Switch (WSS) was first implemented using MEMS but didn’t work well because the hinges would fail. Liquid Crystal on Silicon (LCoS) is now commonly used to implement WSS since it has no moving parts. It can also support Flexgrid.

What is LCoS Based Wavelength Selective Switch – FO4SALE.COM

Optical patch panels are another component in fiber networks. They are used to join optical fibers where a connect/disconnect capability is required.

Handling Failure

There are 2 types of protection in networks:

  1. Network protection: ensures that customer SLAs are met despite failures in the network. Optical protection examples include mesh restoration (GMPLS, SDN), SNCP (OTN), UPSR & BLSR for SONET, and 1+1 or 1:1 circuits (active vs inactive backup circuit). Packet protection examples include MPLS fast reroute, LAG, G.8031, and G.8032.
  2. Equipment protection: focuses on protecting individual nodes.

I can’t emphasize this enough: this is such a broad field with so many technologies! What an introduction to networking!


Categories: Networks

Introduction to Networks – Part II

The previous post introduced different types of networks and some of their architectural details. In this post, we look at the biggest problem network engineers work on: congestion. How are networks designed to address it? The professor starts tackling this area with a discussion of Quality of Service (QoS). Quality is defined in terms of the underlying requirements e.g. throughput, delay, jitter, packet loss, service availability, and per-flow sequence preservation. Services can be best effort, or other classes like gold service. Cisco’s Quality of Service (QoS) document discusses four levels of metal policy (platinum, gold, silver, and bronze), for example.

Class of Service (CoS) is a traffic classification that enables different actions to be taken on individual classes of traffic. Contrast this to type of service (ToS), which is a specific field in the IPv4 header (used to implement CoS). Juniper Networks post on Understanding Class of Service (CoS) Profiles equates QoS and CoS, but the professor explains that QoS is a more abstract concept than CoS.

QoS is a set of actions that the network takes to deliver the right delay, throughput, etc. QoS timeframes affect the way congestion is handled. For example, scheduling and dropping techniques and per-hop queuing are useful for the low millisecond time regime common in web traffic. Congestion over hundreds of milliseconds typically affects TCP (e.g. round trip times, closed-loop feedback) and this is addressed via methods like active queue management (AQM) and congestion control techniques like random early detection (RED). Congestion that occurs in the tens of seconds to minutes range is addressed by capacity planning.

How is QoS achieved in the data and control planes? By queuing, scheduling, policing, and dropping. The roles of the data and control planes are quite extensive as per the router diagram used to describe them. This is without getting into the details of the management plane e.g. the element management systems (per node) and the network management systems they communicate with. Control plane QoS mechanisms handle admission control and resource reservation and are typically implemented in software. Resource Reservation Protocol (RSVP) is the protocol mostly used in practice for control plane QoS. There are many explanations on RSVP, e.g. this Introduction to RSVP and this RSVP Overview. The primary QoS architectures are integrated services (Intserv) and differentiated services (Diffserv). Intserv uses RSVP and although it doesn’t scale, it is useful when guaranteed service is required.

We start a deep dive into the QoS techniques with queuing. There are different types of queues: first come first served (FCFS/FIFO), priority queues, and weighted queues. Packet schedulers can have a mix of these approaches, e.g. 1 priority queue and N other weighted queues. Performance modeling can be done on queues. For voice traffic, the distribution of the arrival rate of traffic is a Poisson distribution. Therefore, the delay of packets and the length of the queue can be accurately modeled/predicted! See M/M/1 queues as a starting point (M/M/1 is Kendall notation and is more fully described in the next video).

Queuing Theory Tutorial – Queues/Lines, Characteristics, Kendall Notation, M/M/1 Queues
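The standard closed-form M/M/1 results make the "accurately predicted" claim concrete. Here is a sketch with toy numbers of my own choosing (Poisson arrivals at rate λ, exponential service at rate μ):

```python
# M/M/1 queue predictions: Poisson arrivals (lam), exponential service (mu).
# Standard closed-form results, shown with made-up traffic numbers.
lam = 80.0   # packets/sec arriving
mu = 100.0   # packets/sec the link can serve

rho = lam / mu       # utilization
L = rho / (1 - rho)  # mean number of packets in the system
W = 1 / (mu - lam)   # mean time in the system; Little's law: L = lam * W

print(rho)  # 0.8
print(L)    # ~4 packets on average
print(W)    # 0.05 seconds
```

Note how non-linear the growth is: at 80% utilization the average occupancy is already 4 packets, and it diverges as λ approaches μ.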

Data Plane QoS Mechanisms

These data plane QoS mechanisms are applied at each network node: classification, marking, policing and shaping, prioritization, minimum rate assurance. Below are more details about each.

Classification

This is the process of identifying flows of packets and grouping individual traffic flows into aggregated streams so that actions can be applied to those streams. Up to this point, I have had a vague idea of what a flow is but not a proper definition. The instructor defines a flow as a 5-tuple: source and destination IP addresses, source and destination TCP/UDP ports, and the transport protocol. What is a Network Traffic Flow? discusses various ways of defining a flow, and this is just one of many. Classification needs to avoid fragmentation because the 5-tuple information is only in the first packet. There are 4 ways of classifying traffic:

  1. Simple classification – the use of fields designed for QoS classification in IP headers e.g. the type of service (TOS) byte in IPv4. There are complications with using the DTRM bits of the TOS (e.g. minimizing delay and maximizing throughput could conflict).
  2. Implicit classification – done without inspecting packet header or content, e.g. by examining layer 1 or 2 identifiers.
  3. Complex classification – using fields not designed for QoS classification or layer 2 criteria like MAC addresses.
  4. Deep packet/stateful inspection – examination of actual payload and/or stateful inspection of a sequence of packets.
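The 5-tuple definition above is easy to sketch in code (the packet field names here are my own illustration, not from any router OS): packets sharing the same 5-tuple belong to the same flow.

```python
# Grouping packets into flows by the 5-tuple the course defines:
# (src IP, dst IP, src port, dst port, transport protocol).
from collections import defaultdict

def flow_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
            pkt["dst_port"], pkt["proto"])

flows = defaultdict(int)  # bytes seen per flow
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5060,
     "dst_port": 5060, "proto": "UDP", "size": 200},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5060,
     "dst_port": 5060, "proto": "UDP", "size": 200},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 44321,
     "dst_port": 443, "proto": "TCP", "size": 1500},
]
for pkt in packets:
    flows[flow_key(pkt)] += pkt["size"]

print(len(flows))  # 2 distinct flows
```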

Marking/Coloring

This is simply setting the fields assigned for QoS classification in IP packet headers (DSCP field) or MPLS packet headers (EXP field).
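Concretely, the DSCP field is the top 6 bits of the IPv4 ToS byte (the low 2 bits are ECN), so marking is just bit manipulation. A small sketch of my own:

```python
# Marking = writing the DSCP codepoint into the top 6 bits of the ToS byte.
EF = 46  # Expedited Forwarding, commonly used for voice traffic

def mark_dscp(tos_byte, dscp):
    return (dscp << 2) | (tos_byte & 0b11)  # preserve the 2 ECN bits

def read_dscp(tos_byte):
    return tos_byte >> 2

tos = mark_dscp(0, EF)
print(tos)             # 184 (0xB8)
print(read_dscp(tos))  # 46
```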

Rate Enforcement

This is done to avoid congestion. Policing is a mechanism to ensure that a traffic stream does not exceed a defined maximum rate. It stands in contrast to shaping, which is typically accomplished by queuing (it delays traffic but never drops it). One type of policer is the token bucket policer. It never delays traffic and cannot reorder or reprioritize traffic. See Cisco’s Policing and Shaping Overview and QoS Policing documents for details. This is one of the rate limiting algorithms discussed in the video below (I found this video’s explanation more intuitive).

Five Rate Limiting Algorithms ~ Key Concepts in System Design
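A token bucket policer is compact enough to sketch directly (my own toy implementation with made-up rate and depth, not any vendor's): tokens refill at the committed rate up to the bucket depth, and a packet conforms only if enough tokens are available.

```python
# Token bucket policer sketch. It never delays traffic: non-conforming
# packets are dropped (or re-marked), matching the policing behavior above.
class TokenBucket:
    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes/sec
        self.depth = depth_bytes     # maximum burst size
        self.tokens = depth_bytes    # bucket starts full
        self.last = 0.0

    def conforms(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False

tb = TokenBucket(rate_bps=8000, depth_bytes=1500)  # 1 kB/s, 1500-byte bucket
print(tb.conforms(1500, now=0.0))  # True: bucket starts full
print(tb.conforms(1500, now=0.1))  # False: only 100 bytes refilled
print(tb.conforms(1000, now=1.1))  # True: 100 + 1000 bytes available
```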

The single rate three color marker and the two rate three color marker are two rate limiting approaches. Traffic metering can be implemented using such policers.

Prioritization

The next stage is prioritization of the traffic. There are 4 possible approaches. With priority queues, e.g. where VoIP traffic always has the highest priority, other queues can be starved by the scheduler. Weighted round robin takes more packets from the high priority queues but still cycles through the other queues, taking fewer packets from them. Weighted bandwidth scheduling considers packet sizes instead of just packet counts per queue (e.g. taking just 1 packet from a low priority queue can have a negative impact if the packet is huge). Deficit round robin is the one used in practice. It keeps track of the history of the number of packets serviced, not just instantaneous values. The next video expands on these brief explanations of scheduling algorithms.

How Do Schedulers in Routers Work? Understanding RR, WRR, WFQ, and DRR Through Simple Examples
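Here is a minimal deficit round robin sketch of my own (the queues, packet sizes, and quanta are made-up values): each queue earns a quantum of bytes per round and banks unused credit, so fairness is byte-based rather than packet-based.

```python
# Deficit round robin: each queue gets a quantum of bytes per round and
# banks unused credit (the deficit), so byte counts drive fairness.
from collections import deque

def drr(queues, quanta, rounds):
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            deficits[i] += quanta[i]
            # Send packets while the head of the queue fits in the deficit.
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
            if not q:
                deficits[i] = 0  # an empty queue does not bank credit
    return sent

voice = deque([("v1", 200), ("v2", 200)])   # small voice packets
bulk = deque([("b1", 1500), ("b2", 1500)])  # large data packets
order = drr([voice, bulk], quanta=[500, 500], rounds=6)
print(order)  # ['v1', 'v2', 'b1', 'b2']
```

The small voice packets drain immediately, while each large packet must wait for the bulk queue's deficit to accumulate across rounds.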

One of the points that came up in discussion was that the schedulers use run-to-completion scheduling, which means that a packet must be fully processed before starting on another packet. Routers have an interface FIFO (Tx buffer) on the physical link. When it fills up, this signals to the scheduler that there may be congestion downstream, thereby allowing for back pressure flow control. There is also multi-level strict priority queuing, which allows for multiple priority queues instead of just 1 (e.g. voice & video), but it is not as common today.

Routers also drop packets to prevent unacceptable delays caused by buffering too many packets. There are different dropping strategies, e.g. tail dropping (dropping from the back of the queue), weighted tail dropping (>1 queue limit via heuristics), and head dropping (rare).

Active queue management (AQM) is a congestion avoidance technique. It works by detecting congestion before queues overflow. These are some techniques for AQM:

  1. Random early detection (RED), which prevents TCP global synchronization
  2. Weighted random early detection
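The RED drop decision can be sketched in a few lines (my own sketch with made-up thresholds): below the minimum threshold nothing drops, above the maximum everything drops, and in between the drop probability ramps linearly, which breaks up the synchronized TCP backoff.

```python
# Random early detection sketch: drop probability as a function of the
# average queue depth, ramping linearly between min and max thresholds.
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * ((avg_queue - min_th) / (max_th - min_th))

print(red_drop_probability(10, min_th=20, max_th=80, max_p=0.1))  # 0.0
print(red_drop_probability(50, min_th=20, max_th=80, max_p=0.1))  # 0.05
print(red_drop_probability(90, min_th=20, max_th=80, max_p=0.1))  # 1.0
```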

These QoS mechanisms operate in the context of an overriding architecture, integrated services (Intserv) or differentiated services (Diffserv). IntServ can be used in the financial industry or medical health facilities, for example. These are delay sensitive applications where unbounded scaling is not a real requirement. IntServ explicitly manages bandwidth resources on a per flow basis. DiffServ was developed to support (as the name suggests) differentiated treatment of packets in large scale environments. It does this using a 6-bit differentiated services code point (DSCP) in the IPv4 ToS header or the IPv6 traffic class octet. Classification and conditioning happen at the edge of the DiffServ domain. Actions are performed on behavior aggregates (contrast this to the per flow actions of IntServ). The next technology we learn about is Multiprotocol Label Switching, defined as follows on Wikipedia:

Multiprotocol Label Switching is a routing technique in telecommunications networks that directs data from one node to the next based on labels rather than network addresses.

MPLS is similar to IntServ in that it lets you define an end-to-end path through the network for traffic, but without reserving resources. Packets are forwarded hop by hop along a pre-established label-switched path, in contrast to IP, which makes independent next-hop routing decisions without regard to the end-to-end path taken by the packets. MPLS can be deployed on any layer 2 technology (multiprotocol). Benefits of MPLS include fast rerouting in case of failures and QoS support. One of the settings in which MPLS is used is SD-WAN. This article provides a helpful contrast: What is the difference between SD-WAN and MPLS? These are the main applications of MPLS:

  1. Traffic Engineering: allows the network administrator to make the path deterministic (normal hop-by-hop routing is not). In other words, a frame forwarding policy can be used instead of relying on dynamic routing protocols.
  2. QoS: the MPLS EXP bits are used for marking traffic per the labels.

This is quite the array of topics, especially for an introduction to networks course. I have a greater appreciation of how broad this space is.


Categories: Networks

Introduction to Networks

I’m taking an online introductory course on networks. I have been surprised by how much ground this course is covering. I didn’t expect to cover wireless (mobile) networks, for example. I looked for videos on some of the topics to learn more, e.g. 4g network architecture – YouTube. Networking is turning out to be much cooler and more interesting than I thought possible. This post is a compilation of all the key topics introduced in the course (in the general order they were introduced, but not particularly organized into a coherent story).

My main takeaway from this first video is that 4G networks are entirely packet switched (basic, but new to me).

4G LTE Network Architecture Simplified

The next video on how messages are transmitted to the cell phone tower is insightful as well. I appreciated the high-level discussion of antennas.

How WiFi and Cell Phones Work | Wireless Communication Explained

The concept of control plane and data plane came up as well. One advantage of this separation as per the overview below are independent evolution and development of each (e.g. control software can be upgraded without changing the hardware).

M2.1: Overview of Control and Data Plane Separation

There are so many concepts in this space, most of them new to me, e.g. OAM, NMS, and EMS. Some places they are discussed include LTE Architecture Concepts, Differences Between an NMS and an EMS, and this video on Management Plane vs. Control Plane vs. Data Plane. We briefly got into the differences between 4G and 5G, one being the service-based architecture. Here’s a video I found introducing it:

5G Service Based Architecture | Telecoms Bytes – Mpirical

Then of course there are the fundamental concepts of throughput, delay, and packet loss. Jim Kurose’s book (and video below) covers these topics but it’s been a while since I read that book.

The professor also clarified the difference between bandwidth and throughput. The next video briefly touches on this distinction:

The course has also introduced me to the concept of spectral efficiency as part of understanding the difference between bandwidth and throughput. There is no shortage of concepts to learn about, from the different types of lines like T1 and T3 to bit robbing to the existence of network interface devices. The video below is a good intro to T1.

DS1 (T1) Fundamentals

There was also a discussion about cable networks, with an onslaught of concepts like Hybrid fiber-coaxial. This Cable 101 video is a helpful resource.

The HFC Cable Systems Introduction video below starts out with a comparison of coax and fiber then explains the flow of signals from the core network to the home.

HFC Cable Systems Introduction

I still need to learn more about the Cable modem termination system (CMTS), and the next resource is perfect. It mentions CMTS vendors like Arris, Cisco, and Motorola, which inspires me to look up the Cisco CMTS.

Cable Modem Termination System Tutorial (CMTS)

I have never researched how most of these systems work so I am greatly appreciating this introduction to networks course! Here’s a video on how cable modems work, including their interactions with the CMTS.

How Cable Modems Work

The communication between the CMTS and the CMs is done via DOCSIS. Here is the reference I found with insight into DOCSIS.

DOCSIS® 3.1 – An Overview

Something I picked up is that CableLabs does a lot of the research for these systems. Other concepts to know include wavelength-division multiplexing (WDM), the optical counterpart of the frequency multiplexing used in traditional coax networks. The following explanation is an example of WDM in fiber.

What is WDM (Wavelength Division Multiplexer)?

The next technology described is DSL (Digital subscriber line). With DSL, the last mile is not shared (unlike cable networks). It evolved into ADSL and VDSL to support higher throughput. It’s interesting that it uses Asynchronous Transfer Mode (ATM) from back in the day. We also briefly introduce passive optical networks.

PON, What is a PON? All you need to know!

Next, we get into the 7-layer OSI model. The example given for the physical layer is SONET technology. Another foray into T1 technology reveals that bipolar transmission is used for T1 since it is more power efficient.

Multiplexing is the next interesting topic introduced. I have included some videos below on the different types of multiplexing employed in communications.

  1. Frequency-division multiplexing (FDM): message signals are modulated onto separate carrier frequencies, then bandpass filters extract the individual signals.
  2. Time-division multiplexing (TDM): one variant is statistical TDM, which was a first step toward IP.
  3. Wavelength-division multiplexing (WDM): the optical analog of FDM, with each signal carried on its own wavelength of light.
Frequency Division Multiplexing (FDM) Explained
Time Division Multiplexing (TDM) | Synchronous, Asynchronous, Statistical TDM | Computer Networks
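The round-robin byte interleaving behind synchronous TDM can be sketched in a few lines of Python (the channel data here is made up for illustration):

```python
# Synchronous TDM: each frame carries one byte from every channel, in a
# fixed round-robin order, so the receiver needs no per-byte addressing.
channels = [b'AAAA', b'BBBB', b'CCCC']

# Multiplex: interleave one byte per channel per frame
muxed = b''.join(bytes(frame) for frame in zip(*channels))

# Demultiplex: recover channel k by taking every Nth byte, offset by k
demuxed = [muxed[k::len(channels)] for k in range(len(channels))]

print(muxed)    # b'ABCABCABCABC'
print(demuxed)  # [b'AAAA', b'BBBB', b'CCCC']
```

Statistical TDM differs in that slots are assigned on demand rather than in this fixed rotation, which is why each unit of data then needs an address — the step toward packet switching mentioned above.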

The course also addresses transmission fundamentals like the difference between bit rate and baud rate, the Shannon–Hartley theorem, the Nyquist–Shannon sampling theorem, modulation, modems, and codecs. I have compiled a few videos covering these topics below.

Here is an explanation of the Shannon–Hartley theorem:

Channel Capacity by Shannon-Hartley | Basics, Proof & Maximum Bandwidth Condition
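As a quick sanity check of the theorem, here is the capacity of a classic analog voice line (roughly 3100 Hz of bandwidth at about 30 dB SNR — illustrative numbers, not from the course):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear ratio."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

capacity_bps = shannon_capacity(3100, 30)
print(f"{capacity_bps:.0f} bps")  # ~30,900 bps, in the ballpark of dial-up modem rates
```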

The intuition behind the Nyquist–Shannon sampling theorem is explained in the next video:

The intuition behind the Nyquist-Shannon Sampling Theorem
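The sampling theorem reduces to a simple rule of thumb, sketched below with audio numbers (my own example, not from the course): sample at least twice as fast as the highest frequency present, or the signal aliases to a lower apparent frequency.

```python
# Nyquist-Shannon: to reconstruct a signal whose highest frequency is
# f_max, sample at a rate of at least 2 * f_max.
f_max_hz = 20_000        # upper edge of human hearing
fs_min = 2 * f_max_hz    # 40 kHz minimum (CD audio uses 44.1 kHz)

def alias_frequency(f_hz, fs_hz):
    """Apparent frequency of a tone at f_hz when sampled at rate fs_hz."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

# A 30 kHz tone sampled at 44.1 kHz violates the criterion and aliases:
alias_hz = alias_frequency(30_000, 44_100)
print(fs_min, alias_hz)  # 40000 14100
```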

The concept of modulation comes next:

What is Modulation ? Why Modulation is Required ? Types of Modulation Explained.

Other concepts introduced include the constellation diagram and quadrature amplitude modulation (QAM). The following videos introduce these two concepts:

What is a Constellation Diagram?
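The two ideas connect directly: a QAM constellation is just a grid of points in the I/Q plane, and the number of points determines the bits carried per symbol. A minimal 16-QAM sketch:

```python
import math

# 16-QAM: symbols take in-phase (I) and quadrature (Q) amplitudes from
# {-3, -1, +1, +3}, giving a 4x4 grid of 16 constellation points.
levels = (-3, -1, 1, 3)
constellation = [complex(i, q) for i in levels for q in levels]

bits_per_symbol = int(math.log2(len(constellation)))  # log2(16) = 4 bits
avg_power = sum(abs(s) ** 2 for s in constellation) / len(constellation)

print(len(constellation), bits_per_symbol, avg_power)  # 16 4 10.0
```

This is why baud rate and bit rate differ: each 16-QAM symbol (one baud) carries 4 bits, so the bit rate is four times the symbol rate.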

We then start getting into network addressing. One of the important concepts here is how the exhaustion of IPv4 addresses is handled: private IP addresses, DHCP, subnetting, and IPv6. One particularly interesting point was the difference between IPv4 and IPv6 headers:

IPv4 Header vs IPv6 Header Explained
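Several of these addressing concepts can be poked at directly with Python’s standard `ipaddress` module (the addresses below are my own examples):

```python
import ipaddress

# Private (RFC 1918) addresses are one answer to IPv4 exhaustion
addr = ipaddress.ip_address('192.168.1.10')
print(addr.is_private)  # True

# Subnetting: split a /24 into four /26 subnets
net = ipaddress.ip_network('192.168.1.0/24')
subnets = list(net.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['192.168.1.0/26', '192.168.1.64/26', '192.168.1.128/26', '192.168.1.192/26']

# IPv6 is the long-term answer: 128-bit addresses vs IPv4's 32 bits
print(ipaddress.ip_address('2001:db8::1').max_prefixlen)  # 128
print(ipaddress.ip_address('192.168.1.10').max_prefixlen)  # 32
```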

The history of telecom is also worth knowing. More recent key events are the 1984 Modified Final Judgment and the Telecommunications Act of 1996. (I still need to verify that the playlist I found covers the 1984 Modified Final Judgment.)

In a discussion of the impact of TCP on throughput, the professor called out TCP global synchronization as an issue that networks need to avoid. Here’s one video about it.

Avoiding packet reordering is another important aspect of TCP. The contrast with UDP is especially interesting given newer protocols like Google’s QUIC, which build on top of UDP. The RTP protocol (which typically runs over UDP) is used for VoIP. This is a good description of RTP:

Real-Time Transport Protocol (RTP) in VoIP
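To make the protocol concrete, here is a minimal sketch of the fixed 12-byte RTP header defined in RFC 3550 (the sample field values are mine, chosen to match a G.711 voice stream):

```python
import struct

def build_rtp_header(payload_type, seq, timestamp, ssrc):
    """Pack the fixed 12-byte RTP header (RFC 3550), big-endian."""
    byte0 = 2 << 6                 # version 2, no padding/extension, CSRC count 0
    byte1 = payload_type & 0x7F    # marker bit clear
    return struct.pack('!BBHII', byte0, byte1, seq, timestamp, ssrc)

def parse_rtp_header(data):
    byte0, byte1, seq, timestamp, ssrc = struct.unpack('!BBHII', data[:12])
    return {
        'version': byte0 >> 6,
        'payload_type': byte1 & 0x7F,
        'seq': seq,
        'timestamp': timestamp,
        'ssrc': ssrc,
    }

# Payload type 0 is PCMU (G.711 mu-law); the timestamp advances by 160
# per 20 ms packet at the 8 kHz audio clock.
header = build_rtp_header(0, seq=42, timestamp=160, ssrc=0x1234)
print(parse_rtp_header(header))
```

The sequence number and timestamp fields are what let the receiver detect loss and reordering and play audio out at the right pace, even though UDP underneath guarantees neither.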

The Session Initiation Protocol (SIP) may be used to set up the RTP bearer streams. Here is a high-level overview of SIP.

What is SIP?

The RTP Control Protocol (RTCP) is a related protocol used to provide feedback on quality of service (QoS).