Categories: 3D Printing

Overview of 3D Printing Technologies

I recently bought a 3D printer for some SolidWorks 2023 models I have been working on. While reading about the applications of SolidWorks, I was struggling to motivate myself to get through the overview of 3D printing technologies, so I was thrilled to find this series on Material Science in Additive Manufacturing: Basics. These are the technologies I reviewed (the first two videos are embedded at the end of this post):

  1. Episode 5: Process Principle of Vat Photopolymerization (VP, SLA, DLP) – YouTube
  2. Episode 10: Process Principle of Polymer Powder Bed Fusion (SLS) – YouTube
  3. Episode 16: Process Principle of Material Extrusion (FDM, FFF) – YouTube
  4. Episode 21: Process Principle of Binder Jetting – YouTube
  5. Episode 19: Process Principle of Material Jetting (MJ, DOD) – YouTube
  6. Episode 23: Process Principle of Sheet Lamination (LOM) – YouTube
  7. Episode 25: Process Principle of Directed Energy Deposition (LENS, EBAM) – YouTube

This overview in the book is also the first time I have read about the STL file format. See STL files explained | Learn about the STL file format | Adobe.
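For a sense of what the format looks like, here is a minimal ASCII STL file describing a hypothetical single-triangle solid (real models contain thousands of facets, and large models usually use the binary variant instead). Each facet lists a normal vector and three vertices:

```
solid triangle
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid triangle
```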

Episode 5: Process Principle of Vat Photopolymerization (VP, SLA, DLP)
Episode 10: Process Principle of Polymer Powder Bed Fusion (SLS)

Categories: Containers

Installing Docker on Ubuntu

I was looking for the authoritative way to install Docker on Ubuntu. install docker ubuntu – Google Search points me to Install Docker Engine on Ubuntu | Docker Docs. Running this command shows that none of the packages in the Uninstall old versions section are installed on my Ubuntu VM.

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

Docker Engine comes bundled with Docker Desktop for Linux. This is the easiest and quickest way to get started.

Install Docker Engine on Ubuntu | Docker Docs

The Docker Desktop generic installation steps link to Install Docker Desktop on Ubuntu | Docker Docs. Step 1 is to set up Docker’s package repository.

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
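The repository line embeds the Ubuntu release codename. To see what the $(. /etc/os-release && echo "$VERSION_CODENAME") substitution resolves to on your system, you can run it on its own (assuming a standard /etc/os-release file):

```shell
# Print the release codename apt will use in the Docker repo line,
# e.g. "jammy" on Ubuntu 22.04
. /etc/os-release && echo "$VERSION_CODENAME"
```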

I end up running step 2 as well to install Docker Engine and call it good.

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Skipping the package repository setup step will result in these errors (seen on x64 5.10.102.1-microsoft-standard-WSL2 but all other steps and output are from a VM):

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'docker-ce' has no installation candidate
E: Unable to locate package docker-ce-cli
E: Unable to locate package containerd.io
E: Couldn't find any package by glob 'containerd.io'
E: Couldn't find any package by regex 'containerd.io'
E: Unable to locate package docker-buildx-plugin
E: Unable to locate package docker-compose-plugin

Listing the available containers by running docker ps shows none, but the empty table verifies that Docker is working.

saint@ubuntuvm:~$ sudo docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
saint@ubuntuvm:~$

The hello-world image runs successfully as well.

saint@ubuntuvm:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
719385e32844: Pull complete 
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

saint@ubuntuvm:~$ 

Running Docker in WSL

I followed the above steps to install docker in my Windows Subsystem for Linux Ubuntu 22.04.2 LTS environment. Unfortunately, docker ps does not work.

saint@mymachine:~$ sudo docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
saint@mymachine:~$ 

linux – Docker not running on Ubuntu WSL due to error cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? – Stack Overflow suggests running sudo dockerd. Here is the tail end of the output, including an error.

ERRO[2023-10-17T09:09:19.059240012-06:00] failed to initialize a tracing processor "otlp"  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-10-17T09:09:19.059460691-06:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-10-17T09:09:19.059530687-06:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-10-17T09:09:19.059629051-06:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2023-10-17T09:09:19.059665540-06:00] containerd successfully booted in 0.025117s
INFO[2023-10-17T09:09:19.114570236-06:00] [graphdriver] using prior storage driver: overlay2
INFO[2023-10-17T09:09:19.114803099-06:00] Loading containers: start.
INFO[2023-10-17T09:09:19.297993571-06:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2023-10-17T09:09:19.298958219-06:00] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2023-10-17T09:09:19.299104948-06:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain:  (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.7 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
 (exit status 4))

I start by searching for the first error, failed to initialize a tracing processor “otlp” error=”no OpenTelemetry endpoint: skip plugin” – Google Search, instead of the last. That search turns up Failed to start docker on WSL · Issue #8450 · microsoft/WSL (github.com), which has the solution:

edit /etc/default/docker and add DOCKER_OPTS="--iptables=false"

Failed to start docker on WSL · Issue #8450 · microsoft/WSL (github.com)
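In other words, the fix is a one-line addition to the Docker defaults file (shown here as the relevant fragment of /etc/default/docker; dockerd needs to be restarted afterwards for it to take effect):

```
# /etc/default/docker
# Skip iptables manipulation, which fails on WSL kernels as shown above
DOCKER_OPTS="--iptables=false"
```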

Building OpenJDK with Custom Code Pages

I was recently poking around the Issue Navigator – Java Bug System (openjdk.org) for enhancements. I found this interesting issue: [JDK-8268719] Force execution (and source) code page used when compiling on Windows – Java Bug System (openjdk.org). By default, I can build the OpenJDK code without any changes on my system. What is my code page?

Checking Your Windows Code Page

See Code Pages – Win32 apps for an overview of why code pages exist (or start from Unicode and Character Sets – Win32 apps for the complete picture).

A Windows operating system always has one currently active Windows code page. All ANSI versions of API functions use the currently active code page.

Code Pages – Win32 apps | Microsoft Learn

To see your current ANSI code page, use the reg command from command line – How to see which ANSI code page is used in Windows? – Stack Overflow:

C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Control\Nls\CodePage" -v ACP

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage
    ACP    REG_SZ    1252

C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Control\Nls\CodePage" | findstr /I "CP.*REG_SZ"
    ACP    REG_SZ    1252
    OEMCP    REG_SZ    437
    MACCP    REG_SZ    10000

To change the active code page, go to Control Panel > Region. Click on the “Change system locale…” button in the Administrative tab.

The Region Dialog Box

The Region Settings dialog will pop up. Select a different locale e.g. Japanese (Japan).

Reboot when prompted. You can verify (even before rebooting) that the active and OEM code pages have changed. Locales like Kiswahili (Kenya) and English (India) did not change the code page values (and therefore didn’t prompt to reboot).

C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Control\Nls\CodePage" | findstr /I "CP.*REG_SZ"
    ACP    REG_SZ    932
    OEMCP    REG_SZ    932
    MACCP    REG_SZ    10001
Change System Locale Reboot Dialog

After rebooting, I delete the build directory then configure and build OpenJDK again. This time the build fails with these errors:

ERROR: Build failed for target 'images' in configuration 'windows-x86_64-server-slowdebug' (exit code 2) 
Stopping javac server

=== Output from failing command(s) repeated here ===
* For target hotspot_variant-server_libjvm_gtest_objs_test_json.obj:
test_json.cpp
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(357): error C2143: syntax error: missing ')' before ']'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(355): error C2660: 'JSON_GTest::test': function does not take 1 arguments
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(49): note: see declaration of 'JSON_GTest::test'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(355): note: while trying to match the argument list '(const char [171])'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(357): error C2143: syntax error: missing ';' before ']'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(357): error C2059: syntax error: ']'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(357): error C2017: illegal escape sequence
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(357): error C2059: syntax error: ')'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(363): error C2143: syntax error: missing ')' before ']'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(361): error C2660: 'JSON_GTest::test': function does not take 1 arguments
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(49): note: see declaration of 'JSON_GTest::test'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(361): note: while trying to match the argument list '(const char [174])'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(363): error C2143: syntax error: missing ';' before ']'
d:\java\forks\jdk\test\hotspot\gtest\utilities\test_json.cpp(363): error C2059: syntax error: ']'
   ... (rest of output omitted)

* All command lines available in /cygdrive/d/java/forks/jdk/build/windows-x86_64-server-slowdebug/make-support/failure-logs.
=== End of repeated output ===

No indication of failed target found.
HELP: Try searching the build log for '] Error'.
HELP: Run 'make doctor' to diagnose build problems.

To see the command line, cat the .cmdline file shown below. The full command line is at hotspot_variant-server_libjvm_gtest_objs_test_json.obj.cmdline.

cat /d/java/forks/jdk/build/windows-x86_64-server-slowdebug/make-support/failure-logs/hotspot_variant-server_libjvm_gtest_objs_test_json.obj.cmdline

The Visual C++ compiler’s behavior when reading source files depends on whether or not source files have a byte-order mark.

By default, Visual Studio detects a byte-order mark to determine if the source file is in an encoded Unicode format, for example, UTF-16 or UTF-8. If no byte-order mark is found, it assumes that the source file is encoded in the current user code page, unless you’ve specified a code page by using /utf-8 or the /source-charset option.

/utf-8 (Set source and execution character sets to UTF-8)

This can be easily tested using hexdump in Cygwin. Launch Notepad and open the test.txt file created by these commands. The File > Save As dialog has an Encoding dropdown that writes a byte-order mark for any of the UTF options. Running hexdump again will display the byte-order mark.

echo abc123 > test.txt
hexdump -C test.txt
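You can also write a BOM explicitly and dump the bytes to see exactly what the compiler looks for (a sketch using od, which is available alongside hexdump; the file name bom.txt is arbitrary):

```shell
# Write the UTF-8 byte-order mark (bytes EF BB BF, given here in octal
# escapes for portability) followed by "abc"
printf '\357\273\277abc' > bom.txt
# Dump raw bytes: the first three are the BOM, then 61 62 63 ("abc")
od -An -tx1 bom.txt
```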

Inspecting the OpenJDK source file that fails to build confirms that there is no BOM in the file. (Can this be done on GitHub?)

$ hexdump -C /cygdrive/d/java/forks/jdk/test/hotspot/gtest/utilities/test_json.cpp | head
00000000  2f 2a 0a 20 2a 20 43 6f  70 79 72 69 67 68 74 20  |/*. * Copyright |
...

Updating CFLAGS

Add the -utf-8 option to TOOLCHAIN_CFLAGS_JVM in flags-cflags.m4.

diff --git a/make/autoconf/flags-cflags.m4 b/make/autoconf/flags-cflags.m4
index c0c78ce95b6..bbb0426c368 100644
--- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -560,7 +560,9 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
     TOOLCHAIN_CFLAGS_JVM="-qtbtable=full -qtune=balanced -fno-exceptions \
         -qalias=noansi -qstrict -qtls=default -qnortti -qnoeh -qignerrno -qstackprotect"
   elif test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
-    TOOLCHAIN_CFLAGS_JVM="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -MP"
+    # The -utf-8 option sets source and execution character sets to UTF-8 to enable correct
+    # compilation of all source files regardless of the active code page on Windows.
+    TOOLCHAIN_CFLAGS_JVM="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -MP -utf-8"
     TOOLCHAIN_CFLAGS_JDK="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -Zc:wchar_t-"
   fi

The build still fails but this time the error is from the java.desktop tree.

ERROR: Build failed for target 'images' in configuration 'windows-x86_64-server-slowdebug' (exit code 2) 

=== Output from failing command(s) repeated here ===
* For target support_native_java.desktop_libfreetype_afblue.obj:
afblue.c
d:\java\forks\jdk\src\java.desktop\share\native\libfreetype\src\autofit\afblue.c(1): error C2220: the following warning is treated as an error
d:\java\forks\jdk\src\java.desktop\share\native\libfreetype\src\autofit\afblue.c(1): warning C4819: The file contains a character that cannot be represented in the current code page (932). Save the file in Unicode format to prevent data loss
d:\java\forks\jdk\src\java.desktop\share\native\libfreetype\src\autofit\afscript.h(1): warning C4819: The file contains a character that cannot be represented in the current code page (932). Save the file in Unicode format to prevent data loss
d:\java\forks\jdk\src\java.desktop\share\native\libfreetype\src\autofit\afblue.c(257): warning C4819: The file contains a character that cannot be represented in the current code page (932). Save the file in Unicode format to prevent data loss
   ... (rest of output omitted)
* For target support_native_java.desktop_libfreetype_afcjk.obj:
afcjk.c
...

To see the command line, cat the .cmdline file shown below. The full command line is at support_native_java.desktop_libfreetype_afblue.obj.cmdline.

cat /d/java/forks/jdk/build/windows-x86_64-server-slowdebug/make-support/failure-logs/support_native_java.desktop_libfreetype_afblue.obj.cmdline

TOOLCHAIN_CFLAGS_JDK in flags-cflags.m4 needs the -utf-8 compiler flag as well.

diff --git a/make/autoconf/flags-cflags.m4 b/make/autoconf/flags-cflags.m4
index c0c78ce95b6..8655dfe41fb 100644
--- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -560,8 +560,10 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
     TOOLCHAIN_CFLAGS_JVM="-qtbtable=full -qtune=balanced -fno-exceptions \
         -qalias=noansi -qstrict -qtls=default -qnortti -qnoeh -qignerrno -qstackprotect"
   elif test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
-    TOOLCHAIN_CFLAGS_JVM="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -MP"
-    TOOLCHAIN_CFLAGS_JDK="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -Zc:wchar_t-"
+    # The -utf-8 option sets source and execution character sets to UTF-8 to enable correct
+    # compilation of all source files regardless of the active code page on Windows.
+    TOOLCHAIN_CFLAGS_JVM="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -utf-8 -MP"
+    TOOLCHAIN_CFLAGS_JDK="-nologo -MD -Zc:preprocessor -Zc:strictStrings -Zc:inline -utf-8 -Zc:wchar_t-"
   fi

   # CFLAGS C language level for JDK sources (hotspot only uses C++)

These two changes enable the build to complete successfully. The upstream pull request is 8268719: Force execution (and source) code page used when compiling on Windows by swesonga · Pull Request #15569 · openjdk/jdk (github.com).


Categories: Assembly, Java

Inspecting Code in JITWatch

Developers disassemble! Use Java and hsdis to see it all. (oracle.com) is an excellent introduction to using the HotSpot disassembler to view the instructions generated by HotSpot for a Java program. It also introduces JITWatch.

JITWatch processes the JIT compilation logs that are output by the JVM and explains the optimization decisions made by the JIT compilers.

Developers disassemble! Use Java and hsdis to see it all. (oracle.com)

Let us try using JITWatch on the sample Factorization program I have been using to learn about systems performance. Use these instructions from that blog post to get JITWatch:

git clone https://github.com/AdoptOpenJDK/jitwatch.git
cd jitwatch
mvn clean package
# Produces an executable jar in ui/target/jitwatch-ui-shaded.jar

java -jar ui/target/jitwatch-ui-shaded.jar

Start the factorization sample application such that a hotspot log file is generated. To do so, use the flags listed in the JITWatch Instructions · AdoptOpenJDK/jitwatch Wiki (github.com). I decide to redirect the output to a file to avoid filling the screen with the additional logging output.

$JAVA_HOME/bin/java -XX:+UnlockDiagnosticVMOptions -Xlog:class+load=info -XX:+LogCompilation -XX:+PrintAssembly Factorize 897151542039582592342572091 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 6 > logfile.txt

Loading the HotSpot Log

Click on the “Open Log” button in JITWatch then select the hotspot*.log file. Next, click on the Start button to process the JIT log.

Opening a HotSpot Log File
Processed HotSpot Log
Viewing JIT-compiled Class Members

Clicking on a class member opens another window with the corresponding assembly instructions generated by the JIT. I haven’t set up any source code locations but the assembly instructions are still displayed.

Setting up MVN on Windows

To run JITWatch on Windows, download the Maven binaries from Maven – Download Apache Maven and verify the hashes using certutil. Extract the downloaded .zip file using tar. Here are the instructions I used in Git Bash.

mkdir -p /c/java/binaries/apache
cd /c/java/binaries/apache

curl -Lo apache-maven-3.9.3-bin.zip https://dlcdn.apache.org/maven/maven-3/3.9.3/binaries/apache-maven-3.9.3-bin.zip

certutil -hashfile apache-maven-3.9.3-bin.zip SHA512
# shasum -a 512 apache-maven-3.9.3-bin.zip

tar xf apache-maven-3.9.3-bin.zip

Add MAVEN_HOME to the system PATH environment variable as described at How to Install Maven on Windows {Step-by-Step Guide} (phoenixnap.com) – or run these commands in an admin command prompt. Note that I echo the path first because setx truncates values that are too long: WARNING: The data being saved is truncated to 1024 characters. The previous value will still be on screen if needed. See the pitfalls of setx at setx | Microsoft Learn. The quotes around the new path prevent issues like cmd – Invalid syntax. Default option is not allowed more than ‘2’ time(s) – Stack Overflow.

set MAVEN_HOME=C:\java\binaries\apache\apache-maven-3.9.3
setx /M MAVEN_HOME %MAVEN_HOME%

echo %PATH%
setx /M PATH "%PATH%;%MAVEN_HOME%\bin"

Now build the JITWatch sources in a command prompt:

cd \java\repos\AdoptOpenJDK\jitwatch
C:\java\binaries\apache\apache-maven-3.9.3\bin\mvn clean package

Categories: Linux

Testing Mariner Linux on Windows

I recently needed to do some testing on Mariner. To use the docker images, I first installed Docker Desktop for Windows.

Installing Docker Desktop

One of the options it presented was to use WSL 2 instead of Hyper-V. Searching for wsl 2 vs hyper-v docker windows leads to windows 10 – Docker on Hyper-V vs WSL 2 – Super User. Docker addressed this in their post on The Future of Docker Desktop for Windows. Additional system requirements are listed at Install Docker Desktop on Windows.

Building a Mariner Image

Paste the lines below into a Dockerfile. See the Dockerfile reference for more information about Dockerfile commands.

FROM mcr.microsoft.com/cbl-mariner/base/core:2.0

Build the image by running docker build -t testimage . in the directory containing the Dockerfile. The output looks like this (hashes truncated to 16 characters):

$ docker build -t testimage .
[+] Building 24.3s (5/5) FINISHED                                                                        docker:default
 => [internal] load .dockerignore                                                                                  0.1s
 => => transferring context: 2B                                                                                    0.0s
 => [internal] load build definition from Dockerfile                                                               0.2s
 => => transferring dockerfile: 101B                                                                               0.0s
 => [internal] load metadata for mcr.microsoft.com/cbl-mariner/base/core:2.0                                       0.8s
 => [1/1] FROM mcr.microsoft.com/cbl-mariner/base/core:2.0@sha256:799d8ab777f935bf...  23.1s
 => => resolve mcr.microsoft.com/cbl-mariner/base/core:2.0@sha256:799d8ab777f935bf...  0.0s
 => => sha256:799d8ab777f935bf... 860B / 860B                         0.0s
 => => sha256:567f7e473f79bb91... 949B / 949B                         0.0s
 => => sha256:1f28c8aa4ec798df... 1.93kB / 1.93kB                     0.0s
 => => sha256:9b5d7e56a34b835b... 28.33MB / 28.33MB                  14.9s
 => => sha256:682c69bfe8e8c609... 55.46MB / 55.46MB                  22.3s
 => => sha256:51b2f9e22c65add4... 4.46kB / 4.46kB                     0.4s
 => => extracting sha256:9b5d7e56a34b835b...                          1.3s
 => => extracting sha256:682c69bfe8e8c609...                          0.5s
 => => extracting sha256:51b2f9e22c65add4...                          0.0s
 => exporting to image                                                                                             0.0s
 => => exporting layers                                                                                            0.0s
 => => writing image sha256:92f91ed651632b21e1e7dbc02de1f55140b3ca1f30ad6da29fa4b62f20a6d807                       0.0s
 => => naming to docker.io/library/testimage

Running Docker

To start a container using the image, use the docker run command. For details on the command line options, see docker run | Docker Documentation. The explanation at How to Use Docker Run Command with Examples (phoenixnap.com) was helpful as well.

docker run -i -t testimage

To view the status of the containers on your machine, run docker ps.

docker ps -a
docker ps --filter status=created
docker ps --filter status=exited

Using a Script

To do all this using a single script, paste these commands into a shell script:

mkdir docker
cd docker
echo "FROM mcr.microsoft.com/cbl-mariner/base/core:2.0" > Dockerfile
docker build -t myimage .
docker run -i -t myimage

Copying Files to the Container

Use docker cp as suggested by How to copy files from host to Docker container? – Stack Overflow.

docker ps
docker cp ~/compressed.tar.gz <containerid>:/myfiles

Starting the Container in Detached Mode

It is sometimes essential to have the container run in detached mode, e.g. when you have a single command line interface available (e.g. via SSH) and don’t want to connect to the host again. Start the container using docker run then connect to it using docker attach.

docker run -dit --name mycontainer testimage
docker attach mycontainer

Installing Components in Mariner

I tried to use the tar command to extract a file copied into the container but it outputs bash: tar: command not found. One of the results from install tar on mariner dockerfile – Search (bing.com) is azure-powershell/docker/Dockerfile-mariner-2 at main · Azure/azure-powershell · GitHub. It uses the tdnf command to install tar so we can do the same.

tdnf install tar
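To avoid installing it by hand every time, the install can be baked into the image instead (a sketch; the -y flag is assumed here to accept the install prompt non-interactively):

```dockerfile
FROM mcr.microsoft.com/cbl-mariner/base/core:2.0
# Preinstall tar so archives copied into the container can be extracted
RUN tdnf install -y tar
```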

Windows Observations

Other than the machine name, WSL’s Ubuntu 22.04.2 LTS has the same uname -a output as the docker container from the test image created above (on my x64 Windows 11 machine): Linux 9a13d5e98075 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux.


Monitoring Context Switching in a Linux Process

As I learn about systems performance, one question that often arises is who is responsible for the context switching on a system. In Linux, the number of context switches per second is displayed by vmstat. To see this information every second, for example, run vmstat 1. Here is sample output from my Ubuntu 22.04 VM showing about 50000 context switches per sec. I used the -SM option to display memory info in Megabytes (which reduces the amount of output per line). The last (optional) argument is the number of updates (lines) to be displayed.

saint@ubuntuvm:~$ vmstat -SM 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0   4603    386   5176    0    0     1     5   16    7 35  6 59  0  0
 4  0      0   4603    386   5176    0    0     0    88 20031 45649 51  8 41  0  0
 4  0      0   4603    386   5176    0    0     0     0 23768 52532 49  8 43  0  0
 4  0      0   4603    386   5176    0    0     0     0 23826 52931 49  7 43  0  0
 5  0      0   4603    386   5176    0    0     0     0 23328 51731 49  9 43  0  0
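The cs column comes from the kernel's system-wide context-switch counter, which vmstat samples each interval and reports as a per-second delta. On Linux you can read the raw cumulative counter directly (a quick sketch):

```shell
# Cumulative context switches since boot; vmstat's cs column is the
# per-second difference of this counter between samples
grep '^ctxt' /proc/stat
```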

Use pidstat to get a per-process breakdown of context switches per second. Like vmstat, a report delay and a report count can be specified. Without any flags, the default output is a breakdown of CPU usage.

saint@ubuntuvm:~$ pidstat 1 5
...
Average:      UID       PID    %usr %system  %guest   %wait    %CPU   CPU  Command
Average:      129       691    0.40    0.00    0.00    0.00    0.40     -  mysqld
Average:     1000       896    0.60    0.40    0.00    0.00    1.00     -  Xorg
Average:     1000      1161    2.20    0.40    0.00    0.20    2.59     -  gnome-shell
Average:     1000     23655    0.40    0.00    0.00    0.20    0.40     -  gnome-terminal-
Average:     1000     23822    0.60    0.20    0.00    0.00    0.80     -  code
Average:     1000     23880    0.20    0.00    0.00    0.00    0.20     -  code
Average:     1000     23907    0.20    0.00    0.00    0.00    0.20     -  code
Average:     1000     33819    0.80    0.20    0.00    0.00    1.00     -  firefox
Average:     1000   1149158    1.40    0.20    0.00    0.00    1.60     -  Isolated Web Co
Average:     1000   1218546  294.81   37.52    0.00    0.00  332.34     -  java
Average:     1000   1227240    0.20    0.80    0.00    0.00    1.00     -  pidstat

For our purposes, we need to specify the -w flag to report task switching activity. Once the specified number of reports has been displayed, the averages of the voluntary and involuntary context switch rates follow.

saint@ubuntuvm:~$ pidstat -w 1 5
Linux 5.19.0-45-generic (ubuntuvm) 	07/03/2023 	_x86_64_	(6 CPU)

<5 reports truncated>

Average:      UID       PID   cswch/s nvcswch/s  Command
Average:        0         1      0.20      0.00  systemd
Average:        0        14      0.60      0.00  ksoftirqd/0
Average:        0        15     28.94      0.00  rcu_preempt
Average:        0        16      0.20      0.00  migration/0
Average:        0        22      0.20      0.00  migration/1
Average:        0        23      0.20      0.00  ksoftirqd/1
Average:        0        28      0.20      0.00  migration/2
Average:        0        29      0.60      0.00  ksoftirqd/2
Average:        0        34      0.20      0.00  migration/3
Average:        0        35      1.80      0.00  ksoftirqd/3
Average:        0        40      0.60      0.00  migration/4
Average:        0        41      1.00      0.00  ksoftirqd/4
Average:        0        46      0.20      0.00  migration/5
Average:        0        47      0.60      0.00  ksoftirqd/5
Average:        0        58      1.80      0.00  kcompactd0
Average:        0       195      0.40      0.00  kworker/1:1H-kblockd
Average:        0       210      0.60      0.00  kworker/4:1H-kblockd
Average:        0       238      0.60      0.20  jbd2/sda2-8
Average:        0       369      1.00      0.00  hv_balloon
Average:      108       484      3.99      0.00  systemd-oomd
Average:      101       485      0.60      0.00  systemd-resolve
Average:        0       523      0.20      0.00  acpid
Average:        0       576      0.20      0.00  wpa_supplicant
Average:     1000       896     81.44      0.00  Xorg
Average:     1000      1161     63.67     15.17  gnome-shell
Average:     1000     23655     97.60      0.80  gnome-terminal-
Average:     1000     23719      0.40      0.00  code
Average:     1000     23782      0.20      0.00  code
Average:     1000     23822      9.78      0.60  code
Average:     1000     23880      0.40      0.00  code
Average:     1000     23906      1.40      0.00  code
Average:     1000     23907      3.39      0.40  code
Average:     1000     23936      1.60      0.00  code
Average:     1000     23954      0.20      0.00  code
Average:     1000     24009      0.20      0.00  code
Average:     1000     24019      0.20      0.00  code
Average:     1000     33819      6.99      0.00  firefox
Average:     1000     34122      0.20      0.00  Privileged Cont
Average:        0    988101      4.99      0.00  kworker/0:1-mm_percpu_wq
Average:        0    993536      6.19      0.00  kworker/5:1-mm_percpu_wq
Average:        0   1124417      4.39      0.00  kworker/4:1-events
Average:        0   1148363      4.59      0.00  kworker/2:0-mm_percpu_wq
Average:        0   1148688      3.79      0.00  kworker/1:2-mm_percpu_wq
Average:        0   1149149      2.79      0.00  kworker/3:2-events
Average:     1000   1149158     61.88      2.00  Isolated Web Co
Average:     1000   1149675     10.38      6.19  Isolated Web Co
Average:     1000   1152215      0.40      0.00  code
Average:        0   1223207     71.26      0.00  kworker/u12:1-events_unbound
Average:        0   1226506     37.13      0.00  kworker/u12:3-writeback
Average:        0   1227384     27.94      0.00  kworker/u12:4-events_unbound
Average:     1000   1228275      1.00     17.96  pidstat

I happened to have my factorization Java application running on this VM. Interestingly, it does not appear in this output even though the time command reports a large number of context switches for it. To see this, set up the application as described in the next section.
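Independent of pidstat, the kernel also exposes cumulative per-process counters in /proc/<pid>/status, which is another way to check a specific process's switches (shown here for the reading process's own PID via /proc/self; substitute the Java PID to inspect the factorization app):

```shell
# Cumulative voluntary/nonvoluntary context switches for a process;
# these are totals since process start, not per-second rates
grep ctxt /proc/self/status
```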

Running a Sample Multithreaded Application

Clone the scratchpad repo then compile and launch the factorization application using these instructions (with any necessary changes to the JAVA_HOME path):

# Download a Java build if necessary
mkdir -p ~/java/binaries/jdk/x64
cd ~/java/binaries/jdk/x64
wget https://aka.ms/download-jdk/microsoft-jdk-17.0.7-linux-x64.tar.gz
tar xzf microsoft-jdk-17.0.7-linux-x64.tar.gz

# Set the JAVA_HOME environment variable
export JAVA_HOME=~/java/binaries/jdk/x64/jdk-17.0.7+7

# Get the Factorization source code
mkdir ~/repos
cd ~/repos
git clone https://github.com/swesonga/scratchpad
cd scratchpad/demos/java/FindPrimes

# Compile the factorization source code
$JAVA_HOME/bin/javac Factorize.java

# Factorize a number and display task statistics
/usr/bin/time -v $JAVA_HOME/bin/java Factorize 897151542039582592342572091 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 6

Notice the context switching statistics when the command completes:

	Command being timed: "/home/saint/java/binaries/jdk/x64/jdk-20+36/bin/java Factorize 897151542039582592342572091 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 6"
	User time (seconds): 37.59
	System time (seconds): 6.47
	Percent of CPU this job got: 363%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:12.12
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 298576
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 8
	Minor (reclaiming a frame) page faults: 70247
	Voluntary context switches: 367393
	Involuntary context switches: 1337
	Swaps: 0
	File system inputs: 0
	File system outputs: 64
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

Per-Thread Task Switching Information

How can we get some insight into the context switching in the Java process? pidstat can also display statistics for the threads associated with selected tasks using the -t option. This gives us visibility into Java’s contribution to context switching:

saint@ubuntuvm:~$ pidstat -w -t 1 5
Average:      UID      TGID       TID   cswch/s nvcswch/s  Command
Average:        0        14         -      1.58      0.00  ksoftirqd/0
Average:        0         -        14      1.58      0.00  |__ksoftirqd/0
Average:        0        15         -     32.94      0.00  rcu_preempt
Average:        0         -        15     32.94      0.00  |__rcu_preempt
Average:        0        16         -      0.39      0.00  migration/0
Average:        0         -        16      0.39      0.00  |__migration/0
Average:        0        22         -      0.20      0.00  migration/1
Average:        0         -        22      0.20      0.00  |__migration/1
...
Average:     1000     24546     24549      0.99      0.00  (java)__VM Thread
Average:     1000         -     24554      0.99      0.00  |__Monitor Deflati
Average:     1000         -     24555      0.20      0.00  |__C2 CompilerThre
Average:     1000         -     24556      0.20      0.00  |__C1 CompilerThre
Average:     1000         -     24562     20.12      0.00  |__VM Periodic Tas
Average:     1000         -     24610      0.20      0.00  |__pool-2-thread-1
Average:     1000         -     24704      2.17      0.00  |__Attach Listener
...
Average:        0   1232414         -    641.22      0.00  kworker/u12:0-events_unbound
Average:        0         -   1232414    641.22      0.00  |__kworker/u12:0-events_unbound
Average:        0   1234513         -    174.16      0.00  kworker/u12:2-events_unbound
Average:        0         -   1234513    174.16      0.00  |__kworker/u12:2-events_unbound
Average:        0   1236229         -    703.94      0.00  kworker/u12:3-ext4-rsv-conversion
Average:        0         -   1236229    703.94      0.00  |__kworker/u12:3-ext4-rsv-conversion
Average:     1000   1236678   1236680     66.27      0.20  (java)__GC Thread#0
Average:     1000         -   1236684     16.96      0.00  |__G1 Service
Average:     1000         -   1236685     56.21      1.58  |__VM Thread
Average:     1000         -   1236690      0.99      0.00  |__Monitor Deflati
Average:     1000         -   1236691      0.20      0.00  |__C2 CompilerThre
Average:     1000         -   1236692      0.20      0.00  |__C1 CompilerThre
Average:     1000         -   1236694     20.12      0.00  |__VM Periodic Tas
Average:     1000         -   1236697     22.68     20.71  |__Thread-0
Average:     1000         -   1236698     24.26    172.98  |__Thread-1
Average:     1000         -   1236699     24.65    170.22  |__Thread-2
Average:     1000         -   1236700     22.09    219.92  |__Thread-3
Average:     1000         -   1236701     65.48      1.18  |__GC Thread#1
Average:     1000         -   1236702     64.69      0.59  |__GC Thread#2
Average:     1000         -   1236703     62.13      0.39  |__GC Thread#3
Average:     1000         -   1236704     64.50      0.20  |__GC Thread#4
Average:     1000         -   1236705     64.50      0.20  |__GC Thread#5
Average:     1000   1236750         -      0.99   1119.72  pidstat
Average:     1000         -   1236750      0.99   1119.72  |__pidstat

Per-Process Task Switching Information

Given the large number of tasks on the system, it may be helpful to focus on Java alone. Running jps shows the PIDs of the running Java processes. We can then pass the PID of interest to pidstat so that only context switches for that process are displayed, e.g.

saint@ubuntuvm:~$ pidstat -w -t -p 1236678 1 5
...
Average:     1000   1236678         -      0.00      0.00  java
Average:     1000         -   1236678      0.00      0.00  |__java
Average:     1000         -   1236679      0.00      0.00  |__java
Average:     1000         -   1236680     64.00      0.00  |__GC Thread#0
Average:     1000         -   1236681      0.00      0.00  |__G1 Main Marker
Average:     1000         -   1236682      0.00      0.00  |__G1 Conc#0
Average:     1000         -   1236683      0.00      0.00  |__G1 Refine#0
Average:     1000         -   1236684     18.00      0.00  |__G1 Service
Average:     1000         -   1236685     56.00      5.40  |__VM Thread
Average:     1000         -   1236686      0.00      0.00  |__Reference Handl
Average:     1000         -   1236687      0.00      0.00  |__Finalizer
Average:     1000         -   1236688      0.00      0.00  |__Signal Dispatch
Average:     1000         -   1236689      0.00      0.00  |__Service Thread
Average:     1000         -   1236690      1.00      0.00  |__Monitor Deflati
Average:     1000         -   1236691      0.20      0.00  |__C2 CompilerThre
Average:     1000         -   1236692      0.20      0.00  |__C1 CompilerThre
Average:     1000         -   1236693      0.00      0.00  |__Notification Th
Average:     1000         -   1236694     20.20      0.00  |__VM Periodic Tas
Average:     1000         -   1236695      0.00      0.00  |__Common-Cleaner
Average:     1000         -   1236697     17.80      7.60  |__Thread-0
Average:     1000         -   1236698     20.00      8.60  |__Thread-1
Average:     1000         -   1236699     21.40     12.00  |__Thread-2
Average:     1000         -   1236700     17.80      9.40  |__Thread-3
Average:     1000         -   1236701     64.40      0.20  |__GC Thread#1
Average:     1000         -   1236702     63.20      0.20  |__GC Thread#2
Average:     1000         -   1236703     66.20      0.00  |__GC Thread#3
Average:     1000         -   1236704     63.80      0.20  |__GC Thread#4
Average:     1000         -   1236705     63.60      0.20  |__GC Thread#5

The pidstat man page describes lots of other options that can be used to customize the output.
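For example (a sketch; exact option availability can vary with the installed sysstat version), the task switching report can be combined with CPU, memory, and disk statistics:

```shell
# -u: CPU usage, -r: page faults/memory, -d: disk I/O,
# -w: context switches; sample every second, 5 times,
# including per-thread rows (-t).
pidstat -u -r -d -w -t 1 5
```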


Categories: git, Security

Storing Git Credentials on Ubuntu

To store encrypted git credentials on disk in Ubuntu, install pass and the git-credential-manager. We will use gpg to generate a key that pass will use for secure storage and retrieval of credentials. Use these commands to get everything set up for git:

cd ~/Downloads
wget https://github.com/git-ecosystem/git-credential-manager/releases/download/v2.1.2/gcm-linux_amd64.2.1.2.deb

sudo dpkg -i gcm-linux_amd64.2.1.2.deb
git-credential-manager configure
git config --global credential.credentialStore gpg

gpg --gen-key

sudo apt install pass
pass init <generated-key>
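Before pushing, the configuration can be sanity-checked (this sketch assumes the gpg credential store was selected):

```shell
# Print the credential store GCM is configured to use; expect "gpg"
# when the pass-backed store is selected.
git config --global credential.credentialStore

# List the password store; git/ entries appear after the first
# successful authentication.
pass ls
```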

Background

The GitHub PAT I have been using on my Ubuntu VM recently expired. Authentication failed when I tried to push to my repo. I generated a new PAT as outlined at https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token and entered it on the command line when running git push. Of course, I got something wrong entering the PAT manually (assuming it would get saved).

saint@ubuntuvm:~/repos/scratchpad$ git push
Username for 'https://github.com': swesonga
Password for 'https://swesonga@github.com': 
remote: Permission to swesonga/scratchpad.git denied to swesonga.
fatal: unable to access 'https://github.com/swesonga/scratchpad/': The requested URL returned error: 403

Instead of fighting with this command line, I decided to educate myself on the proper way to do this. ubuntu git keychain – Search (bing.com) led me to this post: git – How to store your github https password on Linux in a terminal keychain? – Stack Overflow, which states that the 2022 answer would be to use the Microsoft cross-platform GCM (Git Credential Manager). The git-credential-manager/docs/install.md page links to the instructions at git-credential-manager/docs/credstores.md. I downloaded the .deb file from Release GCM 2.1.2 · git-ecosystem/git-credential-manager (github.com).

saint@ubuntuvm:~/repos/scratchpad$ sudo dpkg -i ~/Downloads/gcm-linux_amd64.2.1.2.deb 
[sudo] password for saint: 
Selecting previously unselected package gcm.
(Reading database ... 272980 files and directories currently installed.)
Preparing to unpack .../gcm-linux_amd64.2.1.2.deb ...
Unpacking gcm (2.1.2) ...
Setting up gcm (2.1.2) ...
saint@ubuntuvm:~/repos/scratchpad$ which git-credential-manager
/usr/local/bin/git-credential-manager
saint@ubuntuvm:~/repos/scratchpad$ git-credential-manager configure
Configuring component 'Git Credential Manager'...
Configuring component 'Azure Repos provider'...

The git push experience is now different:

saint@ubuntuvm:~/repos/scratchpad$ git push
fatal: No credential store has been selected.

Set the GCM_CREDENTIAL_STORE environment variable or the credential.credentialStore Git configuration setting to one of the following options:

  secretservice : freedesktop.org Secret Service (requires graphical interface)
  gpg           : GNU `pass` compatible credential storage (requires GPG and `pass`)
  cache         : Git's in-memory credential cache
  plaintext     : store credentials in plain-text files (UNSECURE)

See https://aka.ms/gcm/credstores for more information.

Username for 'https://github.com':
saint@ubuntuvm:~/repos/scratchpad$ git config --global credential.credentialStore
saint@ubuntuvm:~/repos/scratchpad$ git push
fatal: Password store has not been initialized at '/home/saint/.password-store'; run `pass init <gpg-id>` to initialize the store.
See https://aka.ms/gcm/credstores for more information.
Username for 'https://github.com': 

Since I own the VM, I don’t mind credentials being stored on disk (but not in plain text), so I set up gpg and pass as instructed.

saint@ubuntuvm:~$ gpg --gen-key
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Saint Wesonga
Email address: saint@swesonga.org
You selected this USER-ID:
    "Saint Wesonga <saint@swesonga.org>"
...

saint@ubuntuvm:~$ sudo apt install pass
[sudo] password for saint: 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libqrencode4 qrencode tree xclip
Suggested packages:
  libxml-simple-perl python ruby
The following NEW packages will be installed:
  libqrencode4 pass qrencode tree xclip
0 upgraded, 5 newly installed, 0 to remove and 92 not upgraded.
Need to get 151 kB of archives.
After this operation, 442 kB of additional disk space will be used.
Do you want to continue? [Y/n]
...

saint@ubuntuvm:~$ pass init ABCDEF0123456789
mkdir: created directory '/home/saint/.password-store/'
Password store initialized for ABCDEF0123456789

Apparently I used the wrong value for the key but git push is unfazed – it pushes successfully after the browser authentication completes. I’m not sure what is happening now since browser authentication is in use but as long as I can push, I can forge ahead with other tasks.

saint@ubuntuvm:~/repos/scratchpad$ git push
info: please complete authentication in your browser...
fatal: Failed to encrypt file '/home/saint/.password-store/git/https/github.com/swesonga.gpg' with gpg. exit=2, out=, err=gpg: <WRONG HEX VALUE>: skipped: No public key
gpg: [stdin]: encryption failed: No public key

Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 6 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 745 bytes | 745.00 KiB/s, done.
Total 6 (delta 3), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (3/3), completed with 3 local objects

Update: 2023-09-20. Use pass rm -r git to authenticate in the browser the next time git push is executed (e.g. if the password store secret is lost).
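As a sketch of that recovery flow:

```shell
# Remove the stored git credentials from the password store.
pass rm -r git

# The next push falls back to browser authentication.
git push
```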


Categories: Profiling

Using perf in WSL Ubuntu Terminal

When Experimenting with perf on Linux, I used an Ubuntu VM. This can be a bit more cumbersome when simply trying to understand what various Linux commands can do. I decided to try using WSL to experiment with perf. Running wsl from the command line was sufficient to determine how to install the Ubuntu distribution.

C:\dev> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore

C:\dev> wsl --install
Windows Subsystem for Linux is already installed.
The following is a list of valid distributions that can be installed.
Install using 'wsl --install -d <Distro>'.

NAME                                   FRIENDLY NAME
Ubuntu                                 Ubuntu
Debian                                 Debian GNU/Linux
kali-linux                             Kali Linux Rolling
Ubuntu-18.04                           Ubuntu 18.04 LTS
Ubuntu-20.04                           Ubuntu 20.04 LTS
Ubuntu-22.04                           Ubuntu 22.04 LTS
OracleLinux_7_9                        Oracle Linux 7.9
OracleLinux_8_7                        Oracle Linux 8.7
OracleLinux_9_1                        Oracle Linux 9.1
SUSE-Linux-Enterprise-Server-15-SP4    SUSE Linux Enterprise Server 15 SP4
openSUSE-Leap-15.4                     openSUSE Leap 15.4
openSUSE-Tumbleweed                    openSUSE Tumbleweed

C:\dev> wsl --install -d Ubuntu-22.04
Installing: Ubuntu 22.04 LTS
Ubuntu 22.04 LTS has been installed.
Launching Ubuntu 22.04 LTS...
Creating a UNIX user account

Installing perf

Install the linux-tools-generic package then check the perf version as follows:

sudo apt install linux-tools-generic
/usr/lib/linux-tools/5.15.0-73-generic/perf --version

Background Investigation

Once the WSL Ubuntu distro installation completed and I had created a user account, I started by checking the perf version. Running perf --version lets you know how perf can be installed:

saint@machine:~$ perf --version
Command 'perf' not found, but can be installed with:
sudo apt install linux-intel-iotg-tools-common    # version 5.15.0-1027.32, or
sudo apt install linux-nvidia-tools-common        # version 5.15.0-1023.23
sudo apt install linux-tools-common               # version 5.15.0-71.78
sudo apt install linux-nvidia-5.19-tools-common   # version 5.19.0-1009.9
sudo apt install linux-nvidia-tegra-tools-common  # version 5.15.0-1012.12

Since I’m not looking for anything vendor specific, I try to install the linux-tools-common package.

saint@machine:~$ sudo apt install linux-tools-common
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  linux-tools-common
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 290 kB of archives.
After this operation, 823 kB of additional disk space will be used.
Ign:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-common all 5.15.0-71.78
Err:1 http://security.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-common all 5.15.0-71.78
  404  Not Found [IP: ... 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-tools-common_5.15.0-71.78_all.deb  404  Not Found [IP: ... 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

Now try the command suggested in the last error:

saint@machine:~$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:2 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [455 kB]
Hit:3 http://archive.ubuntu.com/ubuntu jammy InRelease
Get:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:5 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [122 kB]
Get:6 http://security.ubuntu.com/ubuntu jammy-security/main amd64 c-n-f Metadata [10.1 kB]
Get:7 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [349 kB]
Get:8 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [52.6 kB]
Get:9 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]
...
Get:39 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 c-n-f Metadata [548 B]
Get:40 http://archive.ubuntu.com/ubuntu jammy-backports/multiverse amd64 c-n-f Metadata [116 B]
Fetched 25.1 MB in 5s (4725 kB/s)
Reading package lists... Done

That seems to do the trick:

saint@machine:~$ sudo apt install linux-tools-common
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  linux-tools-common
0 upgraded, 1 newly installed, 0 to remove and 41 not upgraded.
Need to get 277 kB of archives.
After this operation, 833 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-common all 5.15.0-73.80 [277 kB]
Fetched 277 kB in 0s (793 kB/s)
Selecting previously unselected package linux-tools-common.
(Reading database ... 24137 files and directories currently installed.)
Preparing to unpack .../linux-tools-common_5.15.0-73.80_all.deb ...
Unpacking linux-tools-common (5.15.0-73.80) ...
Setting up linux-tools-common (5.15.0-73.80) ...
Processing triggers for man-db (2.10.2-1) ...

Can we run a perf command now? No, perf is not found for my kernel.

saint@machine:~$ perf --version
WARNING: perf not found for kernel 5.10.102.1-microsoft

  You may need to install the following packages for this specific kernel:
    linux-tools-5.10.102.1-microsoft-standard-WSL2
    linux-cloud-tools-5.10.102.1-microsoft-standard-WSL2

  You may also want to install one of the following packages to keep up to date:
    linux-tools-standard-WSL2
    linux-cloud-tools-standard-WSL2

Is that really my kernel version? Yes it is.

saint@mymachine:~$ uname -a
Linux mymachine 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Unfortunately, the suggested packages cannot be found:

saint@machine:~$ sudo apt install linux-tools-standard-WSL2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package linux-tools-standard-WSL2
saint@machine:~$ sudo apt install linux-tools-5.10.102.1-microsoft-standard-WSL2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package linux-tools-5.10.102.1-microsoft-standard-WSL2
E: Couldn't find any package by glob 'linux-tools-5.10.102.1-microsoft-standard-WSL2'
saint@machine:~$ sudo apt-get install linux-tools-5.10.102.1-microsoft-standard-WSL2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package linux-tools-5.10.102.1-microsoft-standard-WSL2
E: Couldn't find any package by glob 'linux-tools-5.10.102.1-microsoft-standard-WSL2'
E: Couldn't find any package by regex 'linux-tools-5.10.102.1-microsoft-standard-WSL2'

Searching for the error message Unable to locate package linux-tools-5.10.102.1-microsoft-standard-WSL2 – Search (bing.com) reveals that this is a fairly common issue.

The interesting thing about this is that the version numbers shown in the list of packages to be installed do not match my kernel version. However, the installation succeeds.

saint@machine:~$ sudo apt install linux-tools-generic
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  linux-tools-5.15.0-73 linux-tools-5.15.0-73-generic
The following NEW packages will be installed:
  linux-tools-5.15.0-73 linux-tools-5.15.0-73-generic linux-tools-generic
0 upgraded, 3 newly installed, 0 to remove and 41 not upgraded.
Need to get 7931 kB of archives.
After this operation, 27.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-5.15.0-73 amd64 5.15.0-73.80 [7926 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-5.15.0-73-generic amd64 5.15.0-73.80 [1786 B]
Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 linux-tools-generic amd64 5.15.0.73.71 [2308 B]
Fetched 7931 kB in 2s (5163 kB/s)
Selecting previously unselected package linux-tools-5.15.0-73.
(Reading database ... 24210 files and directories currently installed.)
Preparing to unpack .../linux-tools-5.15.0-73_5.15.0-73.80_amd64.deb ...
Unpacking linux-tools-5.15.0-73 (5.15.0-73.80) ...
Selecting previously unselected package linux-tools-5.15.0-73-generic.
Preparing to unpack .../linux-tools-5.15.0-73-generic_5.15.0-73.80_amd64.deb ...
Unpacking linux-tools-5.15.0-73-generic (5.15.0-73.80) ...
Selecting previously unselected package linux-tools-generic.
Preparing to unpack .../linux-tools-generic_5.15.0.73.71_amd64.deb ...
Unpacking linux-tools-generic (5.15.0.73.71) ...
Setting up linux-tools-5.15.0-73 (5.15.0-73.80) ...
Setting up linux-tools-5.15.0-73-generic (5.15.0-73.80) ...
Setting up linux-tools-generic (5.15.0.73.71) ...

perf --version still fails though. It’s not a symlink to anything else.

saint@machine:~$ ls -l `which perf`
-rwxr-xr-x 1 root root 1622 May 15 07:10 /usr/bin/perf

However, there is a user who was able to use perf by running the tool in the /usr/lib/linux-tools/… directory. Sure enough, this does the trick!

saint@machine:~$ /usr/lib/linux-tools/5.15.0-73-generic/perf --version
perf version 5.15.98

Sharing Files Between Windows and WSL Ubuntu

I was curious about whether I could generate a report from a perf.data file generated on another machine. The docs on Working across file systems show how easy it is to use a file on the Windows file system:

cd /mnt/c/dev/reports
/usr/lib/linux-tools/5.15.0-73-generic/perf report -n --stdio > report.txt

This doesn’t work though. The command fails after about 40 seconds with the error No kallsyms or vmlinux with build-id 5c3d8... was found.
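Profiling locally in WSL does work, though. A minimal sketch (assuming the linux-tools path above and a throwaway CPU-bound workload):

```shell
PERF=/usr/lib/linux-tools/5.15.0-73-generic/perf

# Record samples (with call graphs) for a short CPU-bound command.
$PERF record -g -o perf.data -- dd if=/dev/zero of=/dev/null bs=1M count=4096

# Summarize the locally generated perf.data.
$PERF report -n --stdio -i perf.data > report.txt
```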


Categories: Assembly

Trial Division Factorization Disassembly

When Experimenting with Async Profiler, I created a basic trial division factorization Java application. To run it, download the OpenJDK build if it isn’t already installed:

mkdir -p ~/java/binaries/jdk/x64
cd ~/java/binaries/jdk/x64
wget https://aka.ms/download-jdk/microsoft-jdk-17.0.7-linux-x64.tar.gz
tar xzf microsoft-jdk-17.0.7-linux-x64.tar.gz

Test the factorization application to verify that the Java build works.

export JAVA_HOME=~/java/binaries/jdk/x64/jdk-17.0.7+7

cd ~/repos/scratchpad/demos/java/FindPrimes
$JAVA_HOME/bin/javac Factorize.java
$JAVA_HOME/bin/java Factorize 123890571352112309857

# Use 4 threads to speed things up
$JAVA_HOME/bin/java Factorize 123890571352112309857 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 4

Using hsdis

hsdis is a HotSpot plugin for disassembling dynamically generated code. Chriswhocodes was kind enough to build hsdis for various platforms and share the binaries on his website – hsdis HotSpot Disassembly Plugin Downloads (chriswhocodes.com). Download the appropriate hsdis binary and move it to the OpenJDK build’s lib directory, e.g.

wget https://chriswhocodes.com/hsdis/hsdis-amd64.so
export JAVA_HOME=~/java/binaries/jdk/x64/jdk-17.0.7+7
mv hsdis-amd64.so $JAVA_HOME/lib/

ls -l $JAVA_HOME/lib/hsdis*

We will need the PrintAssembly option to disassemble the code generated by the compiler when running a Java program. This option requires diagnostic VM options to be unlocked. This is the full command line for generating the disassembly from the application’s execution. The output is redirected to a code.asm file since it can be voluminous.

$JAVA_HOME/bin/java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Factorize 123890571352112309857 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 4 > code.asm

Here is a snippet of the disassembly in code.asm:

============================= C1-compiled nmethod ==============================
----------------------------------- Assembly -----------------------------------

Compiled method (c1)    2052  266       2       java.math.BigInteger::implMulAdd (81 bytes)
 total in heap  [0x00007f2e5943ca90,0x00007f2e5943d038] = 1448
 relocation     [0x00007f2e5943cbf0,0x00007f2e5943cc28] = 56
 main code      [0x00007f2e5943cc40,0x00007f2e5943ce00] = 448
 stub code      [0x00007f2e5943ce00,0x00007f2e5943ce30] = 48
 metadata       [0x00007f2e5943ce30,0x00007f2e5943ce38] = 8
 scopes data    [0x00007f2e5943ce38,0x00007f2e5943cee0] = 168
 scopes pcs     [0x00007f2e5943cee0,0x00007f2e5943d010] = 304
 dependencies   [0x00007f2e5943d010,0x00007f2e5943d018] = 8
 nul chk table  [0x00007f2e5943d018,0x00007f2e5943d038] = 32

--------------------------------------------------------------------------------
[Constant Pool (empty)]

--------------------------------------------------------------------------------

[Verified Entry Point]
  # {method} {0x00000008000a47c0} 'implMulAdd' '([I[IIII)I' in 'java/math/BigInteger'
  # parm0:    rsi:rsi   = '[I'
  # parm1:    rdx:rdx   = '[I'
  # parm2:    rcx       = int
  # parm3:    r8        = int
  # parm4:    r9        = int
  #           [sp+0x50]  (sp of caller)
  0x00007f2e5943cc40:   mov    %eax,-0x14000(%rsp)
  0x00007f2e5943cc47:   push   %rbp
  0x00007f2e5943cc48:   sub    $0x40,%rsp
  0x00007f2e5943cc4c:   movabs $0x7f2e38075370,%rax
  0x00007f2e5943cc56:   mov    0x8(%rax),%edi
  0x00007f2e5943cc59:   add    $0x2,%edi
  0x00007f2e5943cc5c:   mov    %edi,0x8(%rax)
  0x00007f2e5943cc5f:   and    $0xffe,%edi
  0x00007f2e5943cc65:   cmp    $0x0,%edi
  0x00007f2e5943cc68:   je     0x00007f2e5943cd52           ;*iload {reexecute=0 rethrow=0 return_oop=0}
                                                            ; - java.math.BigInteger::implMulAdd@0 (line 3197)
  0x00007f2e5943cc6e:   movslq %r9d,%r9
  0x00007f2e5943cc71:   movabs $0xffffffff,%rax
  0x00007f2e5943cc7b:   and    %rax,%r9
...
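The PrintAssembly output covers every compiled method, so it can be voluminous. A more targeted sketch (the method pattern here is just an example) uses the CompileCommand option to print only the method of interest:

```shell
# Disassemble only BigInteger.implMulAdd instead of every compiled
# method; hsdis must still be in the JDK's lib directory.
$JAVA_HOME/bin/java -XX:+UnlockDiagnosticVMOptions \
  "-XX:CompileCommand=print,java/math/BigInteger.implMulAdd" \
  Factorize 123890571352112309857 CUSTOM_THREAD_COUNT_VIA_THREAD_CLASS 4 > implMulAdd.asm
```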

Finding the Java Installation Path

In the above example, I used a Java build in a custom path. If you are using a Java build that is already installed, then a few extra steps might be needed to determine the JAVA_HOME path, e.g.

saint@ubuntuvm:~$ which java
/usr/bin/java
saint@ubuntuvm:~$ ls -l `which java`
saint@ubuntuvm:~$ ls -l /etc/alternatives/java
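The symlink chain can be resolved in one step with readlink -f, from which JAVA_HOME follows (a sketch assuming the binary sits in a bin directory):

```shell
# Resolve /usr/bin/java -> /etc/alternatives/java -> real binary.
JAVA_BIN=$(readlink -f "$(which java)")

# JAVA_HOME is two directory levels above the java binary.
export JAVA_HOME=$(dirname "$(dirname "$JAVA_BIN")")
echo "$JAVA_HOME"
```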

Categories: Big Data

Hadoop Native Libraries for Apache Spark

The post on Diagnosing Hadoop Native Library Load Failures was focused on Hadoop being run as a standalone application. However, it can also be one component among many in an application with broader scope, such as Apache Spark. Having not used Spark before, I found the Quick Start – Spark 3.4.0 Documentation (apache.org) informative. It suggested downloading the packaged release of Spark from the Spark website, but I went with the CDN at https://dlcdn.apache.org/spark/spark-3.4.0/ since it was the same one I had downloaded my Hadoop build from.

Setting Up and Launching Spark

Download and extract Spark using these commands:

cd ~/java/binaries
mkdir spark
cd spark
curl -Lo spark-3.4.0-bin-hadoop3.tgz https://dlcdn.apache.org/spark/spark-3.4.0/spark-3.4.0-bin-hadoop3.tgz

tar xzf spark-3.4.0-bin-hadoop3.tgz
cd spark-3.4.0-bin-hadoop3

Spark needs JAVA_HOME to be set (otherwise the first message displayed will be ERROR: JAVA_HOME is not set and could not be found).

export JAVA_HOME=~/java/binaries/jdk/x64/jdk-11.0.19+7

Next, I started the Spark shell by running this command as per the Quick Start docs:

./bin/spark-shell

Notice that the same Hadoop warning from Diagnosing Hadoop Native Library Load Failures showed up again! However, we have already seen that the Hadoop logging level can be customized. The key question now is how to enable DEBUG logging in Spark.

saint@ubuntuvm:~/java/binaries/spark/spark-3.4.0-bin-hadoop3$ ./bin/spark-shell
23/06/01 10:31:38 WARN Utils: Your hostname, ubuntuvm resolves to a loopback address: 127.0.1.1; using 172.18.28.45 instead (on interface eth0)
23/06/01 10:31:38 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/06/01 10:31:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://ubuntuvm.mshome.net:4040
Spark context available as 'sc' (master = local[*], app id = local-1685637105440).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.4.0
      /_/
         
Using Scala version 2.12.17 (OpenJDK 64-Bit Server VM, Java 17.0.6)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

Command Line Customization of Spark Logging Level

The Overview – Spark 3.4.0 Documentation (apache.org) page states that …

Users can also download a “Hadoop free” binary and run Spark with any Hadoop version by augmenting Spark’s classpath.

Spark 3.4.0 Documentation (apache.org)

The classpath augmentation doc (Using Spark’s “Hadoop Free” Build) is what informs me that the way Spark uses Hadoop can be customized by entries in conf/spark-env.sh. Unfortunately, there are no log level settings in the spark-env.sh.template file in that directory. After a bit of a winding journey, I discover that the way to customize the logging level is to first create a conf/log4j2.properties file by running:

cp conf/log4j2.properties.template conf/log4j2.properties

Next, change the logging level (e.g. from warn to debug) on this line:

logger.repl.level = warn

Launching the Spark shell now displays a much more informative error message. It is now evident that the paths being searched for native libraries do not include the path we need.

23/06/01 11:16:31 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
23/06/01 11:16:31 DEBUG NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]
23/06/01 11:16:31 DEBUG NativeCodeLoader: java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
23/06/01 11:16:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Fixing the Spark Hadoop Native Libraries

I searched for how to pass Spark extra Java options. The Tuning – Spark 3.4.0 Documentation mentioned the spark.executor.defaultJavaOptions and spark.executor.extraJavaOptions arguments, which I found documented at Configuration – Spark 3.4.0 Documentation. I tried passing these flags to the Spark shell to load the Hadoop native library, but without success.

The required flag turns out to be --driver-library-path. It sounds like the extraLibraryPath options didn't work because the JVM has already started by the time they are processed.

./bin/spark-shell --driver-library-path=/home/saint/java/binaries/hadoop/x64/hadoop-3.3.5/lib/native

The --driver-library-path flag allows Spark to successfully load the Hadoop native libraries. The logging messages confirm this:

...
23/06/01 11:57:06 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
23/06/01 11:57:06 DEBUG NativeCodeLoader: Loaded the native-hadoop library
...
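An alternative I did not test: since conf/spark-env.sh is sourced before the JVM launches, exporting LD_LIBRARY_PATH there should make the same native library path visible:

```shell
# In conf/spark-env.sh (copy from conf/spark-env.sh.template first):
export LD_LIBRARY_PATH=/home/saint/java/binaries/hadoop/x64/hadoop-3.3.5/lib/native:$LD_LIBRARY_PATH
```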

Appendix: Resources Reviewed for Spark Logging Level Changes

It was the SPARK-7261 pull request that led me to look for the log4j2.properties file. Changing rootLogger.level did not have any effect, but scrolling through the file revealed the key line setting logger.repl.level.