Categories: Containers, Java

Running my Factorization Java App in Docker

I want to evaluate the OpenJDK serial collector using a Java program I wrote to factorize natural numbers by trial division. This post is about how to set up the app to run in a Docker container on a Linux host. Since the host is a shared machine, I put all my work under ~/swesonga (my own custom home directory). The directory structure for the container will be under ~/swesonga/container/.
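For reference, the steps below end up creating roughly this layout under ~/swesonga/container/ (a sketch; only the items referenced in this post are shown):

Dockerfile                        created later in this post
factorize/                        clone of the factorization app repo
java/
    commons-cli-1.9.0/            Apache Commons CLI dependency
    binaries/jdk/x64/
        jdk-21.0.5+11/            extracted Microsoft Build of OpenJDK 21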

Set up the Factorization App

First, log into the Linux machine and download the Java binaries to test:

ssh user@IPaddress
mkdir -p ~/swesonga/container/java/binaries/jdk/x64/
cd ~/swesonga/container/java/binaries/jdk/x64/

curl -Lo microsoft-jdk-21.0.5-linux-x64.tar.gz https://aka.ms/download-jdk/microsoft-jdk-21.0.5-linux-x64.tar.gz

tar xzf microsoft-jdk-21.0.5-linux-x64.tar.gz

Clone the factorize repo and set up its dependencies:

cd ~/swesonga/container/
git clone https://github.com/swesonga/factorize

cd ~/swesonga/container/java
curl -Lo commons-cli-1.9.0-bin.tar.gz https://dlcdn.apache.org//commons/cli/binaries/commons-cli-1.9.0-bin.tar.gz
tar xzf commons-cli-1.9.0-bin.tar.gz

Compile the factorization app:

export CLASSPATH=~/swesonga/container/java/commons-cli-1.9.0/commons-cli-1.9.0.jar:.
export JAVA21_HOME=~/swesonga/container/java/binaries/jdk/x64/jdk-21.0.5+11

cd ~/swesonga/container/factorize/java/project/src/main/java/org/swesonga/math

$JAVA21_HOME/bin/javac -d . PrimalityTest.java FactorizationUtils.java Factorize.java ExecutionMode.java
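Before building a container, it is worth a quick sanity check that the compiled class runs on the host. This is a minimal sketch; the arguments mirror the ones used in the container ENTRYPOINT later in this post, and the app can be stopped with CTRL+C once output starts appearing:

$JAVA21_HOME/bin/java -cp $CLASSPATH org.swesonga.math.Factorize -threads 4 -number 7438880205542315091371423981777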

Set up the Docker Environment

Create the Dockerfile

Create a Dockerfile. See Dockerizing a Java Application | Baeldung and How To Dockerize Java Application (Step-by-Step Tutorial) for examples of how to do this. There are some OpenJDK images at Microsoft Artifact Registry (did I need to get my own JDK? Maybe not, but for now, I know where the JDK is and what is happening).

docker pull mcr.microsoft.com/openjdk/jdk:21-ubuntu

Create the Dockerfile below. Substitute the appropriate value for <user> wherever it appears in later commands.

FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu
COPY . /swesonga/
WORKDIR /swesonga/factorize/java/project/src/main/java/org/swesonga/math

ENTRYPOINT ["/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java", "-cp", "/swesonga/java/commons-cli-1.9.0/commons-cli-1.9.0.jar:.", "-Xint", "-XX:+UseSerialGC", "-XX:+UseCompressedOops", "-XX:HeapBaseMinAddress=0x120000000000", "-Xlog:gc*=debug,safepoint:file=serialgc-jdk21.log::filecount=0", "-Xlog:pagesize=trace:file=pagesize-jdk21.log::filecount=0", "-Xlog:os=trace:file=os-jdk21.log::filecount=0", "org.swesonga.math.Factorize", "-threads", "4", "-number", "7438880205542315091371423981777", "-systemGCFrequency", "1048576"]

If you create the Dockerfile on another machine, you can copy the Dockerfile to the Linux host using scp(1) – Linux manual page:

scp Dockerfile user@IPaddress:/home/<user>/swesonga/container/

Start Docker

Verify that docker is up by running docker version. I got this output:

user@machine:~/swesonga$ docker version
Client: Docker Engine - Community
 Version:           23.0.1
 API version:       1.42
 Go version:        go1.19.5
 Git commit:        a5ee5b1
 Built:             Thu Feb  9 19:46:56 2023
 OS/Arch:           linux/amd64
 Context:           default
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Run sudo systemctl start docker as described at Start the daemon | Docker Docs.

Build the Docker Image

See docker buildx build | Docker Docs for details on how to build. I only need the -t option to tag the image as swesonga-jdk21-testapp.

cd ~/swesonga/container/
docker build -t swesonga-jdk21-testapp .

Here is some sample output:

user@machine:~/swesonga/container$ docker build -t swesonga-jdk21-testapp .
[+] Building 7.9s (6/6) FINISHED
 => [internal] load build definition from Dockerfile                                                                            0.0s
 => => transferring dockerfile: 669B                                                                                            0.0s
 => [internal] load .dockerignore                                                                                               0.0s
 => => transferring context: 2B                                                                                                 0.0s
 => [internal] load metadata for mcr.microsoft.com/openjdk/jdk:21-ubuntu                                                        0.4s
 => [1/2] FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu@sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c  7.2s
 => => resolve mcr.microsoft.com/openjdk/jdk:21-ubuntu@sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c  0.0s
 => => sha256:6414378b647780fee8fd903ddb9541d134a1947ce092d08bdeb23a54cb3684ac 29.54MB / 29.54MB                                0.5s
 => => sha256:ac27f5b44782db802c5876054378d16318ba6ab095203e15acc7527778c85370 178.15MB / 178.15MB                              2.3s
 => => sha256:d42e3adbad90b3214756070b3e98acd724228f7e8d08344d7044c0788a185b66 1.38kB / 1.38kB                                  0.2s
 => => sha256:98b6af6a403a01d476ee579340d624dfaac70409f50080e36eb6d86603f0ed8c 683B / 683B                                      0.0s
 => => sha256:ab80e68248a29dec58a531a5ff5a5bb873bb96fe829ac6b17f46c6f2a05cef63 899B / 899B                                      0.0s
 => => sha256:6d6be45eade816f5be7fc8935372e429b035dfee5f6e386dd7f87e6430228554 3.90kB / 3.90kB                                  0.0s
 => => extracting sha256:6414378b647780fee8fd903ddb9541d134a1947ce092d08bdeb23a54cb3684ac                                       1.3s
 => => extracting sha256:ac27f5b44782db802c5876054378d16318ba6ab095203e15acc7527778c85370                                       4.6s
 => => extracting sha256:d42e3adbad90b3214756070b3e98acd724228f7e8d08344d7044c0788a185b66                                       0.0s
 => [2/2] WORKDIR /home/<user>/swesonga/factorize/java/project/src/main/java/org/swesonga/math                                     0.2s
 => exporting to image                                                                                                          0.0s
 => => exporting layers                                                                                                         0.0s
 => => writing image sha256:4c068055cad5759153f3d3677404b9729b8ddee1c7026aa30e88b8c58c228037                                    0.0s
 => => naming to docker.io/library/swesonga-jdk21-testapp

Start the Docker Container

Before starting the container, see docker ps | Docker Docs and docker run | Docker Docs.

docker ps
docker run -i -t swesonga-jdk21-testapp

Troubleshoot any Docker Errors

One error I ran into initially was that docker was unable to start the container process. I had missed the COPY command in the Dockerfile, so the java binary could not be found:

user@machine:~/swesonga$ docker run -i -t swesonga-jdk21-testapp
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/home/<user>/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java": stat /home/<user>/swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java: no such file or directory: unknown.
ERRO[0000] error waiting for container:

Fix the Dockerfile, then run docker build again. My ENTRYPOINT path still wasn't correct, so the same error appeared when re-running the application. I didn't realize this at the time, though, and thought that since I had changed the entrypoint, a cached build must still be in use. A delete cached docker build – Search pointed to the post on macos – Is there a way to clean docker build cache? – Stack Overflow. This is what I tried in my ignorance:

user@machine:~/swesonga$ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          2         2         1.22GB    443.1MB (36%)
Containers      3         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     9         0         776.9MB   776.9MB
user@machine:~/swesonga$ docker system df -v
Images space usage:

REPOSITORY               TAG       IMAGE ID       CREATED          SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
swesonga-jdk21-testapp   latest    682fedf54071   11 minutes ago   1.22GB    443.1MB       776.9MB       2
<none>                   <none>    4c068055cad5   26 minutes ago   443.1MB   443.1MB       0B            1

Containers space usage:

CONTAINER ID   IMAGE                    COMMAND                  LOCAL VOLUMES   SIZE      CREATED          STATUS    NAMES
c77b69082a8a   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        6 minutes ago    Created   awesome_chatelet
58c723638dd2   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        8 minutes ago    Created   sharp_lamport
4a3be55725b5   4c068055cad5             "/home/<user>/swesonga/…"   0               0B        17 minutes ago   Created   lucid_tharp

Local Volumes space usage:

VOLUME NAME   LINKS     SIZE

Build cache usage: 776.9MB
...

I tried pruning the build cache as suggested in that post.

user@machine:~/swesonga$ docker builder prune --all
WARNING! This will remove all build cache. Are you sure you want to continue? [y/N] y
ID                                              RECLAIMABLE     SIZE            LAST ACCESSED
te4o8rbj7s6nh6pluquzummzz                       true            0B              8 minutes ago
n2wbz4gf448fluw4uuogiqxdo*                      true    776.9MB         8 minutes ago
...
Total:  1.554GB

I realized that pruning wasn't what I needed: the cache was now empty, but the containers were still there, as the next output shows:

user@machine:~/swesonga$ docker system df -v
Images space usage:

REPOSITORY               TAG       IMAGE ID       CREATED          SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
swesonga-jdk21-testapp   latest    682fedf54071   18 minutes ago   1.22GB    443.1MB       776.9MB       2
<none>                   <none>    4c068055cad5   33 minutes ago   443.1MB   443.1MB       0B            1

Containers space usage:

CONTAINER ID   IMAGE                    COMMAND                  LOCAL VOLUMES   SIZE      CREATED          STATUS    NAMES
c77b69082a8a   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        13 minutes ago   Created   awesome_chatelet
58c723638dd2   swesonga-jdk21-testapp   "/home/<user>/swesonga/…"   0               0B        15 minutes ago   Created   sharp_lamport
4a3be55725b5   4c068055cad5             "/home/<user>/swesonga/…"   0               0B        24 minutes ago   Created   lucid_tharp

Local Volumes space usage:

VOLUME NAME   LINKS     SIZE

Build cache usage: 0B

CACHE ID   CACHE TYPE   SIZE      CREATED   LAST USED   USAGE     SHARED
user@machine:~/swesonga$

I should have been using docker ps -a instead! The -a flag shows all existing containers (regardless of whether they are running).

user@machine:~/swesonga$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS    PORTS     NAMES
c77b69082a8a   682fedf54071   "/home/<user>/swesonga/…"   16 minutes ago   Created             awesome_chatelet
58c723638dd2   682fedf54071   "/home/<user>/swesonga/…"   18 minutes ago   Created             sharp_lamport
4a3be55725b5   4c068055cad5   "/home/<user>/swesonga/…"   27 minutes ago   Created             lucid_tharp
user@machine:~/swesonga$

I started displaying the Dockerfile before building because one of the errors I ran into was caused by not having saved the Dockerfile. Sheesh.

cat Dockerfile
docker build -t swesonga-jdk21-testapp .
docker ps -a
docker run -i -t swesonga-jdk21-testapp

After all this experimentation, we can remove all the broken containers using docker container rm | Docker Docs to remove individual ones or docker container prune | Docker Docs to remove all stopped containers (use with caution)!

docker container prune
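To remove specific containers instead of pruning all stopped ones, pass their IDs (from docker ps -a) to docker container rm. For example, using the container IDs from the output above:

docker container rm c77b69082a8a 58c723638dd2 4a3be55725b5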

Collect Logs from the Container

I used CTRL+C to stop the container after some time. The next question was how to examine the files in the container. linux – Exploring Docker container’s file system – Stack Overflow suggests using docker container cp | Docker Docs. I opened another SSH session to the host running Docker then copied the logs from the container using these commands:

mkdir -p ~/swesonga/logs/tip/
cd ~/swesonga/logs/

docker ps -a
export CONTAINERID=XXXXXXXXXXX

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/serialgc-jdk21.log ~/swesonga/logs/

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/pagesize-jdk21.log ~/swesonga/logs/

docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/os-jdk21.log ~/swesonga/logs/
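Since all three logs live in the same directory in the container, a small loop makes this less repetitive (a sketch that reuses the container path and log file names above):

for logfile in serialgc-jdk21.log pagesize-jdk21.log os-jdk21.log; do
  docker cp $CONTAINERID:/swesonga/factorize/java/project/src/main/java/org/swesonga/math/$logfile ~/swesonga/logs/
done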

I switched back to the development box and used scp to get the log files back to it.

scp user@IPaddress:/home/<user>/swesonga/logs/*.log .

When gathering multiple rounds of logs, it is convenient to group them into separate folders. I used this command:

export CURRDATE=`date +%Y-%m-%d_%H%M%S`; mkdir -p ./logs-$CURRDATE; scp user@IPaddress:/home/<user>/swesonga/logs/*.log ./logs-$CURRDATE/

Setting Container Limits

To see the amount of RAM on the Docker host, run free -h since it's an Ubuntu host. How do we set a RAM limit for a docker container? Docker – Setting Memory And CPU Limits points me to the --memory parameter of docker run | Docker Docs.

docker ps -a
docker run -i --memory 2GB -t swesonga-jdk21-testapp

Observe the head of the jdk21 GC log below. The total memory is now reported as 2048M. The maximum heap size is 25% of this total, as expected. The initial heap is 32MB and the minimum heap is 8MB.

[0.006s][info][gc,init] CardTable entry size: 512
[0.006s][debug][gc,heap] Minimum heap 8388608  Initial heap 33554432  Maximum heap 536870912
[0.006s][info ][gc     ] Using Serial
[0.006s][debug][gc,heap,coops] Protected page at the reserved heap base: 0x0000120000000000 / 2097152 bytes
[0.006s][debug][gc,heap,coops] Heap address: 0x0000120000200000, size: 512 MB, Compressed Oops mode: Non-zero disjoint base: 0x0000120000000000, Oop shift amount: 3
[0.007s][info ][gc,init      ] Version: 21.0.5+11-LTS (release)
[0.007s][info ][gc,init      ] CPUs: 20 total, 20 available
[0.007s][info ][gc,init      ] Memory: 2048M
[0.007s][info ][gc,init      ] Large Page Support: Disabled
[0.007s][info ][gc,init      ] NUMA Support: Disabled
[0.007s][info ][gc,init      ] Compressed Oops: Enabled (Non-zero disjoint base)
[0.007s][info ][gc,init      ] Heap Min Capacity: 8M
[0.007s][info ][gc,init      ] Heap Initial Capacity: 32M
[0.007s][info ][gc,init      ] Heap Max Capacity: 512M
[0.007s][info ][gc,init      ] Pre-touch: Disabled
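The 25% figure is the JVM's default MaxRAMPercentage. One way to double-check the values the JVM computes inside a memory-limited container is to override the image's entrypoint and print the final flag values. This is a sketch that reuses the java path from the Dockerfile above:

docker run -i --memory 2GB --entrypoint /swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java swesonga-jdk21-testapp -XX:+PrintFlagsFinal -version | grep -E "MaxHeapSize|RAMPercentage"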

The number of CPUs can be set using the --cpus option.

cat Dockerfile
docker build -t swesonga-jdktip-testapp-2core .
docker ps -a
docker run -i --cpus 2 --memory 2GB -t swesonga-jdktip-testapp-2core
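To confirm what the JVM sees under these limits, -XshowSettings:system prints the operating system metrics (including the effective CPU count and memory limit detected from the container). A sketch, again overriding the entrypoint of the jdk21 image since its java path is known:

docker run -i --cpus 2 --memory 2GB --entrypoint /swesonga/java/binaries/jdk/x64/jdk-21.0.5+11/bin/java swesonga-jdk21-testapp -XshowSettings:system -version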

Viewing Container Stats

One of the questions I need to answer is how much memory is in use and how much is available in the container. A view available memory in a docker container – Google Search pointed me to How to Use the Resource Usage Docker Extension. Turns out docker stats is good enough for my needs right now.

user@machine:~/swesonga$ docker stats

CONTAINER ID   NAME                      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O   PIDS
7f096bd5163d   condescending_goldstine   196.82%   26.03MiB / 2GiB     1.27%     806B / 0B   0B / 0B     13

Categories: Containers

Installing Docker on Ubuntu

I was looking for the authoritative way to install docker on Ubuntu. install docker ubuntu – Google Search points me to Install Docker Engine on Ubuntu | Docker Docs. Running this command shows that none of the packages in the Uninstall old versions section are installed on my Ubuntu VM.

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done

Docker Engine comes bundled with Docker Desktop for Linux. This is the easiest and quickest way to get started.

Install Docker Engine on Ubuntu | Docker Docs

The Docker Desktop generic installation steps link to Install Docker Desktop on Ubuntu | Docker Docs. Step 1 is to set up Docker’s package repository.

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

I end up running step 2 as well to install the docker engine and call it good.

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Skipping the package repository setup step will result in these errors (seen on x64 5.10.102.1-microsoft-standard-WSL2 but all other steps and output are from a VM):

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'docker-ce' has no installation candidate
E: Unable to locate package docker-ce-cli
E: Unable to locate package containerd.io
E: Couldn't find any package by glob 'containerd.io'
E: Couldn't find any package by regex 'containerd.io'
E: Unable to locate package docker-buildx-plugin
E: Unable to locate package docker-compose-plugin

I list the available containers by running docker ps and there are none, but this verifies that docker is working.

saint@ubuntuvm:~$ sudo docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
saint@ubuntuvm:~$

The hello-world image runs successfully as well.

saint@ubuntuvm:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
719385e32844: Pull complete 
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

saint@ubuntuvm:~$ 

Running Docker in WSL

I followed the above steps to install docker in my Windows Subsystem for Linux Ubuntu 22.04.2 LTS environment. Unfortunately, docker ps does not work.

saint@mymachine:~$ sudo docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
saint@mymachine:~$ 

linux – Docker not running on Ubuntu WSL due to error cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? – Stack Overflow suggests running sudo dockerd. Here is the tail end of the output, including an error.

ERRO[2023-10-17T09:09:19.059240012-06:00] failed to initialize a tracing processor "otlp"  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-10-17T09:09:19.059460691-06:00] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-10-17T09:09:19.059530687-06:00] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-10-17T09:09:19.059629051-06:00] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2023-10-17T09:09:19.059665540-06:00] containerd successfully booted in 0.025117s
INFO[2023-10-17T09:09:19.114570236-06:00] [graphdriver] using prior storage driver: overlay2
INFO[2023-10-17T09:09:19.114803099-06:00] Loading containers: start.
INFO[2023-10-17T09:09:19.297993571-06:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2023-10-17T09:09:19.298958219-06:00] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2023-10-17T09:09:19.299104948-06:00] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: unable to add return rule in DOCKER-ISOLATION-STAGE-1 chain:  (iptables failed: iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN: iptables v1.8.7 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain DOCKER-ISOLATION-STAGE-1
 (exit status 4))

I start by searching for the first error, failed to initialize a tracing processor “otlp” error=”no OpenTelemetry endpoint: skip plugin” – Google Search, instead of the last. However, I find Failed to start docker on WSL · Issue #8450 · microsoft/WSL (github.com) and it has the solution:

edit /etc/default/docker and add DOCKER_OPTS="--iptables=false"

Failed to start docker on WSL · Issue #8450 · microsoft/WSL (github.com)
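A minimal way to apply that fix from the shell, assuming the daemon is started via the init script (sudo service docker), since that is what reads /etc/default/docker:

echo 'DOCKER_OPTS="--iptables=false"' | sudo tee -a /etc/default/docker
sudo service docker restart
sudo docker ps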

Categories: SysAdmin, Windows

Disabled Device & Domain Join Issues

I recently had a Windows 11 device that was disabled by IT. The process of getting assistance exposed me to the types of Windows tools I never use: administration tools. IT would have me launch Quick Assist and give them control of my computer. This is when I learned of the existence of tools like dsregcmd /status, which turn out to be well documented; e.g. see Troubleshoot hybrid Azure Active Directory-joined devices – Microsoft Entra | Microsoft Learn.

The Mobile Device Management tools were also used to generate some logs for inspection. These are documented in the section on how to Diagnose MDM failures in Windows 10 – Windows Client Management | Microsoft Learn. Unfortunately, these tools were not sufficient to restore my device to working order.

The last resort was to reset my device. After years of dumping stuff all over my hard drives, I was forced to do some cleanup to ensure I didn’t lose anything valuable. Going forward, everything will now be well organized so that whatever isn’t on OneDrive should be fine to lose. Ironically, the device reset tool could not let me sign in, which I needed to do to reset the device. We tried using the Reset this PC tool but it could not find the recovery partition.

As a last resort, I went to Download Windows 11 (microsoft.com) and downloaded the media creation tool to make a bootable USB drive (the Create Windows 11 Installation Media section). I picked up a 128GB onn stick from Target.

I discovered that setup wouldn’t proceed if the selected disk still had BitLocker enabled. After turning off BitLocker, I formatted my disks and got a fresh installation going. Now that I have so much disk space available, I have no idea why my disk was almost full – I’m not yet missing anything but time will tell if I erased something valuable. The last bit was Windows activation. This is supposed to happen automatically but since it didn’t, we had to use the Slmgr.vbs script.

slmgr.vbs /ckms
slmgr /skms KMS.host.computer.to.contact
slmgr /ipk AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
slmgr /ato

The last command failed for some reason, so the workaround was to use these commands:

slmgr /upk
slmgr /cpky
slmgr /ipk AAAAA-BBBBB-CCCCC-DDDDD-EEEEE

Installing CentOS on Hyper-V

A few months ago I set out to install CentOS on a VM on my Windows 11 desktop. I selected the x86_64 RPM link on the CentOS Download page (centos.org), which linked to the CentOS Mirror. Browsing to the isos/x86_64 directory presented a list of mirrors with ISO images. I selected the MIT mirror: Index of /centos/7.9.2009/isos/x86_64/ (mit.edu) then downloaded the DVD-2009 ISO.

Next, I created a new VM in Hyper-V and set the downloaded ISO as the boot disk. This was not sufficient to start the VM: Hyper-V failed to boot it because the signed image's hash was not allowed.

Hyper-V VM Boot Failure Summary

The solution to this is from this article: Hyper-V Boot Error: The Image’s Hash and Certificate Are not Allowed (bobcares.com). Uncheck the Enable Secure Boot option in the VM’s settings then reboot the VM.

Disabling the Secure Boot Option

Setup is now straightforward. Here are the screenshots of the setup process. I selected the Server with GUI Base Environment with the Performance Tools and System Administration Tools add-ons.

Once setup completed, CentOS booted and prompted me to accept the license as shown in these screenshots.

This was my first time using CentOS in more than a decade so I was pleased that there wasn’t anything particularly jarring about the experience.


Sharing Files with Ubuntu Guest on Hyper-V Host

Of the many ways to transfer files to an Ubuntu guest on Hyper-V, running these PowerShell commands (as admin) suffices for a one-off file transfer. See 4 Ways to Transfer Files to a Linux Hyper-V Guest (altaro.com) for more details about this approach.

Enable-VMIntegrationService -VMName 'Ubuntu 22.04 LTS' -Name 'Guest Service Interface'

Copy-VMFile -Name 'Ubuntu 22.04 LTS' -SourcePath 'dumpfile.gz' -DestinationPath '/home/saint/Downloads' -FileSource Host
Copy-VMFile in Action

Backstory

Yesterday I had a core dump from a Linux process that I wanted to inspect specifically in an Ubuntu VM. My host is a Windows 11 (10.0.22621.674) machine. The simple question of how to share files with my Ubuntu VM took me all over the map. Searching for hyper-v share files linux guest led me to Shared Folders over Hyper-V Ubuntu Guest (linuxhint.com). This had me enabling SMB 1.0/CIFS File Sharing Support (I already had SMB Direct enabled) and Public folder sharing.

SMB Windows Features
Public Folder Sharing Settings

I then created an empty directory and turned on sharing on it as instructed. However, accessing it from Ubuntu turned out to be the problem. These are the suggested commands:

sudo apt install cifs-utils
mkdir ~/SharedFolder
sudo mount.cifs //<NAME OF YOUR PC>/<SHARED FOLDER NAME> ~/SharedFolder -o user=<YOUR WINDOWS USERNAME>

mount.cifs failed though.

saint@linuxvm:~$ sudo mount.cifs //DEVICENAME/virtual-machines ~/shared -o user=USERNAME
Password for USERNAME@//DEVICENAME/virtual-machines: ***
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)

There doesn’t seem to be anything particularly interesting at mount.cifs(8) – Linux man page (die.net). Running dmesg showed these messages:

[  425.318905] CIFS: Attempting to mount \\DEVICENAME\virtual-machines
[  425.318905] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
[  425.318905] CIFS: VFS: \\DEVICENAME Send error in SessSetup = -13
[  425.318905] CIFS: VFS: cifs_mount failed w/return code = -13

cifs status_logon_failure – Search (bing.com) leads to a comment at STATUS_LOGON_FAILURE (0xc000006d) · Issue #478 · hierynomus/smbj (github.com) stating that STATUS_LOGON_FAILURE means that your credentials were rejected. This error code (and others) is documented at [MS-CIFS]: SMB Error Classes and Codes | Microsoft Learn. The Windows event logs do not contain any entries related to this (surprisingly), so I pivot to the next result from my search for hyper-v share files linux guest.

4 Ways to Transfer Files to a Linux Hyper-V Guest (altaro.com) instructs you to enable the file copy guest service (either using PowerShell or the GUI). Apparently a power cycle of the VM is not necessary. See the article for more info.

Enable-VMIntegrationService -VMName LinuxVM3 -Name 'Guest Service Interface'

Copy-VMFile -Name LinuxVM3 -SourcePath 'dumpfile.gz' -DestinationPath '/home/saint/Downloads' -FileSource Host

Unfortunately, Copy-VMFile fails. The VM is running Ubuntu 20.04.1 (x86_64) with kernel 5.15.0-52-generic.

It shouldn’t be this hard to just get a file into a guest VM. Looking up the docs again, Use local resources on Hyper-V virtual machine with VMConnect | Microsoft Learn suggests VMConnect, but it looks like enhanced session mode and Type clipboard text are only available on VMs running a recent Windows OS. For Ubuntu, that article points to Changing Ubuntu Screen Resolution in a Hyper-V VM | Microsoft Learn. At this point, I decide to create a new VM using Hyper-V’s quick create, hoping that it will have the proper configuration for what I’m trying to do.

Creating an Ubuntu VM

Click on Hyper-V’s Quick Create… command to start creating a VM. Select the latest Ubuntu LTS (22.04). Unfortunately, the only options available are the VM name and the network switch to use. Clicking on Create Virtual machine creates a VM on the primary/OS disk. I was pleasantly surprised to find that the Ubuntu 22.04 VM appeared to support enhanced session mode when Hyper-V asked for the screen resolution when connecting to it:

Connecting to Ubuntu VM

The enhanced session gives this xrdp login window:

xrdp Login Window

The window disappears when I enter my credentials and nothing happens for some time. I used the “Basic Session” toolbar button to switch back to the normal mode I’m used to. These are some of the errors I encounter:

Oh no! Something has gone wrong.
Internal Error Details

The error report points out that I have obsolete packages, among them gnome-shell (which crashed). I run sudo apt upgrade and say yes to the 368 upgrades (826 MB of archives). That is not sufficient to address this rdp bug, so I stay in Basic Session mode for the rest of the time.

This leads me back to the PowerShell commands I used above. Lo and behold, they work this time! This is despite the fact that there don’t appear to be any processes displayed by ps -u root | grep hyper as described at 4 Ways to Transfer Files to a Linux Hyper-V Guest (altaro.com).

Enable-VMIntegrationService -VMName 'Ubuntu 22.04 LTS' -Name 'Guest Service Interface'

Copy-VMFile -Name 'Ubuntu 22.04 LTS' -SourcePath 'dumpfile.gz' -DestinationPath '/home/saint/Downloads' -FileSource Host

This is when I discover that I do not have enough space on the VM to expand my .gz file.

Inspecting Disk Usage

Unfortunately, the disk for the VM is only 12 GB (confirmed by launching Ubuntu and running out of space). Therefore, once the installation completes, expand the disk from 12 GB to a more reasonable size (e.g. 127 GB). If the default drive Quick Create used for the VM’s virtual disk does not have sufficient space, you will need to move the virtual hard disk to another drive then expand the partition in Ubuntu to use the whole virtual disk.

Moving Ubuntu VM to a Bigger Disk

My main desktop has a 500 GB SSD that had only about 20GB of space free. How unpleasant to then discover that Quick Create simply dumped the new VM on it AND created such a small disk to start with, all without asking. Turns out I’m not the only one that finds this behavior less than ideal: hyperv quick create disk size – Search (bing.com) pointed me to Hyper-V Ubuntu 18.04 Quick Create disk size is too small · Issue #82 · microsoft/linux-vm-tools (github.com) and unfortunately, it doesn’t look like there’s a resolution to this issue. My solution was to create a new virtual disk on my secondary 3.5 TB hard drive.

If the VM is still running, this error dialog will most likely be displayed.

After starting the VM again, I still didn’t have enough space to decompress my .gz file.

Inspecting Disk Usage

Fortunately, there is a useful site explaining how to Expand Ubuntu disk after Hyper-V Quick Create – Anton Karl Ingason (linguist.is):

sudo apt install cloud-guest-utils
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1

growpart failed the first time I ran it. The disk was still 12 GB!

I had to turn off the VM, wait for the disk “merging” status to go away, then go to edit the disk in Hyper-V:

There were some scary warnings about data loss that I promptly ignored, marching forward since I didn’t yet have any critical data on that disk.

Once the expansion completes, the growpart command can be successfully executed as shown below.

Running growpart in Ubuntu
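To verify that the partition and filesystem now span the larger disk, a quick check (assuming the root filesystem is on /dev/sda1, as in the growpart commands above):

lsblk /dev/sda
df -h /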

Open Questions

  1. Why does mount.cifs fail (on both VMs)?
  2. Why does Copy-VMFile work on Ubuntu 22 VM but not Ubuntu 20?

Setting up an Amazon AMI

Start with the CS462 AMI.

Edit multiverse.list

sudo vi /etc/apt/sources.list.d/multiverse.list

Add the following lines to multiverse.list:

deb http://us.ec2.archive.ubuntu.com/ubuntu/ karmic multiverse
deb-src http://us.ec2.archive.ubuntu.com/ubuntu/ karmic main

Then run the following commands:

sudo apt-get update
sudo apt-get install apache2
sudo apt-get install php5 php5-cli php-pear php5-gd php5-curl
sudo apt-get install libapache2-mod-php5
sudo apt-get install libapache2-mod-python
sudo apt-get install ec2-ami-tools
sudo apt-get install ec2-api-tools
sudo apt-get install python-cheetah
sudo apt-get install python-dev
sudo apt-get install python-setuptools
sudo apt-get install python-simplejson
sudo apt-get install python-pycurl
sudo apt-get install python-imaging
sudo apt-get install subversion
sudo apt-get install git-core

Note: the sun-java6-bin and libphp-cloudfusion packages are not strictly necessary (OpenJDK will be installed instead of the former, and the AWS PHP SDK instructions are given below instead of the latter). unzip could come in handy as well. The python packages are installed to allow for python web development without having to install the appropriate packages after starting the server.

git config --global user.name "Johny Boy"
git config --global user.email johnyboy@gmail.com
sudo vi /etc/apache2/sites-available/default

Next, install Smarty as per the Smarty documentation (lines 1-3) and the Zend Framework as well (lines 4-7) since it may come in handy.

cd /usr/local/lib
sudo wget http://www.smarty.net/files/Smarty-3.0.7.tar.gz
sudo tar vxzf Smarty-3.0.7.tar.gz
cd /opt
sudo wget http://framework.zend.com/releases/ZendFramework-1.11.4/ZendFramework-1.11.4-minimal.tar.gz
sudo tar vxzf ZendFramework-1.11.4-minimal.tar.gz
sudo mv ZendFramework-1.11.4-minimal ZendFramework-1.11.4

Install System_Daemon as well to enable running PHP Daemons. There’s also a sample daemon illustrating how to use this class.

sudo pear install -f System_Daemon

Clone the AWS PHP SDK into /usr/share/php as documented in the “Getting Started with the AWS SDK for PHP” tutorial (lines 1-3) and then configure the SDK security credentials (lines 4-6).

sudo mkdir -p /usr/share/php
cd /usr/share/php
sudo git clone git://github.com/amazonwebservices/aws-sdk-for-php.git awsphpsdk
mkdir -p ~/.aws/sdk
cp /usr/share/php/awsphpsdk/config-sample.inc.php ~/.aws/sdk/config.inc.php
vi ~/.aws/sdk/config.inc.php

Now we can prepare to create the image and then run the ec2 commands to create, upload, and register the image. See the AMI tools reference for information about these commands. Of course, the actual access key, secret key, bucket names, etc. need to be substituted with the correct values.

cd /mnt
sudo mkdir image
sudo mv /home/ubuntu/PrivateKey.pem .
sudo mv /home/ubuntu/X509Cert.pem .
sudo ec2-bundle-vol -k PrivateKey.pem -c X509Cert.pem -u 999988887777 -d /mnt/image
sudo ec2-upload-bundle -b cs462-machines/mybucket -m /mnt/image/image.manifest.xml -a AKIADQKE4SARGYLE -s eW91dHViZS5jb20vd2F0Y2g/dj1SU3NKMTlzeTNKSQ==
ec2-register cs462-machines/mybucket/image.manifest.xml -K PrivateKey.pem -C X509Cert.pem

Once the process is complete, the instance can be launched with the following user data:

#! /bin/bash
sudo git clone git://github.com/pathtorepo/cs462.git /home/ubuntu/www > /home/ubuntu/gitclone.log
sudo chown -R ubuntu /home/ubuntu/www/
sudo chown nobody:nogroup /home/ubuntu/www/smarty/templates_c/
sudo chown nobody:nogroup /home/ubuntu/www/smarty/cache/
sudo chmod 770 /home/ubuntu/www/smarty/templates_c/
sudo chmod 770 /home/ubuntu/www/smarty/cache/

Note that the owner of the checked out www folder is set to ubuntu to ensure files can be edited conveniently without sudo. The “nobody” user is then made the owner of the smarty folders and they are assigned to the “nogroup” group. The permissions are then set to 770 for maximum security. I actually ended up using 777 to speed up development on my server – see the apache error log if nothing is displayed from the templates (most likely a case of permission errors).

Here are some options to include in the Apache configuration file:

        DocumentRoot /home/ubuntu/www/htdocs
        <Directory /home/ubuntu/www/htdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                DirectoryIndex index.php index.html index.py

                AddHandler mod_python .py
                AddHandler php5-script .php
                PythonHandler mod_python.publisher
                PythonDebug On
        </Directory>

I ended up pushing my server configuration to a public git server containing my entire application as well. Server configuration is then reduced to:

sudo cp /home/ubuntu/www/serverconfig/apache/appserver/default /etc/apache2/sites-available/default
sudo apache2ctl restart

Changing Screen Resolution of Ubuntu in Virtual Box

As suggested on one of the Ubuntu forums, the key here is to install the VirtualBox guest additions. Having done so on my system, I ran these commands:

cd /media/VBOXADDITIONS_3.2.6_63112/
sudo ./VBoxLinuxAdditions-x86.run

Rebooting my virtual machine and maximizing the VirtualBox window left me running Ubuntu at my native screen resolution of 1680×1050 :).

Update: On VirtualBox 4.1.2, use the virtual machine’s Devices -> Install Guest Additions… menu item. The ISO disc should be automatically mounted, and allowing autorun to continue should complete the installation. The VirtualBox website has more information on guest additions.


Categories: SysAdmin

Broken Windows Installer

I was fooling around with the Linux kernel source not too long ago (under the impression that I could build it in a VM on a 1GB Windows XP laptop to which I had access). However, I was dismayed to find that I could barely install Sun VirtualBox. The installer hung (thankfully after getting through all the required files) at the network configuration screen. Interestingly, I had failed at installing Visual C++ 2008 Express Edition as well – the downloads didn’t even begin. I convinced myself that getting the full ISO files was taking the easy way out.

It wasn’t until later (ahem, a few weeks later) that I dug into the cause of the problem and looked into the Windows Installer. Turns out I was using v3.0.who-knows-what. The latest version of the installer from Microsoft fixes these “hanging” issues. You can find out which version of the installer you are running by… hold your breath… Start -> Run -> msiexec

And to think that I put this off for a few weeks!


Categories: SysAdmin

Module loaded but DllRegisterServer failed with error code 0x80070005

While recently monkeying around with registering some DLLs, I ran across the error message in the title of this post.

Looking at WinError.h (in the %ProgramFiles%\Microsoft SDKs\Windows\v6.0A\Include folder on my system) reveals that it’s just a fancy “you-don’t-have-permission-to-do-this” dialog box:

//
// MessageId: E_ACCESSDENIED
//
// MessageText:
//
// General access denied error
//
#define E_ACCESSDENIED                   _HRESULT_TYPEDEF_(0x80070005L)

I simply switched to an account with administrative privileges and voila! No failure this time. But couldn’t these codes be run through FormatMessage or something similar for crying out loud! Of course there are always other ways to figure out what is going on though.


Categories: SysAdmin

Hello!

First post!