Wednesday, January 19, 2022

Quick and Dirty Function to get Rounded Up Log2

Here is a quick and dirty C/C++ function that returns the rounded-up log base 2 of any positive integer.

int ceil_log_base_2(int value) {
    int ones = 0;    /* how many bits are set */
    int shifts = 0;  /* how many shifts it takes to consume the value */
    while (value > 0) {
        shifts++;
        if (1 & value)
            ones++;
        value = value >> 1;
    }
    /* shifts - 1 is floor(log2); round up if more than one bit was set */
    return --shifts + (ones > 1 ? 1 : 0);
}
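
If you want a quick sanity check, a test harness like this (my naive reference implementation, pasted into the same file as the function above) should do:

#include <stdio.h>

/* Naive reference: the smallest n such that 2^n >= value. */
static int naive_ceil_log2(int value) {
    int n = 0;
    while ((1 << n) < value)
        n++;
    return n;
}

int main(void) {
    for (int v = 1; v <= 4096; v++) {
        if (ceil_log_base_2(v) != naive_ceil_log2(v)) {
            printf("mismatch at %d\n", v);
            return 1;
        }
    }
    printf("all good\n");
    return 0;
}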

My buddy Rob jumped in and spun this one x86 assembly style:

int __builtin_popcount (unsigned int x);
int __builtin_clz (unsigned int x);

int rob_ceil_log_base_2 (int value) {
    return ((sizeof(int) * 8) - __builtin_clz (value)) +
        (__builtin_popcount(value) > 1 ? 1 : 0) - 1;
}

And if you prefer inline, Rob has you covered there too:

int __builtin_popcount (unsigned int x);
int __builtin_clz (unsigned int x);
/* Parenthesized so the macro expands safely inside larger expressions. */
#define rob_ceil_log_base_2(value) \
    ((int) (sizeof(value) * 8) - __builtin_clz (value) + \
     (__builtin_popcount(value) > 1 ? 0 : -1))

Sunday, February 21, 2021

Bootable Linux Sparse Virtual Disk Images

The following recipe will get you a bootable sparse disk image that is 20GB in size, but only takes up a minimal amount of disk space (about 4.5MB to start). This process is suitable for creating disk images for Linux virtual machines.

First step is to create your sparse file:

truncate example.img --size 20G

This sets the apparent size to 20 GiB (the 21474836480 bytes that stat reports below), but in reality it is not taking up any space:

# ls --size --block-size=1 example.img

0 example.img

# stat --format='%s' example.img

21474836480
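
Incidentally, the truncate utility is essentially a thin wrapper around the ftruncate(2) system call; a minimal C sketch of the same operation (filename hardcoded for the example) would be:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Creating a file and extending its length without writing data
       yields a sparse file: the size lives in the inode, but no
       blocks are allocated until something writes to them. */
    int fd = open("example.img", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ftruncate(fd, 20LL * 1024 * 1024 * 1024) != 0) { /* 20 GiB */
        perror("ftruncate");
        return 1;
    }
    close(fd);
    return 0;
}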

Since you probably want to install a bootloader in order to make this a bootable image, we are going to need a partition table. I prefer the parted command for this over our old friend fdisk, since parted is a bit easier to script.

# parted example.img mklabel gpt

This creates a GPT partition table at the start of the image file (the first partition will start at sector 2048, thanks to parted's 1 MiB alignment). The default partition table type is MBR, which is fine if you plan on staying under 2TB, and do not mind dealing with extended and logical partitions. I see little to be lost by using GPT, since it is part of UEFI, and is backward compatible with legacy BIOS type systems.

# parted example.img print

Model:  (file)
Disk /tmp/example.img: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

Checking on the size, we see that our image file takes up about 40 kbytes, despite still appearing to be 20 gigabytes in size.

# ls --size --block-size=1 example.img

40960 example.img

# stat --format='%s' example.img

21474836480
 

Now we can add a partition. In this case I am only going to create one partition that uses all of the available space and I am going to give it the name "vm-root" to avoid confusion later.

# parted example.img mkpart primary 0% 100%

# parted example.img name 1 vm-root

# parted example.img print

Model:  (file)
Disk /tmp/example.img: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  21.5GB  21.5GB               vm-root

Now we format the partition; I generally use ext4 these days. There is a fairly significant limitation to the ext4 mkfs tooling that forces us to use loopback devices at this point.

It was my hope that mkfs would figure out the partition size on its own, or perhaps let me specify it as an argument. But all attempts to do that caused mkfs to overrun the partition boundaries and break the backup GPT partition table.

When you specify fs-size to the mkfs.ext4 command, what you are specifying is the usable space you want, not the actual size of the available volume. The mkfs.ext4 command gets the volume size from the kernel's block layer, and then juggles a lot of complex logic to figure out how much space needs to be burned for meta information like superblocks and inode tables.

I probably could have figured out the math for the fs-size argument and formatted the image file partition directly, but there are too many variables to make me feel like that is a good use of my time. That being said, it would be nice if the mkfs tooling got a virtual image mode that either allowed you to specify the device size, or detected the partition size from the partition table.
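
For the curious, the size that mkfs sees comes from the kernel via the BLKGETSIZE64 ioctl; a rough C sketch of that query (the device path is just an example) looks like:

#include <fcntl.h>
#include <linux/fs.h>    /* BLKGETSIZE64 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/loop0p1", O_RDONLY); /* example device */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    unsigned long long bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) != 0) {
        perror("ioctl");
        return 1;
    }
    printf("%llu bytes\n", bytes);
    close(fd);
    return 0;
}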

We need to bind our image file and partition to a loopback device:

DEV=$(losetup --show --find --partscan example.img)

 And now we can format our new partition:

# mkfs.ext4 -F ${DEV}p1

mke2fs 1.45.6 (20-Mar-2020)
/dev/loop0p1 contains a ext4 file system
        created on Sun Feb 21 19:32:16 2021
Discarding device blocks: done                            
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: ede5cf4d-f7f6-4747-8c7a-28d794f92b92
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

 Another size check shows that we are now up to about 4.5 megabytes.

# ls --size --block-size=1 example.img

4505600 example.img

# stat --format='%s' example.img

21474836480

Pretty good so far. Now we can mount our new volume and take a look around.

# mkdir img-mp

# mount ${DEV}p1 img-mp

# df img-mp

Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/loop0p1    20509264 45080  19399328   1% /tmp/img/img-mp

And now for the cleanup:

# umount ${DEV}p1

# losetup -d ${DEV}

To make this a bootable image, attach the loop device and mount your volume again, then copy the Linux OS file tree of your choice into the mounted volume. Then use the grub-install command to install the boot loader. Note that legacy BIOS booting from a GPT disk generally also requires a small BIOS boot partition (flagged bios_grub in parted) for GRUB to embed itself into, so you may need to add one.

# grub-install --modules=part_gpt --root-directory /tmp/img/img-mp ${DEV}

Since this is a sparse image, it will grow larger as more data is written to it until you hit the size limit, but it will not get smaller when data is deleted.

The simplest way to compact this image is to use the zerofree command to zero out the empty space, and then use your VM hypervisor's tools to do the rest, such as virt-sparsify or VirtualBox's VBoxManage modifymedium ${FILENAME} --compact.
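
Under the hood, tools that re-sparsify an image work by punching holes over runs of zero blocks; a C sketch of that operation (the offsets here are made up for illustration) looks something like:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("example.img", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Deallocate a 1 MiB run of zeros at offset 16 MiB;
       KEEP_SIZE preserves the apparent file length. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  16LL * 1024 * 1024, 1LL * 1024 * 1024) != 0) {
        perror("fallocate");
        return 1;
    }
    close(fd);
    return 0;
}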


Tuesday, February 09, 2021

Bash-ism to Display Last Command in xterm Title Bar

I got frustrated at the lack of support for displaying the last command entered in the xterm title bar when I bounced around from host to host, so I came up with a bash-ism to remedy the problem.

Add these to your .bashrc file on any hosts where this matters to you. I suspect there is a cleaner implementation, but all of the command escaping was getting too complicated, so I broke it up into two separate functions.

This only works with versions of bash with PS0 support (bash 4.4 or higher) and I have only tested it using the macOS Terminal.app. It seems to work exactly as expected in screen sessions too, which is a nice bonus.

Fixes welcomed and appreciated, particularly those that make this work on earlier versions of bash. Just leave a comment or @ me on twitter.

 

settitle() {
    [[ -z $1 ]] && return
    # Only emit the title escape sequence on xterm-compatible terminals
    case ${TERM} in
        xterm*) echo -n -e "\033]0;$1\007";;
    esac
}

lastcmd() {
    # fc -l -1 -1 lists the last history entry; sed strips the leading number
    echo $(fc -l -1 -1 | sed -e 's/\s*[0-9]*\s*//');
}

# PS0 is expanded (and printed) after a command is read, just before it runs
export PS0='$(settitle "$(lastcmd)")'

 

Note: macOS does not ship with bash 4.4 (yet?), so you should make sure to leave the "Active process name" setting checked in Terminal.app to get this information locally. These are the settings that I am using.






Saturday, January 30, 2021

EdgeRouter Failover Configuration with Partial IPv6

I am in that fortunate first-world situation of having two Internet connections wired to my humble abode. One is fast, and the other is pretty slow by modern standards. I considered getting rid of the slow connection, but I got a killer lifetime deal on the price, so it hardly seems worth dropping it.

My EdgeRouter supports failover, so I figured I would take advantage of the second Internet connection and add some redundancy to my home network. I wanted to bias things in favor of the faster connection, so it was important to ensure that I was only on the slower connection whenever the faster connection was unavailable. The faster connection also supports IPv6, which I wanted to retain as much as possible.

The basic failover configuration is simple enough that you can use one of the built-in wizards in the EdgeMAX web UI to set it up.

 

To ensure that the secondary connection only stands-in when the primary connection is unavailable, simply check this box:

In the EdgeOS config, this sets a failover priority value of 100 on the eth0 interface (the fast connection) and 60 on the eth1 interface (the slow connection). Your mileage may vary, but my experimentation showed that it only took six seconds to fail over to the slow connection, and about 40 seconds to fail back to the fast connection.

This particular configuration is quick and easy, but it omits IPv6, so that part requires some hand tweaking and a few compromises in my situation.

The intention behind IPv6 is that everything on the Internet gets a unique address. In contrast, IPv4 simply does not have enough addresses to go around, so the normal approach is to use a DHCP server to hand out RFC 1918 addresses to stuff inside of your private network.

Supporting IPv6 is great for the health of the Internet, but it complicates things when you want to set your SOHO network up for redundancy. When you fail over, your alternate ISP does not recognize the IPv6 address range assigned to all of your devices, so everything stops communicating until each device updates its IPv6 address. In contrast, private IPv4 assignments are typically translated at your firewall, so nothing is required on the client side when you fail over.

To fix this problem with IPv6, we have a few tools at our disposal: NPTv6, NAT66, and ULA. I agree with many other voices on the Internet that NAT66 is a fundamentally broken hack and should not be used. There are privacy concerns with NPTv6, but RFC 4941 seems to address most (all?) of them. ULA solves the problem that RFC 1918 solves with private addresses, and is probably not a significant factor in any SOHO failover design like this.

If I had two connections that supported IPv6, I would probably figure out how to get NPTv6 working, but since I only have one, I really just need to accept a minor compromise. If I choose to support IPv6, I have to accept that I will not have any IPv6 support during a failover. I have no problem with this compromise because we are still in the IPv6 transition phase and virtually everything is available via IPv4. I expect that applications sending traffic over an IPv6 interface will more or less transparently start using the IPv4 interface.

To add IPv6 support to the EdgeOS failover configuration, you can manually add the following sections to support an IPv6 firewall and a DHCPv6 prefix designation.

The firewall configuration (which goes into the firewall section) should look something like this:

    ipv6-name WANv6_IN {
        default-action drop
        description "WAN inbound traffic forwarded to LAN"
        enable-default-log
        rule 10 {
            action accept
            description "Allow established/related sessions"
            state {
                established enable
                related enable
            }
        }
        rule 20 {
            action drop
            description "Drop invalid state"
            state {
                invalid enable
            }
        }
    }
    ipv6-name WANv6_LOCAL {
        default-action drop
        description "WAN inbound traffic to the router"
        enable-default-log
        rule 10 {
            action accept
            description "Allow established/related sessions"
            state {
                established enable
                related enable
            }
        }
        rule 20 {
            action drop
            description "Drop invalid state"
            state {
                invalid enable
            }
        }
        rule 30 {
            action accept
            description "Allow IPv6 icmp"
            protocol ipv6-icmp
        }
        rule 40 {
            action accept
            description "allow dhcpv6"
            destination {
                port 546
            }
            protocol udp
            source {
                port 547
            }
        }
    }

 And this gets added to the eth0 interface configuration:

        dhcpv6-pd {
            pd 0 {
                interface eth1 {
                    host-address ::1
                    prefix-id :1
                    service slaac
                }
                interface switch0 {
                    host-address ::1
                    prefix-id :2
                    service slaac
                }
                prefix-length /60
            }
            rapid-commit enable
        }

Important Note: The /60 prefix length is provider specific. If your provider delegates a different prefix size, you will need to change this value accordingly.

And finally, be sure to add the new IPv6 firewall labels to your eth0 interface firewall configuration to bring it all together:

        firewall {
            in {
                ipv6-name WANv6_IN
                name WAN_IN
            }
            local {
                ipv6-name WANv6_LOCAL
                name WAN_LOCAL
            }
        }

Load that configuration back to EdgeOS and you should be all set after a reboot.

Sunday, October 20, 2019

MSTest Unit Tests on Visual Studio 2019 for Mac

There are several unit test options available in Visual Studio 2019, but the decision to use MSTest for a particular project was out of my hands, so this post covers getting MSTest unit tests working on Visual Studio 2019 for Mac.

The first step is to make sure you are using Visual Studio, and not confusing it with Visual Studio Code. The Visual Studio icon looks like this:

 


And the Visual Studio Code icon looks like this:

 

These instructions apply to Visual Studio (for Mac), not Visual Studio Code.

If you want to download a pre-loaded solution with a working example of an MSTest unit test, feel free to clone this GitHub repository and make any necessary changes. Replace the C# files with yours as needed (or just copy and paste contents). All of the Assert class methods can be found here; you will need them to write other kinds of unit test functions.

Important Note: If you clone the above repository, but have never added all of the NuGet testing plugins, the project will probably fail to build. The resolution is to follow the first half of the instructions below to install all of the necessary NuGet packages.

To update an existing project...

If you run into trouble, it might help to clone the example repository above and compare your solution with mine.

Some guidance instructs you to create a separate sub-project under your project's solution to contain the unit test code. I suspect those instructions work properly for Visual Studio running on Windows, but there seem to be namespace issues in Visual Studio for Mac that prevent that approach from working correctly.

The solution is to apply all of the following instructions to the solution that your source files are stored in, and not to create a separate sub-project to contain the unit tests.

Add the proper unit test packages via NuGet. <ctrl>-click Dependencies and select Manage NuGet Packages...


You should see a screen similar to this one:


Use the search box in the upper right of that screen to find and add the following three packages:
  • Microsoft.NET.Test.Sdk
  • MSTest.TestAdapter
  • MSTest.TestFramework

Once you have added these packages, your solution will probably be broken with the following build message:

error CS0017: Program has more than one entry point defined. Compile with /main to specify the type that contains the entry point.

Many thanks to Andrew Lock for solving this one. His solution is geared towards xUnit, but it works in this case as well.

You should really read his incredibly detailed rundown; I did, and I learned a lot. But if you are short on time, the TL;DR is to add the following line of XML to any <PropertyGroup> section of your project's .csproj file:


<GenerateProgramFile>false</GenerateProgramFile>



And finally, to actually run the unit tests, select "Run Unit Tests" from the "Run" menu.


After you run the unit tests for the first time, you should have a button like this at the bottom of your IDE:

 

Click "Test Results" to display the test results browser and debug any test failures.

Thursday, March 08, 2018

Chasing a (possible) udev issue...

I have been chasing what might possibly be a problem in udev, and am using this blog post as a gathering point for evidence.

Because there is so little to be found on this issue, it is almost certainly a configuration issue in my own environment. Otherwise the Internet would be full of people seeing the same problems.

I also cannot guarantee that all of this evidence is related, nor can I be 100% certain that it is actually due to a problem with udev. All I can say for certain is that the evidence seems to be pointing towards an issue with device nodes not being created when they should be. Since udev is responsible for managing device nodes, it seems reasonable to start considering that udev may be at fault somehow.


First some background information. I am maintaining my own Yocto-based Linux distribution, currently working from the Pyro branch.

  • bash version - 4.3.47(1) (x86_64)
  • LVM version information:
    • LVM Version: 2.02.166(2) (2016-09-26)
    • Library Version: 1.02.135 (2016-09-26)
    • Driver Version: 4.31.0
  • udev version 232

The first problem...

When I attempt to create a new logical volume:

root@server:~# lvcreate -L1M -ntest.data vg00
Rounding up size to full physical extent 4.00 MiB
/dev/vg00/test.data: not found: device not cleared
Aborting. Failed to wipe start of new LV.

When you create a new logical volume with lvcreate, the first 4 KiB needs to be zeroed out to avoid a potential hang when mounting the volume. It appears as if the device file (/dev/vg00/test.data) is not being created in time for the remaining lvcreate tasks to finish. The workaround is to use the -Zn argument to lvcreate and then do a manual zeroing with the dd command, like the following:

root@server:~# lvcreate -Zn -L1M -ntest.data vg00
Rounding up size to full physical extent 4.00 MiB
WARNING: Logical volume vg00/test.data is not zeroed.
Logical volume "test.data" created.
root@server:~# dd if=/dev/zero of=/dev/vg00/test.data bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1kB, 4.0 KiB) copied, 0.00415683 s, 985 kB/s

The second problem...

This one happens when I attempt to use bash process substitution. What I expect to see is something like the following:

root@server:~# echo <(true)
/dev/fd/63

What I actually see is the following:

root@server:~# echo <(true)
-sh: syntax error near unexpected token `('

The characters are absolutely identical in both cases - I have copied and pasted them in every way I know how. I have also tried the same command in a multitude of bash interpreters of various versions and it always works as expected.

Because bash process substitution depends on device nodes being created on the fly, this seems like it would be related to the LVM problem above.
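
For what it is worth, bash implements process substitution by handing the command a path like /dev/fd/63 (on Linux, a symlink into /proc/self/fd), so one way to probe that machinery outside of bash is a little C program along these lines:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    char path[64];
    char buf[16];

    /* Create a pipe and try to reopen its read end via /dev/fd,
       which is the same trick bash uses for process substitution. */
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }
    snprintf(path, sizeof(path), "/dev/fd/%d", fds[0]);
    if (write(fds[1], "hello\n", 6) != 6) {
        perror("write");
        return 1;
    }
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror(path); /* a failure here points at missing /dev/fd support */
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read back: %s", buf);
    }
    return 0;
}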

Conclusions...

As I mentioned at the top, I am not 100% certain that this is all related, but it seems pretty suspicious. Two features that rely on device files being created on the fly both seem to fail. Conversely, the boot process, which creates a lot of device nodes, seems to work just fine. I am also not able to find any smoking guns (or even faint whiffs) in the logs.

More to come...

Saturday, March 26, 2016

Converting global temperature change to energy...

TL;DR: Averaged over the last 100 years, the equivalent of all the energy we generated in 2013 has gotten "stuck" in the atmosphere roughly every 11 years.

The details...

I like reading Cliff Mass' blog because he takes such an even-handed approach to the science of climate change. He is very careful with his assertions and takes on media outlets for being too sensational about what the data does and does not say.

A recent post of his did a fantastic job of putting the effect of climate change into context with natural variability. The punchline is that climate change only enhances what is already natural variability. If it was going to be hot, it will be just a little bit hotter. If it was going to be cold, it will be slightly less cold. If you were going to get a hurricane, it will be a bit more energetic. And so on...

Or to put it in different terms, you cannot blame anything directly on climate change. If you are experiencing it, it was probably going to happen whether or not humans were on the Earth. You are just going to have a more intense experience.

So I applied some High School level math to figure out exactly what that means...

Mass of the Earth's atmosphere: 5x10^18 kg
Average atmospheric specific heat: 1005 J/kg/K
Current average temperature change: about 1 kelvin (a change of 1 K is the same as a change of 1 °C)
Global yearly energy generation in 2013: 5.67x10^20 joules

Note: I assume energy consumption is equivalent to energy generation. While they may not be exactly equal in reality, it is certainly true that generation could not be less than consumption, so the consumption numbers give a conservative estimate. I am also using 2013 energy consumption numbers. The numbers are most certainly higher in 2016.

So given the basic heat equation: Q = mcΔT

Q = Joules of energy required to cause a temperature change.
m = Mass (in kilograms) of the matter you are heating up.
c = The specific heat: a constant representing how difficult it is to change the temperature of the matter.
ΔT = The actual temperature change.

Using the above numbers we get:

Q = 5x10^18 kg * 1005 J/kg/K * 1 K = 5.025x10^21 joules

So that means it takes 5.025x10^21 joules of energy to raise the average temperature of the Earth's atmosphere one degree Celsius. But 5.025x10^21 joules is a really big number that is hard to put into terms that anyone can understand, so let us look at how this compares to how much energy we actually generate on the Earth...

As I mentioned above, in 2013 we generated about 5.67x10^20 joules of energy. If we divide the amount of energy it took to raise the atmosphere by 1 degree Celsius, by the amount of energy we generated in 2013, we get:

5.025x10^21 joules / 5.67x10^20 joules = 8.86

So this means we have 8.86 times the energy we generated in 2013 currently trapped in the atmosphere. But this number is still not very interesting because it says nothing about the rate at which this is happening.

The industrial era has been going on for about 100 years now, which is about 36,525 days. In that amount of time, we have managed to alter the atmosphere so that 8.86 times the 2013 energy generating capacity of the earth is stuck in it.

Averaging over those 36,525 days:

5.025x10^21 joules / 36,525 days = 1.38x10^17 joules stuck in the atmosphere per day

Which means that the atmosphere has been retaining an average of 1.38x10^17 joules of energy every single day.

So how does that compare to the 2013 energy generating capacity?

1.38x10^17 joules per day / 5.67x10^20 joules in 2013 = 0.00024, or about 0.024% of the energy generated in 2013

So that means, for every single day in the last 100 years, the atmosphere has retained about 0.024% of the equivalent of the 2013 energy generating capacity. Or to flip that around, roughly every 4,100 days (a little over 11 years) the amount of energy we generated in 2013 gets "stuck" in the atmosphere.
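
If you want to double check the arithmetic, a throwaway C program like this reproduces all of the numbers above:

#include <stdio.h>

int main(void) {
    const double mass_atmosphere = 5e18;  /* kg */
    const double specific_heat = 1005.0;  /* J/kg/K */
    const double delta_t = 1.0;           /* K */
    const double energy_2013 = 5.67e20;   /* joules generated in 2013 */
    const double days = 36525.0;          /* about 100 years */

    double q = mass_atmosphere * specific_heat * delta_t;
    double per_day = q / days;

    printf("Q = %.4g joules\n", q);                                   /* ~5.025e21 */
    printf("multiples of 2013 generation: %.2f\n", q / energy_2013);  /* ~8.86 */
    printf("retained per day: %.3g joules (%.3f%% of 2013)\n",
           per_day, per_day / energy_2013 * 100.0);                   /* ~0.024% */
    printf("days per 2013-equivalent: %.0f\n", energy_2013 / per_day); /* ~4100 */
    return 0;
}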