Apr 24 2018

Time printing and parsing is always “fun” regardless of the language but, most of the time, C# actually has it much better than one would expect. However, sometimes one can get surprised by small details.

One application had the following (extremely simplified) code writing time entries:

var text = someDate.ToString("HH:mm", CultureInfo.InvariantCulture);

The other application had the (also simplified) parsing code:

var someTime = TimeSpan.ParseExact(text, "HH:mm", CultureInfo.InvariantCulture);

And all I got was System.FormatException: 'Input string was not in a correct format.'

The issue was not in the text; all entries were valid times. Nope, the issue was in a slight difference between the custom DateTime and the custom TimeSpan format specifiers.

Reading the documentation, it’s really easy to spot the first error as there is a clear warning: “The custom TimeSpan format specifiers do not include placeholder separator symbols.” In short, that means we cannot use a colon (:) as a TimeSpan separator directly. Nope, we need to escape it using the backslash (\) character. I find this one mildly annoying as not supporting time separators in TimeSpan seems like pretty basic functionality.

But there is also a second error that’s invisible if you’re not watching carefully. TimeSpan has no knowledge of the H specifier; it understands only its lowercase brother h. I would personally argue this is wrong. If only one of the two had to be supported, it should have been H as it denotes time more unambiguously.

In any case, the corrected code was as follows:

var someTime = TimeSpan.ParseExact(text, "hh\\:mm", CultureInfo.InvariantCulture);

Ideally one would always want to use the same format for output as for input, but just because the semantics of one class match the other for your use case, you cannot count on the format specifiers being the same.

Apr 19 2018

Recently I decided to stop using Google Analytics for my web site traffic analysis. Yes, it’s still probably the best analytics tool out there, but I actually don’t care about the details that much. The only thing I care about is the trend – is the site getting better or worse – and not much else. For that purpose, a simple Webalizer setup would do.

My Webalizer setup was actually as close to plain as it gets. The only trick up my sleeve was that I had a separate configuration file for each of my sites. Of course, since I use logrotate to split my Apache traffic logs, I also needed to add a bit of prerotate action into the /etc/logrotate.conf to ensure I don’t miss any entries.

My first try was this:


  prerotate
    for FILE in /srv/configs/**/webalizer.conf
    do
      /usr/bin/webalizer -c $FILE
    done
  endscript

And, while this did wonders from the command line, it seemed to do absolutely nothing when executed by logrotate itself. The reason for the misbehavior was that logrotate (and crontab) use sh as their shell and not bash, so bash-specific globbing like ** cannot be relied upon.

To get around this, a POSIX-compatible command is needed:


  prerotate
    find /srv/configs/ -name "webalizer.conf" -exec /usr/bin/webalizer -c {} \;
  endscript

This now executes just before my Apache log file gets rotated out, and Webalizer gets one last chance to catch up with its stats.
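For reference, prerotate is only valid inside a log definition block; here is a minimal sketch of how the complete stanza might look (the log path and rotation settings are illustrative, not taken from my actual setup):

```
/var/log/apache2/*.log {
    weekly
    rotate 52
    compress
    sharedscripts
    prerotate
        # let Webalizer process each site's entries before they are rotated away
        find /srv/configs/ -name "webalizer.conf" -exec /usr/bin/webalizer -c {} \;
    endscript
}
```

The sharedscripts directive makes prerotate run once for all matched logs instead of once per log file.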

Apr 14 2018

After my web server had been running for a few days, I wanted to add an additional domain. Easy enough, I had done it before – or so I thought. However, I got hit by an error as I tried to get a Let’s Encrypt certificate in place: The server experienced a TLS error during domain verification :: remote error: tls: handshake failure.

The most interesting thing was that I hadn’t changed anything on the web server. So I tried a few old commands (history is handy here) against Let’s Encrypt’s staging server. Although I was sure they had worked when I originally set up the server, I was suddenly presented with the same error for all of them. And nothing had changed!

After looking at the error message a bit more, I suddenly remembered that one thing did change – I had moved my domain records to CloudFlare. Certbot was trying to authenticate my web server but ended up contacting CloudFlare and fetching their TLS certificate for my web site instead. The solution was simple – just temporarily disable caching on CloudFlare and certbot will successfully issue the certificate.

While the solution was simple, it wasn’t ideal as it also meant I would need to repeat the same disabling every 90 days in order to renew the certificate. I wanted a solution that would allow automated renewal without any manual action. It came from slightly altering how certbot performs verification.

# certbot renew --cert-name example.com --webroot --webroot-path /var/www/html/ --post-hook "apachectl graceful"

The alternative approach above places the verification file into the .well-known directory under the web site’s root path and thus works nicely with CloudFlare’s caching.
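Once webroot verification works, the renewal itself can be left to cron; a hypothetical /etc/cron.d entry might look like this (the schedule is illustrative – certbot itself skips certificates that are not yet due for renewal):

```
# attempt renewal daily at 03:15; only certificates nearing expiry are actually renewed
15 3 * * * root certbot renew --webroot --webroot-path /var/www/html/ --post-hook "apachectl graceful"
```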

Apr 09 2018

While most people understand the structure behind IPv4 and even IPv6 addresses, a MAC address is usually just assumed to be something unique enough not to cause routing issues. But did you ever wonder whether there is any non-random meaning behind those 48 bits usually expressed as 12 hexadecimal digits?

The first bit that gets transmitted determines whether a given MAC is for an individual device (0) or for a group (1). Most of the time we care only about the individual type, with group support mostly used for multicast, albeit historically a few more usages were known.

The next transmitted bit controls whether the MAC is intended to be globally unique (0) or unique only at the local level (1). If properly manufactured, all physical network devices will have that bit set to 0 and you can count on it being unique. That said, I have once personally experienced a duplicate MAC on a cheap (ISA) network card. However, that is an unlikely scenario and hopefully one will never need to troubleshoot it these days.

While one would think locally administered addresses would be an awesome match for virtualization platforms, most don’t take advantage of them and instead have their addresses (incorrectly) allocated from the globally unique pool. One major exception is KVM, which by default does take care to set the bit correctly. Realistically there is no difference in behavior either way, but I find it nice that someone is actually paying attention.

The further 22 bits are an identifier for the organization. Together with the two aforementioned bits this makes a total of 24 bits, also known as three bytes (or octets if you are a stickler). These identifiers are assigned by the IEEE and it is easy to check which entity assigned your MAC address. If you despise manual searching, you can also check one of the many OUI lookup web sites.

And finally, the last 24 bits (the other 3 bytes) are assigned either sequentially or randomly – it doesn’t really matter – by the manufacturer within their OUI number. The only real requirement is to keep them unique, so some manufacturers might have multiple assignments (e.g. almost 500 for Intel Corporation) while others might have just one.

While this covers all 48 bits, there is one piece of the puzzle still missing – the order of the bits. Notice how I said that the Individual/Group bit is the first one transmitted. Well, it’s not the first one written. Bits on the network get transmitted in least-significant-bit-first order.

If we take an imaginary MAC address 9C-DA-3E-E2-34-EB, we can express it in binary as 10011100 11011010 00111110 11100010 00110100 11101011. When looking at the binary form, the Individual/Group bit of the first octet is actually at position 8, while at position 7 you can see the Unique/Local bit.
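The bit layout described above can be sketched in a few lines of Python (parse_mac is an illustrative helper, not anything standard):

```python
def parse_mac(mac):
    """Split a MAC address written as AA-BB-CC-DD-EE-FF into its structural fields."""
    octets = [int(part, 16) for part in mac.split("-")]
    first = octets[0]
    return {
        # I/G bit - lowest bit of the first octet, transmitted first on the wire:
        # 0 = individual device, 1 = group (multicast)
        "group": bool(first & 0x01),
        # U/L bit - next lowest bit: 0 = globally unique, 1 = locally administered
        "local": bool(first & 0x02),
        # the first three octets carry the organization identifier
        "oui": "".join(f"{o:02X}" for o in octets[:3]),
    }

print(parse_mac("9C-DA-3E-E2-34-EB"))
```

For the example address above, both low bits of 0x9C (10011100) are zero, so it reads as an individual, globally unique address with OUI 9CDA3E.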

Those playing with blade servers can now also understand why their network adapters seem to have MAC addresses spaced by 4 – with the lowest two bits fixed at 0, that spacing is forced by math and not by some evil plan to waste the already small MAC address space.