How to Choose Barcode Symbology

For one project I needed to select a proper barcode symbology (a way of encoding data). The requirements were clear: variable length (which excludes the EAN/UPC family), some form of check-sum, and it needed to work with standard (i.e. cheap) equipment. That left me with a few candidates which I will try to describe.

2 of 5

This symbology is well supported since it is quite old (circa 1960) and widely used in industry. It is a numeric-only code with optional support for a check-sum (modulo 10) digit, but that digit needs to be encoded and decoded by software (this may be a problem if some devices are not under your control). Another problem is the width of the code: since this one encodes data only in the bars, a lot of space is wasted. Its low density could be a problem if you are “space challenged”.
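The optional check digit is typically computed with the same modulo-10 scheme that EAN/UPC uses (alternating weights of 3 and 1, starting from the rightmost digit). A minimal sketch, in Python for illustration:

```python
def mod10_check_digit(digits: str) -> int:
    """EAN/UPC-style modulo-10 check digit: weights 3 and 1 are
    applied alternately, starting with 3 on the rightmost digit."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

# e.g. the 11 data digits of a UPC-A code yield its final check digit
print(mod10_check_digit("01234567890"))  # → 5
```

Remember that with 2 of 5 this digit is a convention between your encoder and decoder, not something readers enforce for you.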

To overcome the excessive width of standard 2 of 5, somebody thought of encoding data in both bars and spaces. This variant, known as Interleaved 2 of 5, is effectively twice as dense as the standard one (or twice as short for the same amount of data), but it supports only an even number of digits. Everything else said for the standard version holds for this one too.
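The even-digit restriction comes from the interleaving itself: digits are encoded in pairs, one in the bars and one in the spaces. A common convention (an assumption here, not something every system follows) is to pad odd-length data with a leading zero:

```python
def itf_pad(digits: str) -> str:
    """Interleaved 2 of 5 encodes digit pairs (one digit in bars, one
    in spaces), so the message length must be even; pad odd-length
    data with a leading zero, a widespread but not universal choice."""
    return digits if len(digits) % 2 == 0 else "0" + digits

print(itf_pad("123"))   # → "0123"
print(itf_pad("1234"))  # → "1234"
```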

Codabar (NW-7)

Illustration

Used mostly within blood banks, this symbology allows for numbers plus a limited set of symbols (- $ : / . +). Since it is self-checking, no additional check-sum is needed, but one can always add a software one. The code can start and end in four different ways (usually called A, B, C and D), so there is also a possibility of differentiating codes on that basis (but be aware of lock-in, since no other symbology has that option). Since characters are separated with a space, the code is not among the shortest.

Code 3 of 9 (Code 39)

This is an alphanumeric code and enables encoding numbers, letters (uppercase only) and some symbols (space - . $ / + % *). It is self-checking, so a check-sum is not necessary, but there is a defined way of adding one if more security is needed. There is also a possibility of concatenating multiple barcodes together, but it is rarely used (just don’t start your code with a space). There is also an extended variant that can encode the whole ASCII (0-127) range.
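The defined check-sum for Code 39 is a modulo-43 check character: sum each character’s position in the 43-character alphabet and index back into it. A sketch of the idea, in Python for illustration:

```python
# Code 39 alphabet in its standard value order (0-42)
CODE39 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def code39_check_char(data: str) -> str:
    """Modulo-43 check character: sum of character values, mod 43,
    mapped back into the same alphabet."""
    total = sum(CODE39.index(c) for c in data)
    return CODE39[total % 43]

print(code39_check_char("CODE39"))  # → "W"
```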

Code 128

Illustration

This symbology is a three-in-one deal. The three different code sets not only allow full ASCII (0-127) to be encoded, but there is also a special double-density mode which encodes numeric-only data in half the width. A check-sum (modulo 103) is a mandatory part of the encoding, so no additional support is needed in software. Since symbols are also self-checking, this gives very high confidence in reading. There were small problems with reading Code 128 on old barcode readers, but everything recent on the market supports it. Since there are three different ways of encoding data (and one can switch between them within a single barcode), writing an optimal encoder is not an easy task.
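The mandatory check value is a weighted sum modulo 103: the start symbol’s value plus each symbol’s value multiplied by its position. A sketch for the simplest case, assuming the data is encoded entirely in Code Set B (where a printable ASCII character’s value is its code point minus 32, and Start B itself has value 104):

```python
def code128b_checksum(data: str) -> int:
    """Modulo-103 check value for data encoded entirely in Code Set B.
    Code B symbol value for printable ASCII is ord(c) - 32; the
    Start B symbol has value 104 and is weighted with position 0."""
    START_B = 104
    total = START_B
    for pos, c in enumerate(data, start=1):
        total += pos * (ord(c) - 32)
    return total % 103

print(code128b_checksum("A"))  # → 34
```

A real encoder also has to decide when switching to the double-density Code Set C is worth it, which is exactly what makes optimal encoding non-trivial.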

Conclusion

In the end, I selected Code 128. Not only does it give the highest level of security, but it also produces the shortest code (with numeric data at double density). The slightly complex encoding was just another challenge that needed overcoming. C# source code is available.

Useful pages

Here are a few pages where you can find more information - enough to write your own encoder.

Bug With CenterParent

One may think that centering a window is an easy task: just set StartPosition to CenterParent and everything should work. This is true in most cases; however, there is one bug that makes things more annoying than they should be.

Reproducing bug

The first step should always be to reproduce the buggy behaviour. In this case it is fairly easy. Just make a new application with two forms: one parent form that we will leave at default settings, and one child form for which we will change StartPosition to CenterParent. On the main form create two buttons. The first button should show a modal window (ShowDialog) and the second should create just an owned window (Show), like in this example.

When you click the second button, you will notice that the form isn’t centered at all. Every window created just goes to the next default position.

Although this is definitely a bug, Microsoft decided not to fix it. The explanation is rather strange: “a fix here would be a breaking change to the behavior of WinForms 1, 1.1 and 2.” I am not disputing that fixing it would be a change, but how many people would it really affect in a negative manner? I personally cannot imagine a single person setting a child form to CenterParent and then being angry because the form is centered.

Workaround

Like in the “good old days”, we need to center it manually. Do not forget to set StartPosition to Manual, and then just use some good old mathematics to do the centering.
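The mathematics is simply offsetting the child by half the size difference in each direction. A language-agnostic sketch (shown in Python for illustration; in WinForms you would assign the result to the child form’s Location before showing it):

```python
def center_in_parent(parent_left, parent_top, parent_width, parent_height,
                     child_width, child_height):
    """Return the (left, top) position that centers a child window
    over its parent's bounds."""
    left = parent_left + (parent_width - child_width) // 2
    top = parent_top + (parent_height - child_height) // 2
    return left, top

# parent at (100, 100), 800x600; child 400x300
print(center_in_parent(100, 100, 800, 600, 400, 300))  # → (300, 250)
```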

Resolution

The real solution would be to fix it in the framework, but since the bug report is already rejected, I wouldn’t hold my breath. An easier solution (.NET Framework 3.5) is to create an extension method and solve it there. It is not a resolution as such, but it makes the whole thing look nice when you need it.

Unsigned Drivers in 64-Bits

Electronics is a hobby of mine. Nothing fancy, just some PIC programming. Of course, to program them one needs a PIC programmer. If you combine that with me being a cheap bastard and wanting a USB connection, you get MCUmall’s USB PIC programmer, which is a nice little device for a very low price. Internally it is a serial-port programmer, and it interfaces with USB using the very popular Prolific PL-2303 chip. I even found a 64-bit driver for it. Sure, it was a 64-bit XP driver, but I didn’t see any problem with that; the driver model for serial ports is the same in both XP and Vista.

64-bit paranoia

If you cannot find a Vista driver for your serial device, installing the XP one works like a charm. That is, if you are using 32-bit Windows Vista. If you are using the 64-bit version, you may have a problem. Microsoft decided to require every driver on the 64-bit version to be signed in order to be loaded into kernel space. I personally think that driver signing is not a bad idea - warn the user that a driver is not signed and let him decide whether he really wants it installed. On the 64-bit version they took it one step further. Windows Vista (x64) will warn the user, it will even let the user decide that it is OK to install the driver, but after installation it will just ignore the user’s request and prevent the driver from loading. After you deduce that the problem is not the driver itself but its signature (or lack of it), you may try to find a way around it on the Internet.

What does not work?

The first thing that looked promising to me was setting the DDISABLE_INTEGRITY_CHECKS (no, the double D is not an error) option for the loader. The next time Windows boots up, it just disables signature checking and thus loads your driver. This was a great solution, but it hasn’t worked for quite a while now. You may try to re-activate it by uninstalling some hot-fixes, which may have been practical when there was only one to remove, but there are quite a few of them that break this option now - including SP1. Removing all of them and running a half-patched system (some other hot-fixes depend on them) is just asking for trouble. Just remember how nice Vista was before SP1 and you will quickly abandon this option.

Here I should also mention the nointegritychecks option. The last time that one worked was while Windows Vista was still in beta.

Illustration

What does work (but not in a nice way)

One can always test-sign his drivers and turn on the testsigning boot option (bcdedit /set testsigning on). After a reboot, your driver just works. The only thing preventing happiness is “Test mode” written in all four corners of your desktop. You could remove it by changing resource files, but I would rather have “Test mode” in all four corners than make a change which would invalidate a file’s check-sum and possibly make some future update fail.

How does Microsoft think we should do it?

Illustration

On reboot, press F8 and there is an option to allow unsigned drivers. This works great except for one small detail - you need to do it on every reboot. Currently there is no way to force it on, but one can always hope for a brighter future. I personally use this mode since I don’t play with electronics too often and I rarely reboot my computer anyway.

Professional solution

The real solution would be to sign the drivers. Even if we ignore the possible legal troubles of this act and find a really nice guide on how to do it, there is still the problem that you cannot just use any certificate. Your certificate needs to chain to one of the few selected roots for which Microsoft issues a cross-certificate. My cheap Comodo certificate, which I normally use for getting “nice” UAC prompts (since it is recognized as a trusted root by Vista), is quite useless for this particular purpose.

One solution I had was annoying one MVP (he shall remain unnamed since I am not quite sure that what follows is legal) and getting him to download the huge Windows Driver Kit and sign everything. If you are wondering why I thought I should bother an MVP - they get a discounted GlobalSign certificate through their MVP gig. Although both of us consider ourselves smart guys quite able to follow a few steps, we didn’t have any success. The driver was signed and everything looked like it should, except for the fact that Vista still refused to load it. We will probably try this one more time - both of us really hate unsolved mysteries - but for now I must acknowledge defeat.

[2010-02-15: I just noticed - drivers for Windows 7 are finally available.]

[2010-07-12: Although drivers were available they didn’t work with my board. However, I found drivers that do work.]

X86, X64, IA64, AMD64, EM64T...

x86

Back in the “good old times”, Intel named its processor models 8086, 80286, 80386, 80486, 80586 (OK, this last one didn’t really exist - the Pentium was introduced instead). Since they were all pretty much the same architecture, at one point somebody just started saying x86 processors. To make things more interesting, although x86 could refer to the 80286, which is a 16-bit processor, these days we use it to refer to the 32-bit architecture only. Intel itself referred to it as IA-32, but that name never really sounded as good.

IA64

Intel decided to make a new 64-bit processor. Since IA-32 was heavy with compatibility baggage going as far back as the 8086, they made a clean cut. Everything on this processor was bigger, better and incompatible with the old IA-32 architecture. Since the server market needed 64-bit, some advancements were made there, but market penetration wasn’t as good as Intel hoped. The problem was that native applications came in small numbers, while compatibility with the old IA-32 (or x86) instruction set was really slow. The architecture still lives on in the Itanium processor.

x64

AMD noticed the problem and made its own version of a 64-bit processor. They just added some new 64-bit instructions to the already existing x86. The solution was not as clean as IA-64, but old applications worked at the same speed (no emulation was needed) and new applications could address a 64-bit space. That solution is now known as x86-64.

It took a while for Intel to see that its Itanium was going nowhere near consumer machines. When they finally took notice, an unbelievable thing happened - Intel adopted the AMD64 instruction set. This made the two architectures the same in the sense of programming support. One doesn’t need to care whether he writes for Intel or AMD (not really true for early versions of Pentium since they lacked some instructions). The early name for Intel’s version was EM64T, just in case you were interested.

64-Bit - How Hard Can It Be?

If you do programming in the .NET world, the answer is clear: it is not hard at all. As long as you keep your hands out of the cookie jar (also known as Win32), moving to 64-bit is just none to a few clicks away.

Of course, some prerequisites are needed.

You need to have 64-bit operating system

This is pretty obvious - a 32-bit operating system on 64-bit hardware will work in 32-bit mode. No big surprise here.

You need to work on .NET framework 2.0 and above

.NET Framework 1.0 and 1.1 exist in 32-bit versions only. They will run on 64-bit platforms without any problem, but they will do so through the compatibility layer called WOW64, and thus get no support for the 64-bit address space - everything above 2 GB stays unavailable.
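A process running under WOW64 still sees 32-bit pointers, so you can check a process’s actual bitness at runtime from its pointer size. A Python sketch of the idea (in .NET, IntPtr.Size plays the same role):

```python
import struct

def process_bitness() -> int:
    """Pointer size of the *current process*, in bits.
    A 32-bit process reports 32 even when the OS is 64-bit;
    only a true 64-bit process reports 64."""
    return struct.calcsize("P") * 8

print(process_bitness())  # 32 or 64, depending on the interpreter
```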

You need to tell your favorite .NET compiler that you want that

Illustration

By default, .NET will compile code for a target platform called “Any CPU”. Although one could think that this would produce code that is the common denominator of all - 32-bit - it actually marks the executable as both 32-bit and 64-bit. This is possible since the code is compiled to a processor-agnostic intermediate language executed by the CLR. On 32-bit systems it will run as 32-bit code; on 64-bit systems it will run as 64-bit code. In case you need insane amounts of memory (for me, insane is above 2 GB) all the time, you can select x64 or Itanium as your target and make your code unable to run in 32-bit mode at all.

If you use an installer, let it know about your bit-ness also.

Illustration

If you pack your code in an MSI installer, you have a problem. There is no way to tell it that your code is both 32-bit and 64-bit (Any CPU).

If you select x86 as your platform, it will install correctly on both 32-bit and 64-bit Windows, but on 64-bit it will install into the Program Files (x86) folder. That folder is reserved for legacy 32-bit code, and having your application there sends a clear signal to users that something is wrong, although it will run as a 64-bit application when you start it.

If you select x64 or Itanium as your target platform, you will end up with an installer that shows an error message and refuses to proceed if the system is 32-bit (or the other 64-bit one), even though the code would run just fine.

There are two solutions: either make a separate MSI package for every platform or switch to another installer. Neither of these is a nice one. :(