Feb 10 2016
 

.NET Framework is full of small helper methods and I personally find this beneficial in general. They make code easier to read and most of the time they also make errors less likely. They also lend themselves beautifully to quick troubleshooting one-liners as a nice, temporary, and concise solution. There is almost no better example of that flexibility than the File helper methods.

If you need to read the content of a file, just say

using (var fileStream = File.OpenRead(filename)) {
    //do something
}

Similarly, if you need to write something, the equivalent code is

using (var fileStream = File.OpenWrite(filename)) {
    //do something
}

However, there is a trap in the last code chunk as it doesn't necessarily do what you might expect. Yes, the file is opened for writing, but the existing content is left untouched. To illustrate the issue, first save John to the file and then Mellisa. The file content will be, as expected, Mellisa. However, if you save John again, the content will be a somewhat unexpected Johnisa.
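To see it end to end, here is a minimal sketch (the Save helper and the filename variable are made up for this example; it assumes System.IO and System.Text are imported):

private static void Save(string filename, string text) {
    using (var fileStream = File.OpenWrite(filename)) {
        var bytes = Encoding.UTF8.GetBytes(text);
        fileStream.Write(bytes, 0, bytes.Length);
    }
}

Save(filename, "John");     //file content: John
Save(filename, "Mellisa");  //file content: Mellisa
Save(filename, "John");     //file content: Johnisa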

Once seen as an example, it is obvious the computer did exactly what we told it. It opened the file for writing and modified the content starting from the first byte. Nowhere did we tell it to discard the old content.

Proper code for this case would be slightly longer:

using (var fileStream = new FileStream(filename, FileMode.Create, FileAccess.Write)) {
    //do something
}

This ensures the file is truncated before the new content is written and thus avoids the problem.

The annoying thing about this helper is that, under normal circumstances, it will work most of the time, biting you only when you delete or shorten something. I believe there is a case to be made that it should have been designed around FileMode.Create instead of FileMode.OpenOrCreate as the more reasonable behavior. However, as it goes with most of these things, the decision has already been made and there is no going back.
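For reference, File.OpenWrite(filename) is documented as essentially equivalent to the following, which is exactly why the old content survives:

new FileStream(filename, FileMode.OpenOrCreate, FileAccess.Write, FileShare.None)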

Feb 04 2016
 

[Image: QR Authentication Example]

Two-factor authentication is a beautiful thing. You have a key, apply a bit of TOTP magic, and you get a unique code that changes with time. To use it, just run a mobile application of your choice (e.g. Google Authenticator) and scan the QR code.
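For the curious, the magic is rather small. Here is a rough sketch of the six-digit TOTP computation (per RFC 6238), assuming the secret has already been Base32-decoded into bytes and that System and System.Security.Cryptography are imported:

private static int GetTotpCode(byte[] key) {
    var counter = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 30;  //30-second steps
    var counterBytes = BitConverter.GetBytes(counter);
    if (BitConverter.IsLittleEndian) { Array.Reverse(counterBytes); }  //must be big-endian

    using (var hmac = new HMACSHA1(key)) {
        var hash = hmac.ComputeHash(counterBytes);
        var offset = hash[hash.Length - 1] & 0x0F;  //dynamic truncation
        var binary = ((hash[offset] & 0x7F) << 24) | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8) | hash[offset + 3];
        return binary % 1000000;  //keep six digits
    }
}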

If you have a bunch of pre-existing keys in textual format (e.g. when recovering after a phone reinstall), wouldn't it be really useful to generate a QR code based on them?

Fortunately, the key format is really well documented in the Google Authenticator repository. In its simplest form it is otpauth://totp/LABEL?secret=KEY. Simply swapping LABEL and KEY for the desired values should do the trick – e.g. otpauth://totp/Test?secret=HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ.
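If you would rather build these strings in code, a quick sketch follows; the label and key are the same ones as above, and Uri.EscapeDataString merely guards against spaces and other special characters in the label:

var label = "Test";
var key = "HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ";  //Base32-encoded secret
var uri = string.Format("otpauth://totp/{0}?secret={1}",
                        Uri.EscapeDataString(label), key);
Console.WriteLine(uri);  //otpauth://totp/Test?secret=HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ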

To generate a QR code scannable by a mobile phone application, any QR service supporting simple text encoding will do. I personally prefer goqr.me as they offer a lot of customization options and (supposedly) don't store QR data. The final QR code will be read perfectly well by any authenticator application out there and the key will be imported without any issue.

For advanced scenarios, there are quite a few more settings and tweaks available, but this simplest format probably covers 90% of needs.
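As an illustration, the same documentation also describes optional parameters such as issuer, digits, and period; the account name and issuer below are made up:

otpauth://totp/Example:alice@example.org?secret=HXDMVJECJJWSRB3HWIZR4IFUGFTMXBOZ&issuer=Example&digits=6&period=30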

Jan 27 2016
 

The same image can be saved in a multitude of ways. Whether it is a camera phone or an editing application, the usual goal is to save the image quickly, without caring about each and every byte. I mean, is it really important whether an image is 2.5 MB or 2.1 MB? Under most circumstances a bigger file is written more quickly, and a slightly bigger size is a perfectly acceptable compromise.

However, if you place the image on a website, this suddenly starts to matter. If your visitors are bandwidth-challenged, it makes the difference between a load time measured in seconds and one measured in tenths of a second. But once you start optimizing, you can spend way too much time dealing with it. If you are lazy like me and don't want to change your workflow too much, there is always the option to save unoptimized files now and optimize them later.

For optimizing images I tend to stick with two utilities: OptiPNG for PNG files and jpegoptim for JPEG files. Both of them do their optimizations in a completely lossless fashion. This might not bring you the best savings, especially for JPEG images, but it has one great advantage: if you run optimization over already optimized images, there will be no harm. This means you don't need to track which files are already optimized and which still need work. Just run the tools every once in a while and you're golden.
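For a single file, the invocations are as simple as it gets (the same options the script below uses; picture.png and picture.jpg stand in for your own files):

optipng -o7 picture.png
jpegoptim --strip-all picture.jpg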

I created the following script to go over each image and apply optimizations:

@ECHO OFF

SET   EXE_OPTIPNG="\Tools\optipng-0.7.5\optipng.exe"
SET EXE_JPEGOPTIM="\Tools\jpegoptim-1.4.3\jpegoptim.exe"

SET     DIRECTORY=.\pictures

ECHO = OPTIMIZE PNG =
FOR /F "delims=" %%F in ('DIR "%DIRECTORY%\*.png" /B /S /A-D') do (
    ECHO %%F
    DEL "%%F.tmp" 2> NUL
    %EXE_OPTIPNG% -o7 -silent -out "%%F.tmp" "%%F"
    MOVE /Y "%%F.tmp" "%%F" > NUL
    IF ERRORLEVEL 1 PAUSE && EXIT
)

ECHO.

ECHO = OPTIMIZE JPEG =
FOR /F "delims=" %%F in ('DIR "%DIRECTORY%\*.jpg" /B /S /A-D') do (
    ECHO %%F
    %EXE_JPEGOPTIM% --strip-all --quiet "%%F"
    IF ERRORLEVEL 1 PAUSE && EXIT
)

And yes, this will take ages. :)

Jan 21 2016
 

Both an advantage and a disadvantage of distributed source control is the repository containing the whole history. Upon the first clone, when all the data must be downloaded, this can turn into an exercise in futility if you are on a lousy connection. Especially when, as in my case, you are downloading a huge SVN-originating Mercurial repository multiple gigabytes in size. Every time the connection goes down, all the work has to be repeated.

The game got boring after a while, so I made the following script for incremental updates:

@ECHO OFF
REM delayed expansion is needed for variables set inside the IF block below
SETLOCAL ENABLEDELAYEDEXPANSION

SET SOURCE=https://example.org/BigRepo/
SET REPOSITORY=MyBigRepo

IF NOT EXIST "%REPOSITORY%" (
    hg --debug clone %SOURCE% "%REPOSITORY%" --rev 1
)

SET XXX=0
FOR /F %%i IN ('hg tip --cwd "%REPOSITORY%" --template {rev}') DO SET XXX=%%i

:NEXT
SET /A XXX=XXX+1

:REPEAT
ECHO.
ECHO === %XXX% === %DATE% %TIME% ===
ECHO.

hg pull --cwd "%REPOSITORY%" --debug --rev %XXX% --update
SET EXITCODE=%ERRORLEVEL%
ECHO.
IF %EXITCODE% GTR 0 (
    ECHO ======= PULL FAILED WITH CODE %EXITCODE% =======
    REM clean up the interrupted transaction, then retry the same revision
    hg recover --cwd "%REPOSITORY%" --debug
    SET EXITCODE=!ERRORLEVEL!
    ECHO.
    IF !EXITCODE! GTR 0 (
        ECHO ======= RECOVER FAILED WITH CODE !EXITCODE! =======
    ) else (
        ECHO === RECOVER SUCCEEDED ===
    )
    GOTO REPEAT
) else (
    ECHO.
    ECHO === SUCCESS ===
)

GOTO NEXT

The script first clones just the first revision and then incrementally asks for revisions one at a time. If something goes wrong, recovery is started, followed by yet another download attempt for the same revision. Simple and effective.