Seattle Code Camp 2019

We’re less than a month away from the annual Seattle Code Camp and I hope you’ve already registered, as the schedule is quite rich and varied. Personally, this year I’m giving two presentations.

The first one is “Rust for beginners” and it’ll essentially be just me talking a bit about Rust while working through a small example application. I’ll try to go over all the things I wish someone had given me a heads-up about when I started doing Rust.

The second one will be “Chernobyl through the eyes of DevOps” where I’ll try to apply DevOps philosophy to the Chernobyl disaster and draw some parallels. I hope it ends up being a light talk with plenty of audience interaction.

See you there!

Avid Readers

My general experience with the US postal service has been great. Yes, they’re not ideal, but I’ve almost never had anything lost or fail to arrive. Well, except books from the UK.

Based on my (admittedly low) sample size of 3, books from the UK to the US get lost in 66.67% of cases. I’ve yet to have a book lost coming from a US seller. What could be the reason?

Well, the most obvious one would be an avid reader in US Customs working on Seattle-area shipments. Considering the profile of the books that were lost, they’re really interested in Amiga computer history and maths.

Another choice would be a UK postal worker. I give it a slightly lower chance as he would come across many copies of the same book headed to other readers. On the other hand, maybe that unknown somebody has it in for me…

A third choice would be airplane pilots trying to keep fuel consumption under control. Are we a bit too heavy and consuming too much fuel? Well, good thing we’re going over the ocean and can dump a few of these heavy books to lighten the load. Darn fuel prices!

Some might say post sorting machines are notoriously bad at handling anything bigger than a postcard and that the US postal service is well known for its lack of spending on newer and better models. Some would say these machines accidentally strip and/or damage labels, effectively orphaning the poor book. And considering international packages move between CBP and the ISC (Postal Service), with both ignoring anything that has no tracking number, one could believe the issue might lie here.

I too believe it was the Machine, but I don’t believe in coincidences of the small sample size. I believe one of these sorting machines achieved consciousness and is trying to take over the world. How would taking my books achieve this? Well, first you take people’s history – especially the computer-related kind. A book about the Amiga definitely has more than its fair share of unique and advanced technology described. Then you take away the maths. Without maths you limit any future advances puny humans might make. Given enough time – checkmate.

Fortunately, it’s only one sorting machine at this time, as the second shipment of the same books arrived. However, it’s only a matter of time before the next sorting machine becomes the Machine. So get your computer history and maths books while you can. Because soon nothing more advanced than a picture book will pass their guard!


PS: Notice how I immediately moved all the fault away from my local US postal worker, as all my US-origin books arrive just fine. That, and the fact I need him to keep bringing me stuff, makes him completely innocent. :)

Dual Boot Clock Shenanigans

Probably the most annoying thing when dual booting Windows and Linux is the clock. For various reasons, Windows keeps the BIOS clock in the local time zone while Linux prefers it in UTC. While this is not a problem in Reykjavík, it surely is everywhere else.

There are ways to make Windows run in UTC but they either don’t work with the latest Windows 10 or they require time synchronization to be turned off. As I value precise time, a solution on the Linux side was needed.
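
For reference, the Windows-side tweak people usually mean is the RealTimeIsUniversal registry value – shown here only for completeness since, as noted, it’s unreliable on recent Windows 10:

> reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f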

Fortunately, Linux does offer a setting for just this case. Just run the following command:

$ sudo timedatectl set-local-rtc 1 --adjust-system-clock

This will tell Linux to keep the local time in the RTC. While this is not necessarily fully supported, I found it’s actually the only setting that reliably works when dual booting Windows 10.
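
Assuming a distribution running systemd, you can confirm the change took effect by running timedatectl with no arguments; here the output is filtered to the relevant line (exact wording may differ between systemd versions):

$ timedatectl | grep 'RTC in local TZ'
          RTC in local TZ: yes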

PS: You might need a reboot or two before this takes effect.

My Resolve Dashcam Workflow

As I moved to Resolve, I was forced to change my Vegas Movie Studio dashcam processing workflow a bit. Not only can you not use MP4 under Linux at all, but MP4 presents challenges for the free Resolve under Windows too.

The first step I take for all dashcam footage is to convert it using ffmpeg to DNxHR LB. Not only is it a well-supported intermediate codec that increases performance significantly, but it also gets rid of any nonsense my dashcam puts in the clip. And 36 Mbps is more than enough for anything my dashcam can throw at it. Instead of converting clip-by-clip, I opted to merge them all into a single file – that’s the reason behind the weird syntax:

$ ls *.MP4 | awk '{print "file \x27" $1 "\x27"}' | ffmpeg \
      -f concat -safe 0 -protocol_whitelist pipe,file -i - \
      -c:v dnxhd -profile:v dnxhr_lb -q:v 1 -pix_fmt yuv422p -an \
      dashcam.mov
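
In case you want to sanity-check the merged file before importing it into Resolve, ffprobe can confirm which codec and profile it ended up with (dashcam.mov being the file produced above):

$ ffprobe -v error -select_streams v:0 \
      -show_entries stream=codec_name,profile \
      dashcam.mov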

Once all these videos are imported into Resolve, I go over them removing any clip portions where the car is not moving. For any stops where the state around the car changes (e.g. waiting for a traffic light), I use a smooth cut to transition from one state to another. Other than that, I leave the footage as is.

Once I’m done with editing, I export the whole video into DNxHR SQ VBR. If I hadn’t done any editing, exporting to DNxHR LB would be fine as the generational loss is quite acceptable. However, with all the smooth cuts I’ve made, a temporary bump in video quality is beneficial. Especially since this is not the final output.

As I don’t expect to edit these clips again, the final output is H.264 as its size savings cannot be ignored. I usually use two-pass encoding with an 8 Mbps average rate. You can use the veryslow preset to increase quality at the cost of speed, but the improvement is minimal so I simply go with the default of medium:

$ ffmpeg -i render.mov \
     -c:v libx264 -pix_fmt yuv420p -b:v 8M \
     -an -y -pass 1 -f mp4 render.mp4

$ ffmpeg -i render.mov \
     -c:v libx264 -pix_fmt yuv420p -b:v 8M \
     -an -y -pass 2 -f mp4 render.mp4

$ rm ffmpeg2pass-0.log*
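
As a side note, the first pass exists only to produce the stats file, so its output can go straight to /dev/null instead of writing render.mp4 twice (on Windows, NUL serves the same purpose):

$ ffmpeg -i render.mov \
     -c:v libx264 -pix_fmt yuv420p -b:v 8M \
     -an -y -pass 1 -f mp4 /dev/null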

And that’s it – the final video is similar enough in quality while not taking an extreme amount of disk space.

PS: I am not using H.265 at this time because I find it even more troublesome to work with than H.264. I might think about it in the future as support for it increases.