Aug 23 2017

Before going on vacation I finished setting up my new ZFS backup machine, initialized the first replication, and happily went off to see the big hole.

When I remotely connected to my main machine a few days later, I found that my sync command had failed before finishing. I also couldn’t connect to my backup server. Well, that was unfortunate, but I’d had enough foresight to connect it via a smart plug, so I did the power-off/power-on dance. My system booted and I restarted replication. I checked on it a few days later, only to find it stuck again. Rinse, repeat. And the next day too. And the one after that…

Why? I have no idea, as I was connected only remotely and I literally came home on the last day I could still return it to Amazon. Since I had already raised a case with Supermicro regarding a video card error (5 beeps) that seemed hardware related, my suspicions pointed squarely at a motherboard issue. I knew the memory was fine, as I had tested it thoroughly in another machine, and the power supply is happily working even now.

For my peace of mind I needed something that would allow me not only to reboot the machine but also to access its screen and keyboard directly, without any OS involvement. Such out-of-band management goes by different names and comes in slightly different flavors: KVM, iLO, AMT, or IPMI.
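As a rough illustration of what such out-of-band management buys you, here is what remote control over IPMI can look like with ipmitool; the BMC address and the default ADMIN credentials are placeholders for whatever your board is actually configured with:

# ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status
# ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power cycle
# ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sol activate

The first command checks the power state, the second forces a power cycle, and sol activate opens a serial-over-LAN console that keeps working even when the OS is hung.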

So I decided to upgrade to the more manageable Supermicro A1SRi-2558F. With its C2558 processor (4 cores) and quad LAN it was definitely overkill for my purpose, but it was the cheapest IPMI-capable board I could find at $225 (compared to $250 for the X10SBA-L). Unfortunately for my budget, its ECC requirement meant adding another $35 for ECC RAM. And of course, the different layout made my 6″ right-angle SATA cables useless, so now they decorate my drawer.

The board itself is packed with features: a total of six USB ports (four of them USB 3.0), one of which is even soldered onto the motherboard for internal USB needs. Having four gigabit ports is probably useless, as the Atom is unlikely to drive them all at full speed, but I guess it does allow for a more relaxed network configuration. Moreover, two SATA3 and four SATA2 ports just scream NAS. And the rear bracket on my 1U case fits the rear I/O perfectly. Frankly, the only thing missing is HDMI, although IPMI greatly reduces the chance of ever needing it.

The total difference in system cost was $100 and it gave me a rock-solid experience (it hasn’t crashed a single time in more than a month). Here is the updated shopping list:

Supermicro SuperChassis 504-203B $100
Supermicro A1SRI-2558F $225
Kingston ValueRAM 4GB 1600MHz DDR3L ECC 2x $45
SATA cable, 8″, round (2x) $7
WD Red 4TB 2x $137
Total $696
Aug 18 2017

I have already written about getting Private Internet Access running on Linux Mint back in 2016. The main reason to revisit it is that with Linux Mint 18, not all DNS changes are properly propagated.

As the OpenVPN client is installed by default these days, we only need to download PIA’s OpenVPN configuration files. The more careful will notice these files are slightly different from the recommended defaults: they use the VPN server’s IP instead of its DNS name. While this might cause issues in the long term if that IP ever changes, it helps a lot with the firewall setup, as we won’t need to poke a hole for DNS over our eth0 adapter.
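If you prefer staying in the terminal, fetching and unpacking the archive might look something like this; the exact URL is an assumption, so grab the IP-based configuration bundle linked from PIA’s client support page:

# wget https://www.privateinternetaccess.com/openvpn/openvpn-ip.zip
# unzip openvpn-ip.zip -d pia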

From the downloaded archive select the .ovpn file for the desired destination (going with the one closest to you usually gives the best results) and also grab both the .crt and .pem files. Copy them all to your desktop and we’ll use them later for the setup. Yes, you can use any other directory too – this is just the one I prefer.

With this done we can go on to configuring the VPN from a Terminal window (replacing username and password with actual values):

# sudo mv ~/Desktop/*.crt /etc/openvpn/
# sudo mv ~/Desktop/*.pem /etc/openvpn/
# sudo mv ~/Desktop/*.ovpn /etc/openvpn/client.conf

# sudo sed -i "s*ca *ca /etc/openvpn/*" /etc/openvpn/client.conf
# sudo sed -i "s*crl-verify *crl-verify /etc/openvpn/*" /etc/openvpn/client.conf

# sudo echo "auth-user-pass /etc/openvpn/client.login" >> /etc/openvpn/client.conf
# sudo echo "mssfix 1400" >> /etc/openvpn/client.conf
# sudo echo "dhcp-option DNS 209.222.18.218" >> /etc/openvpn/client.conf
# sudo echo "dhcp-option DNS 209.222.18.222" >> /etc/openvpn/client.conf
# sudo echo "script-security 2" >> /etc/openvpn/client.conf
# sudo echo "up /etc/openvpn/update-resolv-conf" >> /etc/openvpn/client.conf
# sudo echo "down /etc/openvpn/update-resolv-conf" >> /etc/openvpn/client.conf

#  echo 'username' | sudo tee -a /etc/openvpn/client.login
#  echo 'password' | sudo tee -a /etc/openvpn/client.login

# sudo chmod 500 /etc/openvpn/client.login

Now we can test our VPN connection:

# sudo openvpn --config /etc/openvpn/client.conf

Assuming that this last step ended with Initialization Sequence Completed, we just need to verify whether this connection is actually being used, and I’ve found whatismyipaddress.com quite helpful here. Just check whether the IP detected there is different than the IP you usually get without the VPN.
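If you would rather verify from the terminal, something along these lines works too (assuming curl is installed); run it with and without the VPN and compare the two addresses:

# curl https://ifconfig.me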

Stop the test connection using Ctrl+C so we can configure automatic startup and test it.

# echo "AUTOSTART=all" | sudo tee -a /etc/default/openvpn
# sudo reboot

Once the computer has booted and you are satisfied with the VPN configuration, you can think about the firewall and disabling the default interface when the VPN is not active. This means allowing traffic only on the tun0 interface (VPN), with the sole exception of UDP port 1198 toward the VPN server on the default interface.

# sudo ufw reset
# sudo ufw default deny incoming
# sudo ufw default deny outgoing
# sudo ufw allow out on tun0
# sudo ufw allow out on `route | grep '^default' | grep -v "tun0$" | grep -o '[^ ]*$'` proto udp to `cat /etc/openvpn/client.conf | grep "^remote " | grep -o ' [^ ]* '` port 1198
# sudo ufw enable

Assuming all went well, VPN should be happily running.
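As a final sanity check, a couple of commands confirm that the tunnel is up and that the firewall rules took hold:

# ip addr show tun0
# sudo ufw status verbose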

Aug 14 2017

Behind Visual Studio’s slight version bump from 15.2 to 15.3, we have a major update.

First of all, .NET Core 2.0 is finally here, accompanied by .NET Standard 2.0. Both have greatly increased API coverage, and the hope is that they will help drive wider acceptance of the whole new (open-source) ecosystem.

In addition there is C# 7.1, the first time Microsoft has updated the language within a minor Visual Studio release. True, there are not many changes to the language itself (although async Main was long awaited), but it signals a new direction of decoupling Visual Studio, C#, and .NET Standard releases.

I hope .NET Standard 2.0 will do what .NET 2.0 did back in 2005 and unite a fragmented developer base around a new common denominator.

More details at Channel 9.

Aug 12 2017

The great thing about ZFS is that even with a single disk you get some benefits – data integrity being the most important. And all ZFS commands work perfectly well, for example status:

# zpool status
  pool: Data.Tertiary
 state: ONLINE
config:
        NAME                   STATE     READ WRITE CKSUM
        Data.Tertiary          ONLINE       0     0     0
          diskid/DISK-XXX.eli  ONLINE       0     0     0

However, what if one disk is not sufficient any more? It is clear that zpool add can be used to create a striped pool for higher speeds. And it is clear we can add another device to an existing mirror to make it a three-way mirror. But what if we want to convert a solo disk to a mirror configuration?
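For contrast, striping the new disk in (more space and speed, but no redundancy) would be a plain zpool add; the second device name here is just a placeholder:

# zpool add Data.Tertiary diskid/DISK-YYY.eli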

Well, in that case we can get creative with the attach command, giving it both disks as arguments:

# zpool attach Data.Tertiary diskid/DISK-XXX.eli diskid/DISK-YYY.eli

After a few seconds, our mirror is created with all our data intact:

# zpool status
  pool: Data.Tertiary
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
config:
        NAME                     STATE     READ WRITE CKSUM
        Data.Tertiary            ONLINE       0     0     0
          mirror-0               ONLINE       0     0     0
            diskid/DISK-XXX.eli  ONLINE       0     0     0
            diskid/DISK-YYY.eli  ONLINE       0     0     0  (resilvering)

PS: Yes, I use encrypted disks from /dev/diskid/, as I did in previous ZFS examples. If you want plain devices, just use ada0 and its companions instead.
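For reference, a rough sketch of how such an encrypted device can be prepared with FreeBSD’s geli; the disk id is a placeholder and the parameters should match whatever your earlier setup used:

# geli init -e AES-XTS -l 256 -s 4096 /dev/diskid/DISK-YYY
# geli attach /dev/diskid/DISK-YYY

Once attached, the encrypted provider appears as /dev/diskid/DISK-YYY.eli and can be handed to zpool attach exactly as above.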