Go… the adventure continues

So far, so good in my Go exploring. I'm still mainly focusing on my Go port of AzureCopy, and I've got to say it's been heaps of fun.

Firstly, looking back at my previous blog post to see what I’ve actually done against my planned outline:

  • Dev environment. Check…  definitely up and working fine. Important note for Visual Studio Code users (on Windows at least): make sure you follow the VERY useful instructions on StackOverflow if you want to get debugging (via Delve) working.
  • Basic solution. Check…   the structure is fairly different from the original AzureCopy but this isn’t anything to do with C# vs Go… it’s purely down to experience and knowing what works and what would work better. The newer structure I have with the Go version could easily be applied to any other language, but for now I’m not going to start changing my C# version.
  • Haven’t hit the local filesystem yet…  but have started with Azure. So far can list containers and read blobs (again all in the new structure). VERY happy with the results.


Ok, overall very happy with Go, but my complaints (which I think are just the usual ones I see from people new to Go) are:

  • No exceptions, just heaps of bloody err != nil checking (see the sketch after this list for the pattern I mean). Seems tedious… but I’m sure I’ll understand WHY they chose this (eventually)
  • Debugging with Delve isn’t that easy yet. I swear the debugger jumps about the place a little. So far I’m mostly relying on log files rather than real-time inspection of variables in a debugger.
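For anyone who hasn't written Go before, this is the error-handling pattern I'm talking about. A trivial made-up sketch (readConfig and config.json are just for illustration, nothing to do with AzureCopy):

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig shows the usual Go pattern: anything that can fail returns an
    // error, and the caller has to check it explicitly every single time.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config %s: %w", path, err)
        }
        return data, nil
    }

    func main() {
        data, err := readConfig("config.json")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("read %d bytes\n", len(data))
    }

Coming from C# exceptions it feels verbose, but at least every failure path is right there in front of you.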

Coding in Go really does give me a “retro rush” which makes me feel like I’m back in the late 90s coding C. I’m REALLY enjoying it. Yes, it can seem primitive (compared to C#), but at the same time it feels pure and simple enough to fit in my head. This is good! :)

The adventure continues….

Adventures in GO!

I’ve dabbled (ok ok, writing and rewriting “hello world” many times) in Go for a few years but have never really given it a serious Go (boom boom!). But after buying Go in Action and going through a number of great Pluralsight courses (particularly by Nigel Poulton and Mike Van Sickle) I’ve decided to give it another crack.

Instead of going through various tutorials I’ve decided to try porting (well, more likely rewriting from scratch) my AzureCopy project. The original AzureCopy is all C# running on the .NET Framework 4.*. Although I DO (well, did until recently) want to get it migrated to .NET Core, I thought this would be a good chance to learn Go PROPERLY.

I’m still trying to get my head around OO in a “kinda-is, kinda-isn’t, sorta, maybe” OO language like Go. Going back to structs (ahh glory days of C/C++), interfaces and having the magic of pointers back is really giving me a nostalgia kick.
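To give a flavour of what I mean, here's a completely made-up sketch of Go's struct + interface style (the CloudStorage / AzureHandler names are just for the example, not AzureCopy's actual types):

    package main

    import "fmt"

    // CloudStorage is the sort of interface a storage handler might satisfy.
    type CloudStorage interface {
        ListContainers() ([]string, error)
    }

    // AzureHandler is a plain struct; attaching methods to a pointer receiver
    // is about as close to "classes" as Go gets.
    type AzureHandler struct {
        AccountName string
    }

    func (h *AzureHandler) ListContainers() ([]string, error) {
        // real code would call the Azure SDK here
        return []string{"temp", "backups"}, nil
    }

    func main() {
        var storage CloudStorage = &AzureHandler{AccountName: "myacct"}
        containers, err := storage.ListContainers()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(containers)
    }

No inheritance, no explicit "implements": if the struct has the methods, it satisfies the interface. Takes a little getting used to after C#.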

The rough outline for this AzureCopy rewrite is basically as follows:

  • Get my dev environment sorted out (currently VSCode)
  • Basic solution structure sorted, rough architecture
  • Be able to copy to/from the local filesystem to Azure Blob Storage
  • List blobs/containers in Azure
  • Add S3
  • Add DropBox
  • Add OneDrive

I really don’t think I’ll bother with SharePoint this time around; it was a bitch to maintain in the existing version.

I’m unsure what the Go support is like with those cloud providers etc. I know the Azure one seems mostly there (well, for the stuff I need) but I get the distinct impression it’s the poor cousin to .NET, Java, Python etc. I’ve yet to investigate S3’s Go offerings. Hopefully, if these libs aren’t in great shape, I might get a chance to finally get my name on a contributors list somewhere. :)

I’m sure my Go will suck… but I’m hoping it will get better. The new version of AzureCopy is of course on GitHub.

Azure Table Storage, a revisit

It’s been a while since I used Azure Table Storage (ATS) in anger. Well, kind of. I use it most days for various projects but it’s been about 18 months since I tried performing any bulk loads/querying/benchmarking etc.

The reason for my renewed interest is that a colleague mentioned that as they added more non-indexed parameters to their ATS query, it slowed down in a major fashion. This didn’t tally with my previous experience, so I wondered. Yes, if you don’t query ATS via the Partition Key and Row Key (ie the indexed fields) then it gets a lot slower, but everything is relative, right? It might be a lot slower, but it could still be “quick enough” for your purpose. So I’ve coded some very, very basic tests and tinkered.

My dummy data consisted of 2 million rows, split equally over 5 partitions. Each entity consisted of 10 fields (including partition key and row key). The fields (excluding row and partition) were 2 x strings, 2 x ints, 2 x doubles and 2 x dates. Currently my tests only focus on the strings and 1 of the ints (but I’ll be performing more tests in the near future).

The string fields were populated with a random word selected from a pool of 104. The ints and doubles were random values between 0 and 1 million. The datetimes were random dates between 1 Jan 1970 and 1 Jan 2016. To repeat, the doubles and dates have not yet been included in any tests.

I tested a few different types of queries, starting simple and getting slightly more complex with each change. Firstly there is the query that takes the partition key and a value for field 1 (a string). Interestingly, the results were:

  • Partition key and field1: 4706 ms (avg)
  • Partition key, field1 and field2: 5368 ms (avg)
  • Partition key, field1, field2 and field3: 7232 ms (avg)
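For concreteness, these queries are really just OData $filter expressions of increasing length along the following lines (a rough sketch only; the actual test harness isn't shown here, and the field names and values are assumptions):

    package main

    import "fmt"

    func main() {
        // Only PartitionKey (and RowKey) are indexed; each query tacks one more
        // non-indexed field onto the filter. field1/field2 are strings, field3 an int.
        filters := []string{
            "PartitionKey eq 'partition1' and field1 eq 'apple'",
            "PartitionKey eq 'partition1' and field1 eq 'apple' and field2 eq 'pear'",
            "PartitionKey eq 'partition1' and field1 eq 'apple' and field2 eq 'pear' and field3 eq 42",
        }
        for _, f := range filters {
            fmt.Println("$filter=" + f)
        }
    }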

So, despite only one field (the partition key) being indexed, adding the other fields into the search didn’t make Azure Table Storage completely unusable or slow. Yes, it *almost* doubled the query time, but it wasn’t the huge difference that my colleague had experienced.

One thing to remember: although I created 2 million rows, these were split over 5 partition keys, so in effect the above queries were *really* only going over 400k rows.

More tests to follow…..

AzureCopy UI version

Seeing the popularity of AzureCopy increasing (very satisfying, I have to admit) I’ve been thinking about writing a few desktop application versions to assist people. These would be “native” applications (not native as in bare metal, but purpose-built per OS) and would require a fair bit of design and effort.

The question I keep asking myself is, “Is it worth it?”. Migration from one cloud storage provider to another is a rare task. Even then it would usually (I hope) be performed by a fairly technical person, so command line tools (such as AzureCopy) aren’t daunting. So is a UI-based application something anyone actually needs or wants? Chances are I’ll write them regardless (I’ve already started on the Windows version) but I’m honestly wondering if I’m writing them without an audience.


AzureCopy and Virtual Directories

AzureCopy has finally had some love and has been updated as per some requests that have come in. Firstly, virtual directories in S3 and Azure Blob Storage are now handled in a consistent manner.

Remember, neither S3 nor ABS really has directories. They just use blob names with the ‘/’ character in them, and various tools out on the interweb use that to simulate directories.

Now, copying files between S3 and ABS has always been easy, but what if you want to utilise these virtual directories?

eg. I have a blob on S3 called “dir1/subdir1/file1” and I want to copy that to Azure (or elsewhere for that matter). But I want the destination on Azure to be my temp container and the resulting blob to just be called “subdir1/file1”.

In this example we’re pretending to copy a subdirectory and its file from S3 to Azure. Remember, there is no spoon directory.

Now we can perform the command:

azurecopy.exe -i https://s3.amazonaws.com/mybucket/dir1/ -o https://myacct.blob.core.windows.net/temp/

The result will be that in my Azure container (temp) I’ll have a blob called “subdir1/file1”.
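Under the hood this is really just prefix stripping on the blob name. A rough sketch of the idea (not AzureCopy's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The source URL pointed at the virtual directory "dir1/", so that prefix
        // is removed from each blob name before writing it to the destination.
        srcPrefix := "dir1/"
        blobName := "dir1/subdir1/file1"

        destName := strings.TrimPrefix(blobName, srcPrefix)
        fmt.Println(destName) // subdir1/file1
    }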

In addition, you can now copy these blobs with virtual directories to and from Dropbox but in this case it will make/read REAL directories.

AzureCopy is available via NuGet, as a command line executable and as source.

Dropbox and direct links

During some refactoring of AzureCopy I’ve decided to finally add Azure CopyBlob support for Dropbox. This means you can run a command locally to copy from Dropbox to Azure Blob Storage and none of the traffic actually goes through the machine where AzureCopy is running. Huge bandwidth/speed savings!

The catch is that it appears (I’ve NOT fully confirmed this yet) that Azure CopyBlob doesn’t like redirection URLs, which is what I was receiving from Dropbox. I was generating a “shared” URL for a particular Dropbox file, which in turn generates an HTTP 302 redirection and then gives me the real URL. Azure CopyBlob doesn’t play friendly with this. The trick is NOT to generate a “shared” URL but to generate a “media” URL. Quoting from the Dropbox API documentation: “Similar to /shares. The difference is that this bypasses the Dropbox webserver, used to provide a preview of the file, so that you can effectively stream the contents of your media.”
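If you want to see for yourself whether a URL you’re about to hand to CopyBlob issues a redirect, a quick check looks something like this (a sketch only; the Dropbox URL here is obviously made up):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Don't follow redirects automatically, so we can see the 302 itself.
        client := &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }

        resp, err := client.Get("https://www.dropbox.com/s/example/file1?dl=1") // made-up URL
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        defer resp.Body.Close()

        if resp.StatusCode == http.StatusFound {
            fmt.Println("redirects to:", resp.Header.Get("Location"))
        } else {
            fmt.Println("status:", resp.Status)
        }
    }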

Once I made that change, hey presto, no more redirects and Azure CopyBlob is now a happy little ummm “thing”.

The upshot is that I can now migrate a tonne of data from Dropbox to Azure without using up any of my own bandwidth.

woohoo :)

DocumentDB, Node.js, CoffeeScript and Hubot

For anyone that doesn’t already know, Hubot is GitHub’s ever-present “bot” that can be customized to respond to all sorts of commands on a number of different messaging platforms. From what I understand (I don’t work at GitHub, so I’m just going by what I’ve read) it is used for build/deploy to production (and all other environments), determining employee locations (distributed teams) and a million other things. Fortunately GitHub has made Hubot open source, and anyone can download it and integrate it into Skype, HipChat, Campfire, Slack etc. I’ve decided to have a crack at integrating it into my workplace, specifically against the Slack messaging system.

I utterly love it.
During a 24-hour “hackday”, I integrated it into Slack (see details) and grabbed a number of pre-existing scripts to get me started. Some obvious ones (for a dev team) are TeamCity integration, statistics and statuses of various 3rd party services that we use, and information retrieval from our own production system. This last one will be particularly useful for support, giving an easy way to retrieve information about a customer without having to write up new UIs for every change we make. *very* dev friendly :)

One thing I’ve been tinkering with is having Hubot communicate directly with the Azure DocumentDB service. Although I’ve only put the proverbial toe in the water, I see LOTS of potential here. Hubot can run anywhere (behind a corporate firewall, out on an Azure Website or anywhere in between). Having it access DocumentDB (which can be reached from anywhere with a net connection) means we do not need to modify production services/firewalls etc for Hubot to work. Hubot can then perform these queries and get the statistics/details with ease. This (to me) is a big win: I can provide a useful information retrieval system without having to modify our existing production platform.

Fortunately the DocumentDB team have provided a nice Node.js npm package (see here for some examples). This made things trivially easy. The only recommendation I’d make is for tools/services/hubots that are read-only: just use the read-only DocumentDB key, which is available on the Azure Portal. I honestly didn’t realise that read-only keys were available until I recently did some snooping about, and although I’m always confident in my code, having a read-only key gives me a safety net against production data.

Oh yes, CoffeeScript. I’m not a JavaScript person (I stay backend as much as possible, C# these days) and Hubot’s default language is CoffeeScript. So first I had to deal with JS and THEN deal with CoffeeScript. Yes, this last part is just my personal failing (kicking and screaming into the JS era).

An example of performing a query against DocumentDB from Node.js (in CoffeeScript) follows. First you need to get a database reference, then a collection reference (from the DB), then perform the real query you want.

DocumentClient = require("documentdb").DocumentClient

client = new DocumentClient(process.env.HUBOT_DOCUMENTDB_ENDPOINT,
  { masterKey: process.env.HUBOT_DOCUMENTDB_READONLY_KEY })

# Find the database by id and hand it to the callback.
GetDatabase = (client, databaseName, callback) ->
  dbQuery = { query: "SELECT * FROM root r WHERE r.id='#{databaseName}'" }
  client.queryDatabases(dbQuery).toArray (err, results) ->
    if !err && results.length > 0
      callback(results[0])

# Find the collection within the database and hand it to the callback.
GetCollection = (client, databaseLink, callback) ->
  collectionQuery = { query: "SELECT * FROM root r WHERE r.id='mycollection'" }
  client.queryCollections(databaseLink, collectionQuery).toArray (err, results) ->
    if !err && results.length > 0
      callback(results[0])

# Database reference -> collection reference -> the actual document query.
GetDatabase client, "mydatabase", (database) ->
  GetCollection client, database._self, (collection) ->
    client.queryDocuments(collection._self, "select * from docs d where d.id = 'testid'").toArray (err, res) ->
      if !err && res && res.length > 0
        console.log res[0]

Given CoffeeScript is white space sensitive and my blog editor doesn’t appear to allow me to format the code *exactly* how I need to, I’m hoping readers will be able to deduce where the white space is off.

End result is Hubot, Node.js and DocumentDB are really easy to integrate together. Thanks for a great service/library Microsoft!