CLI tools FTW

To make sure my Go skills are kept up to date, I’ve decided to attempt (we’ll see how long this lasts) writing a useful CLI tool every few days to 1) help me work… and 2) keep me thinking about Go.

None of these tools will be huge, interesting or unique… but they’ll keep the brain active! And I really do think they’ll be useful (to me) in my day-to-day work.

So far I’ve got two. One is a basic tail program (built for Windows but it will work anywhere)… since, well, you know… Windows doesn’t have a tail. 🙂 The second is similar to tail but it parses and displays interesting parts of EventStore statistics logs. Basic and short… but it got me thinking more about file IO in Go. 🙂


Practical coding vs “the fancy stuff”

How do I write this without sounding like I’m bashing on functional programming?

I’m not an FP convert. I get that having no side-effects can be appealing at times, but I also believe that forcing every problem to be solved in an FP manner is the old “every tool is a hammer” type of situation. I recognise that what different people consider simple can vary a lot. Some people swear that FP is simple and that I just have to learn enough of it to appreciate it and “live and breathe” it. Hmmm, time will tell on that one.

In the not-too-distant future I’ll be doing more F# (work), so I’ll get more time to appreciate what FP can give me vs my heathen languages such as Go. But in the case of Go, to me the language is SOOOO simple: very few keywords, very basic concepts and a pretty decent (IMHO) standard library out of the box. Do I really care that more boilerplate needs to be coded every now and then (especially for error handling)? Honestly… no. To me the code is readable, completely understandable and, most importantly, maintainable.

I’m still new enough to FP that my mindset isn’t naturally tuned to grok it immediately. I’m not strictly OO, but I’m not FP either. One of the main complaints I keep hearing about Go is that it doesn’t have map, reduce, filter etc…. I’m not sure I really care.

IF I really wanted something (not generic… ooooo don’t go there) to map over a slice of strings (for argument’s sake), yes, I could write up something like:

func mapper(input []string, myfunc func(string) string) []string {
   results := make([]string, len(input))
   for pos, i := range input {
      results[pos] = myfunc(i)
   }
   return results
}

This is just a simple map-type function. Give it a list (slice) of strings and a function that takes a string and returns a string. Bang, done… yes, it’s a utility method, but is it critical for my code’s readability or maintainability? Personally I don’t think so.

Same goes for filter, reduce etc. To me they’re utility functions that sure, sometimes I might like to use… and other times not so much. They don’t define a language nor state if it’s a good one (or not).

Even if I wanted to go down the route of making a generic version, fortunately I don’t have to: Rob Pike has already done it. As he says on GitHub:

“I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn’t hard. Having written it a couple of years ago, I haven’t had occasion to use it once. Instead, I just use “for” loops. You shouldn’t use it either.”

I completely agree about using for loops. Yes, they’re simple… they might make it look like “oh, this coder doesn’t know how to use feature X, so they went for the dumb approach”. But seriously, I have no trouble debugging or reading for loops.

I often felt the same when coding lots of C# in the past. Some co-workers (whom I still completely admire) would jump into some complex-looking (to me) LINQ queries. Apart from LINQ being a memory hog, I never EVER found them easier to read than a simple for loop. Yes, their code was shorter, and as we know, fewer lines mean fewer lines to debug. But these LINQ queries took TIME to read and understand. I spoke to other devs there and they had the same experience: one or two people might instantly know what a LINQ query did… but most others would need to sit there for a few minutes and figure it out.

You don’t have to do that with for loops (or any other basic component of a language). Maybe this indicates that I’ve just worked with mediocre devs (and a few stars) and that I myself am just an OK pleb. But I bet that’s not the situation. Sometimes using simple primitives makes for easy-to-understand code.


Go Azure SDK, Network restrictions

Continuing my exploration of the Azure SDK for Go, the next project is to start tinkering with IP restrictions for Azure App Services. The way I usually use App Services (and attempt to make them remotely secure) is to have an API Management instance in front of an App Service. The App Service is then configured to only accept traffic from the APIM (and maybe the office/home IPs).

So, how do we get and set IP restrictions?  I’m glad you asked. 🙂

As usual, go get (literally) the azure-sdk-for-go project from GitHub. The key parts of working with network restrictions are 1) getting the app service and 2) getting the site configs.

To get an app service (or list of app services with a common prefix) the code is simply

client := web.NewAppsClient(subscriptionID)

apps, err := client.ListComplete(ctx)
if err != nil {
   log.Fatalf("unable to get list of app services: %v", err)
}

for apps.NotDone() {
   v := apps.Value()
   if strings.HasPrefix(*v.Name, prefix) {
      appServiceList = append(appServiceList, v)
   }
   if err := apps.NextWithContext(ctx); err != nil {
      log.Fatalf("unable to get next page of app services: %v", err)
   }
}

Here we’re just getting the entire list of app services through ListComplete, then going through the pages of results, searching for a given prefix and storing the ones I’m interested in.

Now that we have the list of app services (most importantly the list of resource groups and app service names) we can start getting configurations for them.

for _, app := range appServiceList {
   config, err := client.ListConfigurationsComplete(ctx,
                         *app.ResourceGroup, *app.Name)
   if err != nil {
      log.Fatalf("unable to list configs: %v", err)
   }

   cv := config.Value()
   // ...
}

Here we’re just looping over the app services we retrieved earlier. Using the resource group and name we’re able to get the configuration for the given app service via the ListConfigurationsComplete method, which returns SiteConfigResource structs.

From there we can inspect all the juicy details. In the above case we’d loop over cv.IPSecurityRestrictions and get details such as the IP restriction rule name, priority, IP address mask etc. All the details we want, to confirm we’re restricting the way we’d like to.

If we then want to modify/add a rule, we simply call client.UpdateConfiguration, passing the resource group, app service name and, most importantly, the instance of SiteConfigResource that holds the above information. Hey presto, you have a new rule created/updated.

An entire example of this can be seen on GitHub.

Azure Network SecurityGroup IP Rules

As part of my day-to-day work, I often have to either log into an Azure “jumpbox” (VM) or allow others to do so. Like any self-respecting paranoid dev, the jumpbox has a whitelist of IP addresses that are allowed to connect to it. Also, like a lot of people, I (and my co-workers) have dynamic IP addresses at home. Manually going into the Azure portal every time to adjust all the Network Security Group inbound IP settings is a pain.

I wanted to give the latest Go SDK for Azure another try. Fortunately it turned out to be pretty easy.

There are really only a few steps required:

1) Create an authorizer to communicate with the Azure Management API.

2) Create a SecurityGroup client.

3) List all security groups.

4) Modify the appropriate one and save.

I won’t bother repeating the code here (see the GitHub link earlier), but one thing that was slightly annoying is that for steps 1–3 I didn’t need to know the Azure Resource Group. In fact I intentionally didn’t want to have to specify one; I wanted the tool to be able to find any matching NSG rule. BUT, to save the change I needed the Resource Group name. To get this I had to regex it out of part of the initial response (containing the security groups). Annoying, but not critical.
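For illustration, the regex step could look something like this. This is a hypothetical helper (resourceGroupFromID is a made-up name, not the tool’s actual code), assuming the standard Azure resource ID shape of /subscriptions/&lt;id&gt;/resourceGroups/&lt;name&gt;/providers/…:

```go
package main

import (
	"fmt"
	"regexp"
)

// resourceGroupRegex pulls the resource group name out of an Azure
// resource ID, e.g. /subscriptions/1234/resourceGroups/my-rg/providers/...
// Azure treats the segment case-insensitively, hence (?i).
var resourceGroupRegex = regexp.MustCompile(`(?i)/resourceGroups/([^/]+)`)

// resourceGroupFromID returns the resource group embedded in an Azure
// resource ID, and whether one was found.
func resourceGroupFromID(id string) (string, bool) {
	m := resourceGroupRegex.FindStringSubmatch(id)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	id := "/subscriptions/1234/resourceGroups/my-rg/providers/Microsoft.Network/networkSecurityGroups/my-nsg"
	rg, ok := resourceGroupFromID(id)
	fmt.Println(rg, ok) // my-rg true
}
```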

Overall I now have a useful tool that lets me easily modify anyone’s NSG rule without a bunch of manual clicking about.

The Go SDK is definitely improving 🙂


Azure Storage Tools

After quite a number of comments about having a consistent set of tools to perform basic operations on Azure Storage resources (blobs, queues and tables), I’ve decided to write up a new set of tools called “Azure Storage Tools” (man, I’m original with naming; I should be in marketing).

The primary aims of AST are:

  • Cross-platform, so there is a consistent tool across a bunch of platforms
  • Be able to do all the common operations for blobs/queues/tables.
    • eg. for blobs we should be able to create container, upload/download blobs, list blobs etc.  You get the idea.
  • Be completely easy to use. I want to provide a simple tool with a simple set of parameters, where commands are fairly “guessable”

Instead of a single tool that provides blobs, queues and tables in one binary, I’ve decided to split this into 3, one for each type of resource. First cab off the rank is for blobs!  The tool/binary name is astblob (again, marketing GENIUS at work!!).

If a picture is worth a thousand words, here are three thousand words’ worth.

 

[screenshot: listcontainers]

Firstly, in this image we have 3 machines all running astblob: a Windows machine, a Linux machine and an OSX machine. Each is connected to the same Azure Storage account (configured through environment variables), and we’re simply asking for the containers in that account.

Easy enough. Now, for some more details.

[screenshot: listcontainercontents]

Here we’re asking for the blob contents of the “temp” container. Remember, Azure (like S3) doesn’t really have the concept of directories within containers/buckets. Blobs may have ‘/’ in their names to “fake” directories, but really those are just part of the blob name.

So now what happens if we download the temp container?

 

[screenshot: download]

Here we download the temp container to somewhere on the local filesystem. You’ll see that the blob whose name had “fake” directories in it has actually had those directories created for real (executive decision made there… by FAR I believe this is what is wanted). In both Windows and the *nix environments the “ken1” directory was created, and the files are contained within it (although not shown in the screenshot).

astblob is just the first tool from the AST suite to be released. The plan is for Queues and Tables to follow shortly, along with more functionality for Blobs.

The download for AST is at Github and binary releases are under the usual releases link there. Binaries are generated for Windows, OSX, Linux, FreeBSD, OpenBSD and NetBSD (all with 386 and AMD64 variants) although only Windows, Linux and OSX have been tested by me personally.

The AST tools (although 3 separate binaries) are all self-contained. The astblob binary is literally a single binary; no associated libs need to be copied along with it.

Before anyone comments, yes the official Azure CLI 2 handles all of the above and more but it has more dependencies (rather than a single binary) and is also a lot more complex. AST is just aimed at simple/common tasks….

Hopefully more people will find a consistent tool across multiple platforms useful!

Go embedded structs and pointer receivers

While working on a PR for the Azure Storage Go library I discovered (yet another) corner of Go that confused the hell out of me. I know enough about embedded types/structs to use them effectively (or so I thought), and I know the difference between receivers and pointer receivers. But when using them together you really need to be careful.

The specifics of the problem I hit: in the Azure Go library we have a struct called “Entity”, which is used for table storage. Now, for purposes I won’t go into, I needed to create another struct called BatchEntity, which was declared as such:

type BatchEntity struct {
   Entity
   otherstuff string
}

Now, the Entity struct has a very useful JSON formatting method with the signature:

func (e *Entity) MarshalJSON() ([]byte, error)

Please please PLEASE note that this is a pointer receiver! I spent an embarrassingly long time trying to figure out why, when I tried to marshal my BatchEntity struct, it didn’t call this MarshalJSON method as I expected. It turned out that I was handing the marshaller a value rather than a pointer, and methods with pointer receivers are only in the method set of the pointer type, so the value never satisfied json.Marshaler and the default marshalling kicked in instead. This is a big warning to me: when dealing with embedded types, make sure you check the receivers of the base structs. Would have saved me hours!!!

Kind of surprised there wasn’t a compile warning/error around that….   but I need to dig further.

AzureCopy Go now with added CopyBlob flag

AzureCopy (Go version) 0.2.2 has now been released. The major benefit is that when copying to Azure we can now use the absolutely AWESOME CopyBlob functionality Azure provides. This allows blobs to be copied from S3 (for example) to Azure without having to go via the machine executing the instructions (and using my bandwidth!).

An example of copying from S3 to Azure is as simple as:

azurecopycommand_windows_amd64.exe -S3DefaultAccessID="S3 Access ID" -S3DefaultAccessSecret="S3 Access Secret" -S3DefaultRegion="us-west-2" -dest="https://myaccount.blob.core.windows.net/mycontainer/" -AzureDefaultAccountName="myaccount" -AzureDefaultAccountKey="Azure key" -source=https://s3.amazonaws.com/mybucket/ -copyblob

The key thing is the -copyblob flag. This tells AzureCopy to do its magic!

By default AzureCopy-Go will copy 5 blobs concurrently (so as not to overload your own bandwidth), but with the Azure CopyBlob feature feel free to crank that setting up using the -cc flag (eg. add -cc=20) 🙂