Practical coding vs “the fancy stuff”

How do I write this without sounding like I’m bashing on functional programming?

I’m not an FP convert. I get that having no side-effects can be appealing at times, but I also believe that forcing every problem to be solved in an FP manner is the old “every tool is a hammer” type of situation. I recognise that what different people consider simple can vary a lot. Some people swear that FP is simple and that I just have to learn enough of it to appreciate and “live and breathe” it. Hmmm, time will tell on that one.

In the not-too-distant future I’ll be doing more F# (work), so I’ll get more time to appreciate what FP can give me vs my heathen languages such as Go. But in the case of Go, to me the language is SOOOO simple: very few keywords, very basic concepts and a pretty decent (IMHO) standard library out of the box. Do I really care that more boilerplate needs to be coded every now and then (especially for error handling)? Honestly… no. To me the code is readable, completely understandable and, most importantly, maintainable.

I’m still new enough to FP that my mindset isn’t naturally tuned to grok it immediately. No, I’m not strictly OO, but I’m not FP either. One of the main complaints I keep hearing about Go is that it doesn’t have map, reduce, filter etc. I’m not sure I really care.

IF I really wanted something (not generic… ooooo don’t go there) to do mapping over an array of strings (for argument’s sake), then yes, I could write up something like:

func mapper(input []string, myfunc func(string) string) []string {
   results := make([]string, len(input))
   for pos, i := range input {
      results[pos] = myfunc(i)
   }
   return results
}

This is just a simple map-type function. Give it a list (slice) of strings and a function that takes a string and returns a string. Bang, done… yes, it’s a utility method, but is it critical for my code’s readability or maintainability? Personally I don’t think so.
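
And using it is about as dull as you’d hope; strings.ToUpper from the standard library already has the right signature:

names := []string{"ford", "zaphod", "trillian"}
upper := mapper(names, strings.ToUpper)
// upper is now ["FORD", "ZAPHOD", "TRILLIAN"]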

Same goes for filter, reduce etc. To me they’re utility functions that, sure, sometimes I might like to use… and other times not so much. They don’t define a language, nor determine whether it’s a good one (or not).

Even if I wanted to go down the route of making a generic version, fortunately I don’t have to: Rob Pike has already done it. As he says on GitHub:

“I wanted to see how hard it was to implement this sort of thing in Go, with as nice an API as I could manage. It wasn’t hard. Having written it a couple of years ago, I haven’t had occasion to use it once. Instead, I just use “for” loops. You shouldn’t use it either.”

I completely agree about using for loops. Yes, they’re simple… they might look like “oh, this coder doesn’t know how to use feature X, so they went for the dumb approach”. But seriously, I don’t have issues debugging or reading for loops.

I often felt the same when coding lots of C# in the past. Some co-workers (whom I still completely admire) would jump into some complex-looking (to me) LINQ queries. Apart from LINQ being a memory hog, I never EVER found them easier to read than a simple for loop. Yes, their code was shorter, and as we know, fewer lines mean fewer lines to debug. But these LINQ queries took TIME to read and understand. I spoke to other devs there and they had the same experience: 1-2 people might instantly know what these LINQ queries did… but most others would need to sit there for a few minutes and figure it out.

You don’t have to do that with for loops (or any other basic component of a language). Maybe this indicates that I’ve just worked with mediocre devs (and a few stars) and I myself am just an ok pleb. But I bet that’s not the situation. Sometimes using simple primitives makes for easy to understand code.


Go Azure SDK, Network restrictions

Continuing my exploration of the Azure SDK for Go, the next project is to start tinkering with IP restrictions for Azure App Services. The way I usually use App Services (and attempt to make them remotely secure) is to have an API Management instance in front of an App Service. The App Service is then configured to only accept traffic from the APIM (and maybe the office/home IPs).

So, how do we get and set IP restrictions?  I’m glad you asked. 🙂

As usual, go get (literally) the azure-sdk-for-go project on GitHub. The key parts of working with network restrictions are 1) getting the app service and 2) getting the site configs.

To get an app service (or a list of app services with a common prefix) the code is simply:

client := web.NewAppsClient(subscriptionID)

apps, err := client.ListComplete(ctx)
if err != nil {
   log.Fatalf("unable to get list of app services: %v", err)
}

var appServiceList []web.Site
for apps.NotDone() {
   v := apps.Value()
   if strings.HasPrefix(*v.Name, prefix) {
      appServiceList = append(appServiceList, v)
   }
   if err := apps.NextWithContext(ctx); err != nil {
      log.Fatalf("unable to get next page of app services: %v", err)
   }
}

Here we’re getting the entire list of app services through ListComplete, then going through the pages of results, searching for a given prefix and storing the ones I’m interested in.

Now that we have the list of app services (most importantly the list of resource groups and app service names) we can start getting configurations for them.

for _, app := range appServiceList {
   config, err := client.ListConfigurationsComplete(ctx,
                         *app.ResourceGroup, *app.Name)
   if err != nil {
      log.Fatalf("unable to list configs: %v", err)
   }

   cv := config.Value()
   // ...
}

Here we’re just looping over the app services we retrieved earlier. Using the resource group and name we’re able to get the configuration for each app service via the ListConfigurationsComplete method, which gives us SiteConfigResource structs to work with.

From there we can inspect all the juicy details. In the above case we’d loop over cv.IPSecurityRestrictions and get details such as the IP restriction rule name, priority, IP address mask etc. All the details we need to confirm we’re restricting traffic the way we’d like.
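
A rough sketch of that inspection (assuming the IPSecurityRestrictions field is populated; every field is a pointer and may be nil in practice):

if cv.IPSecurityRestrictions != nil {
   for _, rule := range *cv.IPSecurityRestrictions {
      // Print the basics of each restriction rule.
      fmt.Printf("rule %s: priority %d, ip %s\n",
         *rule.Name, *rule.Priority, *rule.IPAddress)
   }
}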

If we then want to modify or add a rule, we simply call client.UpdateConfiguration, passing the resource group, the app service name and, most importantly, the instance of SiteConfigResource that holds the above information. Hey presto, you have a new rule created/updated.
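
Something along these lines (just a sketch; the rule values below are made up and the exact IPSecurityRestriction field set can vary between SDK versions):

// Hypothetical rule values, purely for illustration.
name := "allow-apim"
ip := "203.0.113.10/32"
action := "Allow"
priority := int32(100)

restrictions := append(*cv.IPSecurityRestrictions, web.IPSecurityRestriction{
   Name:      &name,
   IPAddress: &ip,
   Action:    &action,
   Priority:  &priority,
})
cv.IPSecurityRestrictions = &restrictions

// Push the modified config back to the app service.
if _, err := client.UpdateConfiguration(ctx, *app.ResourceGroup, *app.Name, cv); err != nil {
   log.Fatalf("unable to update configuration: %v", err)
}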

An entire example of this can be seen on GitHub.

Azure Network SecurityGroup IP Rules

As part of my day to day work, I often have to either log into an Azure “jumpbox” (VM) or allow others to do so. Like any self-respecting paranoid dev, the jumpbox has a whitelist of IP addresses that are allowed to connect to it. Also, like a lot of people, I (and my co-workers) have dynamic IP addresses at home. Manually going into the Azure portal every time to adjust all the Network Security Group inbound IP settings is a pain.

I wanted to give the latest Go SDK for Azure another try. Fortunately it turned out to be pretty easy.

There are only really a couple of steps required:

1) Create an authorizer to communicate with the Azure Management API.

2) Create a SecurityGroups client.

3) List all security groups.

4) Modify the appropriate one and save (a rough sketch of these steps follows below).
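
For flavour, a minimal sketch of those four steps (assuming the network and auth packages from azure-sdk-for-go; variable names and error handling are illustrative):

// 1) Create an authorizer (here, from environment variables).
authorizer, err := auth.NewAuthorizerFromEnvironment()
if err != nil {
   log.Fatalf("unable to create authorizer: %v", err)
}

// 2) Create the security groups client.
client := network.NewSecurityGroupsClient(subscriptionID)
client.Authorizer = authorizer

// 3) List every NSG in the subscription (no resource group needed yet).
nsgs, err := client.ListAllComplete(ctx)
if err != nil {
   log.Fatalf("unable to list security groups: %v", err)
}

// 4) Find the NSG we care about, tweak its rules and save.
for nsgs.NotDone() {
   nsg := nsgs.Value()
   if *nsg.Name == targetNSGName {
      // ... modify the relevant inbound rule here ...
      // Saving DOES require the resource group (see below).
      if _, err := client.CreateOrUpdate(ctx, resourceGroup, *nsg.Name, nsg); err != nil {
         log.Fatalf("unable to update NSG: %v", err)
      }
   }
   if err := nsgs.NextWithContext(ctx); err != nil {
      log.Fatalf("unable to get next page of NSGs: %v", err)
   }
}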

I won’t bother repeating the full code here (see the github link earlier), but one thing that was slightly annoying is that for steps 1-3 I didn’t need to know the Azure Resource Group. In fact I intentionally didn’t want to have to specify one; I wanted the tool to be able to find any matching NSG rule. BUT, to save the change I needed the Resource Group name. To get this I had to regex it out of part of the initial response (containing the security groups). Annoying but not critical.
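
Since every Azure resource ID embeds the resource group, pulling it out is straightforward (a sketch; the helper name is mine):

// An NSG ID looks like:
// /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/networkSecurityGroups/<name>
var rgRegex = regexp.MustCompile(`/resourceGroups/([^/]+)/`)

func resourceGroupFromID(id string) string {
   matches := rgRegex.FindStringSubmatch(id)
   if len(matches) < 2 {
      return ""
   }
   return matches[1]
}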

Overall I now have a useful tool that lets me easily modify anyone’s NSG rule without a bunch of manual clicking about.

The Go SDK is definitely improving 🙂


AzureCopy GO

The Go version of AzureCopy is slowly making progress. So far I’ve just been focusing on the local filesystem and Azure (since I can do those while offline on the train commute, thanks to the Azure Storage Emulator). The next plan is S3 integration, primarily because S3 -> Azure seems to be the big use case for the original AzureCopy.

I’m planning on frequent releases once the basic S3 code is added (hopefully within the next few days). Not all features from the original AzureCopy will be available; I’ll simply be focusing on 1) listing content and 2) copying content. There will be a few new additions, such as a “don’t overwrite” flag so copies can be resumed after being stopped (this has been requested by a few people).

Of course, the original AzureCopy will still be developed (mainly from a NuGet packaging point of view), but if you just need a command line tool to copy (and maybe need it on multiple platforms) then this new version is probably the way to go.

Hopefully the S3 code will drop in a few days, then I’ll have a first binary release for Linux, macOS and Windows, and we’ll see how things proceed from there.

Adventures in GO!

I’ve dabbled in Go for a few years (ok ok, writing and rewriting “hello world” many times) but have never really given it a serious Go (boom boom!). But after buying Go in Action and going through a number of great Pluralsight courses (particularly those by Nigel Poulton and Mike Van Sickle) I’ve decided to give it another crack.

Instead of going through various tutorials, I’ve decided to try porting (well, more likely rewriting from scratch) my AzureCopy project. The original AzureCopy is all C# running on .NET Framework 4.*. Although I DO (well, did until recently) want to get it migrated to .NET Core, I thought this would be a good chance to learn Go PROPERLY.

I’m still trying to get my head around OO in a “kinda-is, kinda-isn’t, sorta, maybe” OO language like Go. Going back to structs (ahh glory days of C/C++), interfaces and having the magic of pointers back is really giving me a nostalgia kick.
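
To make that concrete, Go’s take on OO is basically structs plus implicitly-satisfied interfaces. A hypothetical sketch of the sort of shape I have in mind (names are mine, not the real AzureCopy API):

// Handler is anything that can list the contents of a cloud container.
// No "implements" keyword; satisfaction is implicit.
type Handler interface {
   List(container string) ([]string, error)
}

type AzureHandler struct {
   accountName string
}

// Method on a pointer receiver, hanging off a plain old struct.
func (h *AzureHandler) List(container string) ([]string, error) {
   // ... talk to Azure Blob Storage here ...
   return nil, nil
}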

The rough outline for this AzureCopy rewrite is basically as follows:

  • Get my dev environment sorted out (currently VSCode)
  • Basic solution structure sorted, rough architecture
  • Be able to copy between the local filesystem and Azure Blob Storage
  • List blobs/containers in Azure
  • Add S3
  • Add DropBox
  • Add OneDrive

Really don’t think I’ll bother with Sharepoint this time around; it was a bitch to maintain in the existing version.

I’m unsure what the Go support is like for those cloud providers. I know the Azure one seems mostly there (well, for the stuff I need) but I get the distinct impression it’s the poor cousin to .NET, Java, Python etc. I’ve yet to investigate S3’s Go offerings. Hopefully, if these libs aren’t in great shape, I might get a chance to finally get my name on a contributors list somewhere. 🙂

I’m sure my Go will suck… but I’m hoping it will get better. The new version of AzureCopy is of course on GitHub.

Azure Table Storage, a revisit

It’s been a while since I used Azure Table Storage (ATS) in anger. Well, kind of. I use it most days for various projects but it’s been about 18 months since I tried performing any bulk loads/querying/benchmarking etc.

The reason for my renewed interest is that a colleague mentioned that as they added more non indexed parameters to their ATS query, it was slowing down in a major fashion. This didn’t tally with my previous experience. So I wondered. Yes, if you don’t query ATS via the Partition Key and Row Key (ie the indexed fields) then it gets a lot slower, but everything is relative, right? It might be a lot slower, but it could still be “quick enough” for your purpose. So, I’ve coded some very very basic tests and tinkered.

My dummy data consisted of 2 million rows, split equally over 5 partitions. Each entity consisted of 10 fields (including partition key and row key). The fields (excluding row and partition) were 2 x strings, 2 x ints, 2 x doubles and 2 x dates. Currently my tests only focus on the strings and 1 of the ints (but I’ll be performing more tests in the near future).

The string fields were populated with a random word selected from a pool of 104. The ints and doubles were random values between 0 and 1 million. The datetimes were random dates between 1 Jan 1970 and 1 Jan 2016. I repeat, the doubles and dates have not yet been included in any tests.

I tested a few different types of queries, starting simple and getting slightly more complex with each change. Firstly there is the query that takes the partition key and a value for field1 (a string). Interestingly, the results were:

Partition key + field1: 4706ms (av)
Partition key + field1 + field2: 5368ms (av)
Partition key + field1 + field2 + field3: 7232ms (av)

So, despite only one field (the partition key) being indexed, adding the other fields to the query didn’t make Azure Table Storage unusable. Yes, it *almost* doubled the query time, but it wasn’t the huge difference my colleague had experienced.

One thing to remember, although I created 2 million rows, these were split over 5 partition keys, so in effect the above queries were *really* only going over 400k rows.
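
For reference, the queries themselves were plain OData filters of roughly this shape (property names here are illustrative, not my actual schema):

PartitionKey eq 'p1' and Field1 eq 'casual'
PartitionKey eq 'p1' and Field1 eq 'casual' and Field2 eq 'window'
PartitionKey eq 'p1' and Field1 eq 'casual' and Field2 eq 'window' and Field3 eq 12345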

More tests to follow…

AzureCopy UI version

Seeing the popularity of AzureCopy increasing (very satisfying I have to admit) I’ve been thinking about writing a few desktop application versions to assist people. These would be “native” applications (not native as in bare metal, but I mean purpose built per OS) and would require a fair bit of design and effort.

The question I keep asking myself is, “Is it worth it?”. Migration from one cloud storage provider to another is a rare task. Even then it would usually (I hope) be performed by a fairly technical person, so a command line tool (such as AzureCopy) isn’t daunting. So is a UI based application remotely needed by anyone? Chances are I’ll write them regardless (I’ve already started on the Windows version) but I’m honestly wondering if I’m writing them without an audience.

Thoughts?