Go Azure SDK, Network restrictions

Continuing my exploration of the Azure SDK for Go, the next project is to start tinkering with IP restrictions for Azure App Services. The way I usually use App Services (and attempt to make them remotely secure) is to have an API Management instance in front of an App Service. The App Service is then configured to only accept traffic from the APIM (and maybe the office/home IPs).

So, how do we get and set IP restrictions?  I’m glad you asked. 🙂

As usual, go get (literally) the azure-sdk-for-go project from GitHub. The key parts of working with network restrictions are 1) getting the app service and 2) getting the site configs.

To get an app service (or list of app services with a common prefix) the code is simply

client := web.NewAppsClient(subscriptionID)

apps, err := client.ListComplete(ctx)
if err != nil {
   log.Fatalf("unable to get list of app services: %v", err)
}


for apps.NotDone() {
   v := apps.Value()
   if strings.HasPrefix(*v.Name, prefix) {
      appServiceList = append(appServiceList, v)
   }
   if err := apps.NextWithContext(ctx); err != nil {
      log.Fatalf("unable to get next page of app services: %v", err)
   }
}

Here we’re just getting the entire list of app services through ListComplete, then going through the pages of results, searching for a given prefix and storing the ones I’m interested in.

Now that we have the list of app services (most importantly the list of resource groups and app service names) we can start getting configurations for them.

for _, app := range appServiceList {
   config, err := client.ListConfigurationsComplete(ctx, 
                         *app.ResourceGroup, *app.Name)
   if err != nil {
      log.Fatalf("unable to list configs: %v", err)
   }

   cv := config.Value()
   // ...
}

Here we’re just looping over the app services we retrieved earlier. Using the resource group and name we’re able to get the configuration for the given app service via the ListConfigurationsComplete method. This returns an iterator over SiteConfigResource structs.

From there we can inspect all the juicy details. In the above case we’d loop over cv.IPSecurityRestrictions and get details such as the IP restriction rule name, priority, IP address mask etc. All the details we need to confirm we’re restricting traffic the way we’d like.

If we then want to modify or add a rule, we simply call client.UpdateConfiguration, passing the resource group, app service name and, most importantly, the instance of SiteConfigResource that holds the above information. Hey presto, you have a new rule created/updated.
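As a rough sketch (continuing the loop above, and not the full example from the repo), adding a rule could look something like the following. The IPSecurityRestriction field names (Name, IPAddress, Priority, Action) and the UpdateConfiguration call here are assumptions based on the web management package and may differ between SDK versions, and the rule values are purely hypothetical:

// Assumed field names/types; check your SDK version.
ruleName := "allow-apim"       // hypothetical rule name
allowedIP := "203.0.113.10/32" // hypothetical address range to allow
priority := int32(100)
action := "Allow"

if cv.IPSecurityRestrictions == nil {
   cv.IPSecurityRestrictions = &[]web.IPSecurityRestriction{}
}

restrictions := append(*cv.IPSecurityRestrictions, web.IPSecurityRestriction{
   Name:      &ruleName,
   IPAddress: &allowedIP,
   Priority:  &priority,
   Action:    &action,
})
cv.IPSecurityRestrictions = &restrictions

// Push the modified config back to the app service.
if _, err := client.UpdateConfiguration(ctx, *app.ResourceGroup, *app.Name, cv); err != nil {
   log.Fatalf("unable to update configuration: %v", err)
}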

An entire example of this can be seen on GitHub.

Azure Network SecurityGroup IP Rules

As part of my day to day work, I often have to either log into an Azure “jumpbox” (VM) or allow others to do so. Like any self-respecting paranoid dev, the jumpbox has a whitelist of IP addresses that are allowed to connect to it. Also, like a lot of people, I (and my co-workers) have dynamic IP addresses at home. Manually going into the Azure portal every time to adjust all the Network Security Group inbound IP settings is a pain.

I wanted to give the latest Go SDK for Azure another try. Fortunately it turned out to be pretty easy.

There are only really a couple of steps required.

1) Create an authorizer to communicate with the Azure Management API.

2) Create a security group client.

3) List all security groups.

4) Modify the appropriate one and save.

I won’t bother repeating the code here (see the GitHub link earlier), but one thing that was slightly annoying is that for steps 1-3 I didn’t need to know the Azure Resource Group. In fact I intentionally didn’t want to have to specify one; I wanted the tool to be able to find any matching NSG rule. BUT, to save the change I needed the Resource Group name. To get it I had to regex it out of part of the initial response (containing the security groups). Annoying but not critical.
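For a flavour of what steps 1-3 (and the resource group extraction) look like, here’s a minimal sketch. It assumes the network management package from azure-sdk-for-go and an authorizer built from environment variables; the exact package path/version may differ from what the actual tool uses:

package main

import (
   "context"
   "log"
   "regexp"

   "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
   "github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
   subscriptionID := "..." // your subscription ID

   // 1) Create an authorizer (here from the AZURE_* environment variables).
   authorizer, err := auth.NewAuthorizerFromEnvironment()
   if err != nil {
      log.Fatalf("unable to create authorizer: %v", err)
   }

   // 2) Create the security group client.
   client := network.NewSecurityGroupsClient(subscriptionID)
   client.Authorizer = authorizer

   ctx := context.Background()

   // 3) List every NSG in the subscription (no resource group required).
   nsgs, err := client.ListAllComplete(ctx)
   if err != nil {
      log.Fatalf("unable to list security groups: %v", err)
   }

   // The resource group isn't a separate field on the result, but it is part
   // of the resource ID, so it can be pulled out with a regex.
   rgRegex := regexp.MustCompile(`/resourceGroups/([^/]+)/`)

   for nsgs.NotDone() {
      nsg := nsgs.Value()
      if match := rgRegex.FindStringSubmatch(*nsg.ID); match != nil {
         log.Printf("NSG %s is in resource group %s", *nsg.Name, match[1])
         // 4) would then tweak the relevant rule and save via
         //    client.CreateOrUpdate using the resource group extracted above.
      }
      if err := nsgs.NextWithContext(ctx); err != nil {
         log.Fatalf("unable to get next page of security groups: %v", err)
      }
   }
}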

Overall I now have a useful tool that lets me easily modify anyone’s NSG rule without a bunch of manual clicking about.

The Go SDK is definitely improving 🙂


AzureCopy GO

The Go version of AzureCopy is slowly making progress. So far I’ve just been focusing on the local filesystem and Azure (since I can do those while offline on the train commute thanks to the Azure Storage Emulator). The next plan is S3 integration, primarily because S3 -> Azure seems to be the big use case for the original AzureCopy.

I’m planning on frequent releases once the basic S3 code is added (hopefully within the next few days). Not all features from the original AzureCopy will be available; I’ll simply be focusing on 1) listing content and 2) copying content. There will be a few new additions such as a “don’t overwrite” flag so copies can be continued after being stopped (this has been requested by a few people).

Of course, the original AzureCopy will still be developed (mainly from a NuGet packaging point of view), but if you just need a command line tool to copy (and maybe need it on multiple platforms) then this new version is probably the way to go.

Hopefully the S3 code will drop in a few days, then I’ll have a first binary release for Linux, MacOS and Windows, and we’ll see how things proceed from there.

Adventures in GO!

I’ve dabbled (ok ok, writing and rewriting “hello world” many times) in Go for a few years but have never really given it a serious Go (boom boom!). But after buying Go in Action and going through a number of great Pluralsight courses (particularly by Nigel Poulton and Mike Van Sickle) I’ve decided to give it another crack.

Instead of going through various tutorials I’ve decided to try porting (well, more likely rewriting from scratch) my AzureCopy project. The original AzureCopy is all C# running on the .NET Framework 4.*. Although I DO (well, did until recently) want to get it migrated to .NET Core, I thought this would be a good chance to learn Go PROPERLY.

I’m still trying to get my head around OO in a “kinda-is, kinda-isn’t, sorta, maybe” OO language like Go. Going back to structs (ahh glory days of C/C++), interfaces and having the magic of pointers back is really giving me a nostalgia kick.

The rough outline for this AzureCopy rewrite is basically as follows:

  • Get my dev environment sorted out (currently VSCode)
  • Basic solution structure sorted, rough architecture
  • Be able to copy to/from the local filesystem to Azure Blob Storage
  • List blobs/containers in Azure
  • Add S3
  • Add DropBox
  • Add OneDrive

Really don’t think I’ll bother with SharePoint this time around; it was a bitch to maintain in the existing version.

I’m unsure what the Go support is like with those cloud providers etc. I know the Azure one seems mostly there (well, for the stuff I need) but I get the distinct impression it’s the poor cousin to .NET, Java, Python etc. I’ve yet to investigate S3’s Go offerings. Hopefully if these libs aren’t in great shape I might get a chance to finally get my name on a contributors list somewhere. 🙂

I’m sure my Go will suck… but am hoping it will get better. The new version of AzureCopy is of course on GitHub.

Azure Table Storage, a revisit

It’s been a while since I used Azure Table Storage (ATS) in anger. Well, kind of. I use it most days for various projects but it’s been about 18 months since I tried performing any bulk loads/querying/benchmarking etc.

The reason for my renewed interest is that a colleague mentioned that as they added more non-indexed parameters to their ATS query, it slowed down in a major fashion. This didn’t tally with my previous experience. So I wondered. Yes, if you don’t query ATS via the Partition Key and Row Key (i.e. the indexed fields) then it gets a lot slower, but everything is relative, right? It might be a lot slower, but it could still be “quick enough” for your purpose. So, I’ve coded some very, very basic tests and tinkered.

My dummy data consisted of 2 million rows, split equally over 5 partitions. Each entity consisted of 10 fields (including partition key and row key). The fields (excluding row and partition) were 2 x strings, 2 x ints, 2 x doubles and 2 x dates. Currently my tests only focus on the strings and 1 of the ints (but I’ll be performing more tests in the near future).

The string fields were populated with a random word selected from a pool of 104 words. The ints and doubles were random values between 0 and 1 million. The datetimes were random dates between 1 Jan 1970 and 1 Jan 2016. I repeat, the doubles and dates have not yet been included in any tests.

I tested a few different types of queries, starting simple and getting slightly more complex with each change. Firstly there is the query that takes the partition key and a value for a single (string) field. Interestingly, the results were:

  • Partition key and field1: 4706ms (av)
  • Partition key, field1 and field2: 5368ms (av)
  • Partition key, field1, field2 and field3: 7232ms (av)

So, despite only one field (the partition key) being indexed, adding the other fields into the search didn’t make Azure Table Storage completely unusable and slow. Yes, it *almost* doubled the query time, but it wasn’t the huge difference that my colleague had experienced.
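For reference, the queries were just OData filter strings along these lines (a sketch with placeholder field names, not my actual schema):

package main

import "fmt"

func main() {
   // Only PartitionKey (and RowKey) are indexed; Field1/Field2 are placeholder
   // names for the non-indexed string properties being filtered on.
   pk := "partition1"

   q1 := fmt.Sprintf("PartitionKey eq '%s' and Field1 eq 'foo'", pk)
   q2 := fmt.Sprintf("PartitionKey eq '%s' and Field1 eq 'foo' and Field2 eq 'bar'", pk)

   fmt.Println(q1)
   fmt.Println(q2)
}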

One thing to remember, although I created 2 million rows, these were split over 5 partition keys, so in effect the above queries were *really* only going over 400k rows.

More tests to follow…..

AzureCopy UI version

Seeing the popularity of AzureCopy increasing (very satisfying I have to admit) I’ve been thinking about writing a few desktop application versions to assist people. These would be “native” applications (not native as in bare metal, but I mean purpose built per OS) and would require a fair bit of design and effort.

The question I keep asking myself is, “Is it worth it?”. Migration from one cloud storage provider to another is a rare task. Even then it would usually (I hope) be performed by a fairly technical person, so command line tools (such as AzureCopy) aren’t daunting. So is a UI based application remotely needed by anyone? Chances are I’ll write them regardless (I’ve already started on the Windows version) but am honestly wondering if I’m writing them without an audience.

Thoughts?

AzureCopy and Virtual Directories

AzureCopy has finally had some love and has been updated as per some requests that have come in. Firstly, virtual directories in S3 and Azure Blob Storage are now handled in a consistent manner.

Remember, neither S3 nor ABS really has directories. They just use blob names with the ‘/’ character in them, and various tools out on the interweb use that to simulate directories.

Now, copying files between S3 and ABS has always been easy, but what if you want to utilise these virtual directories?

e.g. I have a blob on S3 called “dir1/subdir1/file1” and I want to copy that to Azure (or elsewhere for that matter). But I want the destination on Azure to be my temp container and the resulting blob to just be called “subdir1/file1”.

In this example we’re pretending to copy a subdirectory and its file from S3 to Azure. Remember, there is no spoon directory.

Now we can perform the command:

azurecopy.exe -i https://s3.amazonaws.com/mybucket/dir1/ -o https://myacct.blob.core.windows.net/temp/

The result will be that in my Azure container (temp) I’ll have a blob called “subdir1/file1”.
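The renaming is essentially just prefix stripping on the blob name; a minimal illustration (not AzureCopy’s actual code):

package main

import (
   "fmt"
   "strings"
)

func main() {
   // Source blob name and the "virtual directory" prefix supplied via -i.
   blobName := "dir1/subdir1/file1"
   inputPrefix := "dir1/"

   // The destination blob keeps everything after the prefix.
   destName := strings.TrimPrefix(blobName, inputPrefix)
   fmt.Println(destName) // subdir1/file1
}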

In addition, you can now copy these blobs with virtual directories to and from Dropbox, but in that case it will make/read REAL directories.

AzureCopy is available via NuGet, as a command line executable and as source.