AzureCopy API update.

The AzureCopy library NuGet package has hit a milestone of 103 downloads (probably six of which are mine, admittedly), but it appears people are at least curious about what it can provide.

So to celebrate I’ve decided to change the API. Better to do it sooner rather than later, I believe. The changes aren’t breaking ones; I’ve been adding some methods to simplify the process of reading and writing blobs.

Up ‘til now every call had to deal with URLs, and URLs aren’t fun when they’re potentially long and complex. To rectify this I’ve started having the library itself generate the URLs, requiring the user to provide only minimal input. This changes the way AzureCopy is used, but not in any critical fashion.

When using URLs, you could (in theory) specify any URL for any Azure/S3/SkyDrive account you liked. In practice, of course, your app.config file has the login details for only specific accounts, so that flexibility was never really there. AzureCopy now has the option of providing a base URL to the constructors of the various IBlobHandler implementations. This base URL is then used behind the scenes to construct the full URLs at runtime.

e.g. If I supplied the base URL as  and then started to copy blob ABC from container XYZ, the library would simply concatenate the details in the right order to get the correct URL.
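For illustration, the concatenation described above might look something like the following sketch. `BuildBlobUrl` is a hypothetical helper, not part of AzureCopy’s actual API, and the library’s real implementation may differ:

```csharp
using System;

static class BlobUrlSketch
{
    // Joins base URL, container and blob name into a single URL,
    // trimming stray slashes and skipping an empty (root) container.
    public static string BuildBlobUrl(string baseUrl, string container, string blobName)
    {
        var parts = new[] { baseUrl.TrimEnd('/'), container.Trim('/'), blobName.TrimStart('/') };
        return string.Join("/", Array.FindAll(parts, p => p.Length > 0));
    }
}
```

So a base URL of `https://myaccount.blob.core.windows.net`, container `XYZ` and blob `ABC` would come out as `https://myaccount.blob.core.windows.net/XYZ/ABC`, with an empty container simply dropping out of the path.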

This means that an example I wrote earlier is still valid, but now there is an easier way.

var s3Url = "";
var azureUrl = "";

var sourceHandler = new S3Handler(s3Url);
var targetHandler = new AzureHandler(azureUrl);

var blob = sourceHandler.ReadBlob("", "test.png");

targetHandler.WriteBlob("temp", blob);

This means that manual URLs only need to be used when creating a new instance of an IBlobHandler. In the above case it’s saying: copy "test.png" from my S3 account. The "" indicates the container to copy from (in this case, the root container). The blob will be copied to Azure, specifically into my "temp" container.

On a side note:

Speaking of containers, I’m still debating how to handle "fake" directories in S3. Keeping with what people are used to with S3, I think I’ll follow the herd and just concatenate container names to the blob name and pretend it’s a directory. Ugly, but it’s the status quo.
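The "fake directory" convention amounts to embedding slashes in the blob key, something like the sketch below. `ToS3Key` is an illustrative helper of my own naming, not an AzureCopy method:

```csharp
using System;

static class FakeDirectorySketch
{
    // S3 has no real directories; a key like "dir1/dir2/test.png" merely
    // pretends to be one. Here the "directory" is concatenated onto the
    // blob name, with an empty directory meaning the root.
    public static string ToS3Key(string fakeDirectory, string blobName)
    {
        return string.IsNullOrEmpty(fakeDirectory)
            ? blobName
            : fakeDirectory.Trim('/') + "/" + blobName;
    }
}
```

Tools that list S3 by delimiter (usually `/`) will then present such keys as if they lived in nested folders, which is exactly the herd behaviour mentioned above.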

4 thoughts on "AzureCopy API update."

  1. Very nice project. I’d like to use your nuget lib to direct-copy an S3 URL which includes a SharedAccessKey to an Azure Blob of the same name. The S3 link looks like this:

    I’m poking around to find the proper combination of API calls. Any tips greatly appreciated!

    • If the Shared Access Key is still valid, then yes, you could probably just hit StartCopyFromBlob directly. The AzureBlobCopyHandler also generates the shared access key itself when required. Out of interest, is this a one-off task, or are you planning on integrating this into an application of your own?

      If you just need a tool to do this now, you could try the pre-compiled azurecopy binary I have on Github ( ). You can just give the S3 URL and Azure URL and it should be fine (and yes, it regenerates the SAK itself).
