The AzureCopy library NuGet package has hit a milestone of 103 downloads (probably 6 of which are mine, admittedly), but it appears people are at least curious about what it can provide.
So to celebrate I’ve decided to change the API. Better to do it sooner rather than later, I believe. The change isn’t a breaking one; I’ve been adding some methods to simplify the process of reading and writing blobs.
Up ‘til now every call had to deal with URLs, which aren’t fun when they’re potentially long and complex. To rectify this I’ve started having the library itself generate the URLs, requiring the user to provide only minimal input. This changes the way AzureCopy is used, but not in any critical fashion.
When using URLs, it meant you could (in theory) specify any URL for any Azure/S3/SkyDrive account you liked. In practice, of course, your app.config file has the login details for only specific accounts, so this flexibility was never really there. AzureCopy now has the option of providing a base URL to the constructors of the various IBlobHandler implementations. This base URL is then used behind the scenes to construct the full URLs at runtime.
e.g. If I supplied the base URL http://kenfaulkner.blob.windows.net and then started to copy blob ABC from container XYZ, the library would simply concatenate the details in the right order to get the correct URL.
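To illustrate the idea, here is a minimal sketch of that kind of URL construction. The helper name (BuildBlobUrl) and the exact joining rules are my own assumptions for illustration; AzureCopy's internals may differ.

```csharp
using System;

class UrlSketch
{
    // Hypothetical helper: join a base URL, an optional container, and a
    // blob name into one full URL, in that order. An empty container
    // (the "root" case) is simply skipped.
    static string BuildBlobUrl(string baseUrl, string container, string blobName)
    {
        var url = baseUrl.TrimEnd('/');
        if (!string.IsNullOrEmpty(container))
            url += "/" + container;
        return url + "/" + blobName;
    }

    static void Main()
    {
        // Copying blob ABC from container XYZ against the example base URL:
        Console.WriteLine(
            BuildBlobUrl("http://kenfaulkner.blob.windows.net", "XYZ", "ABC"));
        // http://kenfaulkner.blob.windows.net/XYZ/ABC
    }
}
```

Nothing clever here, but it shows why the caller no longer needs to care about URL shapes at all.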
This means that an example I wrote earlier is still valid, but now there is an easier way.
var sourceHandler = new S3Handler(s3Url);
var targetHandler = new AzureHandler(azureUrl);
var blob = sourceHandler.ReadBlob("", "test.png");
targetHandler.WriteBlob("temp", blob);
This means that manual URLs only need to be used when creating a new instance of an IBlobHandler. In the above case it’s saying copy “test.png” from my S3 account. The “” indicates the container to copy from (so in this case it just means the root container). The blob will be copied to Azure, specifically into my “temp” container.
On a side note:
Speaking of containers, I’m still debating how to handle “fake” directories in S3. Keeping with what people are used to with S3, I think I’ll follow the herd and just concat container names to the blob name and pretend it’s a directory. Ugly, but it’s the status quo.
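The “follow the herd” approach boils down to plain string concatenation: S3 has no real directories, so the “directory” is just a prefix folded into a single flat key. A rough sketch, assuming a hypothetical helper name (MakeS3Key) and a “/” delimiter:

```csharp
using System;

class S3KeySketch
{
    // Hypothetical sketch of faking directories in S3: prepend the
    // pretend directory path to the blob name with a "/" delimiter,
    // producing one flat key. S3 itself sees no hierarchy at all.
    static string MakeS3Key(string fakeDirectory, string blobName)
    {
        if (string.IsNullOrEmpty(fakeDirectory))
            return blobName;                         // root "container"
        return fakeDirectory.TrimEnd('/') + "/" + blobName;
    }

    static void Main()
    {
        Console.WriteLine(MakeS3Key("backups/2013", "test.png"));
        // backups/2013/test.png
        Console.WriteLine(MakeS3Key("", "test.png"));
        // test.png
    }
}
```

Tools like the AWS console then reconstruct the illusion of folders by listing keys with a common prefix, which is why the convention is so entrenched.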