AzureCopy Go now with added CopyBlob flag

Azurecopy (Go version) 0.2.2 has now been released. The major benefit is that when copying to Azure we can now use the absolutely AWESOME CopyBlob functionality Azure provides. This allows blobs to be copied from S3 (for example) to Azure without the data having to go via the machine executing the instructions (and using my bandwidth!).

An example of copying from S3 to Azure is as simple as:

azurecopycommand_windows_amd64.exe -S3DefaultAccessID="S3 Access ID" -S3DefaultAccessSecret="S3 Access Secret" -S3DefaultRegion="us-west-2" -dest="https://myaccount.blob.core.windows.net/mycontainer/" -AzureDefaultAccountName="myaccount" -AzureDefaultAccountKey="Azure key" -source=https://s3.amazonaws.com/mybucket/ -copyblob

The key thing is the -copyblob flag. This tells AzureCopy to do its magic!
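For the curious, CopyBlob is under the hood a single REST call made against the destination storage account: Azure itself pulls the data from the source URL, so nothing streams through the machine running AzureCopy. A rough sketch of that call using curl (the account, container, blob names and SAS token here are all placeholders, and the S3 source has to be readable by Azure, e.g. public or a pre-signed URL):

```shell
# Ask Azure to pull the blob server-side; no data flows through this machine.
# myaccount / mycontainer / myblob and the SAS token are hypothetical placeholders.
curl -X PUT \
  -H "x-ms-copy-source: https://s3.amazonaws.com/mybucket/myblob" \
  -H "x-ms-version: 2015-02-21" \
  -H "Content-Length: 0" \
  "https://myaccount.blob.core.windows.net/mycontainer/myblob?<SAS token>"
```

The copy is asynchronous on Azure's side; the response carries an x-ms-copy-status header (pending/success) that can be polled.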

By default AzureCopy-Go will copy 5 blobs concurrently, so as not to overload your own bandwidth; but with the Azure CopyBlob feature, feel free to crank that setting up using the -cc flag (e.g. add -cc=20). :)
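Putting it together with the earlier example (same placeholder credentials as above), bumping the concurrent copy count to 20 looks like:

```shell
azurecopycommand_windows_amd64.exe -S3DefaultAccessID="S3 Access ID" -S3DefaultAccessSecret="S3 Access Secret" -S3DefaultRegion="us-west-2" -AzureDefaultAccountName="myaccount" -AzureDefaultAccountKey="Azure key" -source=https://s3.amazonaws.com/mybucket/ -dest="https://myaccount.blob.core.windows.net/mycontainer/" -copyblob -cc=20
```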

18 thoughts on “AzureCopy Go now with added CopyBlob flag”

  1. Hello, thank you for providing this tool.
    While trying the new azurecopy, I got the error below. Could you please help with this?
    $ azurecopycommand_windows_amd64.exe -list “https://s3.amazonaws.com/tapad-thinkcx”
    Error message below:
    time=”2017-05-09T09:55:48-07:00″ level=fatal msg=”ERR OpenFile open \\: The system cannot find the path specified.”

    • And when I try with the command below, I get a 403 error.
      $ azurecopycommand_windows_amd64.exe -list -source=”https://s3.amazonaws.com/tapad-thinkcx/” -S3DefaultAccessID=”S3 Access ID” -S3DefaultAccessSecret=”S3 Secret Key” -S3DefaultRegion=”us-east-1″
      ERROR MSG:
      time=”2017-05-09T09:58:17-07:00″ level=info msg=”&{ 0xc042039200 0xc0421304f0 }”
      time=”2017-05-09T09:58:17-07:00″ level=fatal msg=”Unable to get S3 buckets%!(EXTRA *awserr.requestError=AccessDenied: Access Denied\n\tstatus code: 403, request id: D5DD295380FB0B6F)”

      • Hi

        Just a couple of things: are you 100% certain that your AccessID and AccessSecret are correct?
        Also, I notice that the double quotes around the example string you've pasted don't quite look correct. They appear to be the “smart quote” character ” instead of a regular straight quote ("). Can you retry your command, making sure to use regular double quotes?

        Thanks

        Ken

  2. Hiya,
    I am trying to use your application to move contents from S3 to Azure. I am using the following command.

    azurecopycommand_windows_amd64.exe -copyblob -S3DefaultAccessID="MyId" -S3DefaultAccessSecret="MySecret" -S3DefaultRegion="eu-west-1" dest="MyAzureDest/Temp/" -AzureDefaultAccountName="MyAccountName" -AzureDefaultAccountKey="MyAccountKey" -source="MyS3Dest/MyBucket/Temp/"

    But I am getting the following error,
    goroutine 1 [running]:
    panic(0x729140, 0xc042004050)
    C:/Users/kenfa/Packages/Go/src/runtime/panic.go:500 +0x1af
    azurecopy/azurecopy.(*AzureCopy).CopyBlobByURL(0xc04205e580, 0x101, 0x5, 0x1)
    C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/azurecopy.go:117 +0x282
    main.main()
    C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopycommand/main.go:179 +0x455

    Is the way I am running it wrong? Thanks.

    • Hi

      Just to clarify, are the real dest and source entries full URLs? eg. source would be something like “https://s3.amazonaws.com/mybucket/” and dest would be something like “https://kentest.blob.core.windows.net/mycontainer/” ??

      Thanks

      Ken

      • Hiya Ken,
        Yes, they are, but I am giving a folder after the bucket name in S3; is that supported, or is it only the root bucket I am supposed to provide? Wait a minute, the same error happens in my list command as well, with or without the added folder in the URL after my root bucket.

  3. Yes, you can have the virtual directory entries after the bucket name (they're really just blob name prefixes). They work fine and will be reflected on the Azure side as well.

    If you're having a problem with the listing of S3, I'm guessing it's some other configuration issue.
    For example, if I want to list out my S3 bucket, I use the parameters:

    -S3DefaultAccessID="XXXX" -S3DefaultAccessSecret="YYYY" -S3DefaultRegion="us-west-2" -list -source="https://s3.amazonaws.com/mybucket/ken1/"

    In this case “ken1/” is just the “virtual directory” in the S3 file hierarchy. (which I believe is what you want to do).

    What is the error when you try to do the list? (It can't be the same error, since the stacktrace you pasted was purely from a copying function.)

    Thanks

    Ken

    • This is the error I get when I try listing

      panic: runtime error: slice bounds out of range

      goroutine 1 [running]:
      panic(0x729140, 0xc04200a040)
      C:/Users/kenfa/Packages/Go/src/runtime/panic.go:500 +0x1af
      azurecopy/azurecopy/handlers.generateBasePath(0xc04200e198, 0x43, 0xc04213db01, 0x110000c04213dba0, 0x0, 0xc04200e230)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/handlers/FilesystemHandler.go:41 +0x143
      azurecopy/azurecopy/handlers.NewFilesystemHandler(0xc04200e198, 0x43, 0x783301, 0x6, 0xc04210a838, 0xc04216e5a0)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/handlers/FilesystemHandler.go:55 +0x5a
      azurecopy/azurecopy/utils.GetHandler(0x5, 0xc04200e101, 0xc0420f90e0, 0x0, 0x1, 0x1, 0x5, 0x1, 0x0, 0x0)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/utils/handlerutils.go:31 +0x4aa
      azurecopy/azurecopy.(*AzureCopy).GetHandlerForURL(0xc042066500, 0xc04200e198, 0x43, 0x101, 0xc04210a800, 0x476c01)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/azurecopy.go:305 +0x9c
      azurecopy/azurecopy.NewAzureCopy(0xc0420f90e0, 0x0, 0x1, 0x1, 0x5, 0x60)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/azurecopy.go:54 +0x1e8
      main.main()
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopycommand/main.go:168 +0x21c

  4. Do the dots in the bucket name throw it off?

    azurecopycommand_windows_amd64.exe -S3DefaultAccessID="MyAccessId" -S3DefaultAccessSecret="MyAccessSecret" -S3DefaultRegion="eu-west-1" -source="https://s3-eu-west-1.amazonaws.com/some.bucketname.com/Temp/" -list

  5. Don't worry about the bug; this is, after all, provided as-is, and I am the one trying to use it. 🙂 Thank you for being so proactive and quick with your responses.

    I think we have definitely made some progress, because now the list command returns:
    time="2017-06-12T07:44:40+05:30" level=info msg="&{ 0xc0420394a0 0xc042121140 }"
    S3 specific container &{Temp 0 false 0xc0420301c0 [] [] false false}
    S3 specific container name Temp
    +Temp

    where Temp is my AWS bucket's folder name. Is this expected? I was expecting the file names to be listed here.

    • The copy also gives me the following:
      panic: runtime error: slice bounds out of range

      goroutine 1 [running]:
      azurecopy/azurecopy.(*AzureCopy).CopyBlobByURL(0xc042054800, 0x101, 0x5, 0x1)
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/azurecopy.go:124 +0x291
      main.main()
      C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopycommand/main.go:179 +0x341

  6. Hi

    hmmm yes, you should get a response like:

    +ken1
    test2(https://ken1.s3.amazonaws.com/ken1/test2)
    + fred
    ssdsd(https://ken1.s3.amazonaws.com/ken1/fred/ssdsd)
    test.txt(https://ken1.s3.amazonaws.com/ken1/fred/test.txt)
    + aaa
    ssdsd(https://ken1.s3.amazonaws.com/ken1/fred/aaa/ssdsd)

    In this case it's showing the “virtual” directories (ken1, fred, aaa) and the blobs inside them. Again, this is just a representation that shows the virtual directory structure.

    How many blobs do you have in your account? I just noticed that out-of-range error you posted. Will investigate.

  7. This is what I get when I debug the listing
    time="2017-06-12T10:36:24+05:30" level=debug msg="after config setup"
    time="2017-06-12T10:36:24+05:30" level=debug msg="Got Filesystem Handler"
    time="2017-06-12T10:36:24+05:30" level=debug msg="rootContainerPath "
    time="2017-06-12T10:36:24+05:30" level=debug msg="Got Filesystem Handler"
    time="2017-06-12T10:36:24+05:30" level=debug msg="rootContainerPath "
    time="2017-06-12T10:36:24+05:30" level=debug msg="Listing contents of "
    time="2017-06-12T10:36:24+05:30" level=debug msg="rootContainerPath "
    time="2017-06-12T10:36:24+05:30" level=fatal msg="ERR OpenFile open $RECYCLE.BIN\\: The system cannot find the file specified."

    This is what I get when I debug the copy
    time="2017-06-12T10:38:45+05:30" level=debug msg="after config setup"
    time="2017-06-12T10:38:45+05:30" level=debug msg="Got Filesystem Handler"
    time="2017-06-12T10:38:45+05:30" level=debug msg="rootContainerPath "
    time="2017-06-12T10:38:45+05:30" level=debug msg="Got Filesystem Handler"
    time="2017-06-12T10:38:45+05:30" level=debug msg="rootContainerPath "
    time="2017-06-12T10:38:45+05:30" level=debug msg="CopyBlobByURL sourceURL "
    panic: runtime error: slice bounds out of range

    goroutine 1 [running]:
    azurecopy/azurecopy.(*AzureCopy).CopyBlobByURL(0xc042054800, 0x101, 0x5, 0x1)
    C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopy/azurecopy.go:124 +0x291
    main.main()
    C:/Users/kenfa/projects/gopath/src/azurecopy/azurecopycommand/main.go:179 +0x341

    I only have about 7 small files in that directory, just to check this out.
    Thanks again for keeping on top of this… !

  8. VERY confused that it’s trying to copy from the recycle bin!
    Were you still trying to copy from S3 to Azure for that one?
    If so, can you post the command you used again? (assuming it changed a little from the last one). It's still trying to use the filesystem for some reason, so I want to confirm the parameters.

    Thanks

    Ken
