Migrate private S3-hosted objects to GCS

Hi,

I have around 1.2 PB of data residing in S3 with private access. I would like to know whether it is possible to migrate the objects to GCS over a private channel.

Thanks in advance 

Regards,
Ganesh


As shown in the documentation, it is recommended to use Storage Transfer Service to move or back up your data from other cloud storage providers to Cloud Storage. Supported cloud storage providers include Amazon S3 and Microsoft Azure Blob Storage.

Note that transfers from Amazon S3, Microsoft Azure, URL lists, or Cloud Storage to Cloud Storage do not require agents and agent pools.
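If you prefer to script the job rather than click through the console, the same transfer can be created programmatically. Below is a minimal sketch using the google-cloud-storage-transfer Python client; the project ID, bucket names, and AWS credentials are placeholders, and you should verify the fields against the current API reference for your setup.

```python
from datetime import datetime, timezone

from google.cloud import storage_transfer  # pip install google-cloud-storage-transfer


def create_s3_to_gcs_job(project_id: str, source_bucket: str, sink_bucket: str,
                         aws_access_key_id: str, aws_secret_access_key: str) -> None:
    """Create a one-time Storage Transfer Service job from a (private) S3 bucket to GCS."""
    client = storage_transfer.StorageTransferServiceClient()

    today = datetime.now(timezone.utc)
    run_date = {"day": today.day, "month": today.month, "year": today.year}

    job = {
        "project_id": project_id,
        "description": "Migrate private S3 bucket to GCS",
        "status": storage_transfer.TransferJob.Status.ENABLED,
        # Same start and end date => the job runs once.
        "schedule": {
            "schedule_start_date": run_date,
            "schedule_end_date": run_date,
        },
        "transfer_spec": {
            # Credentials that can read the private S3 bucket (console step 3.d below).
            "aws_s3_data_source": {
                "bucket_name": source_bucket,
                "aws_access_key": {
                    "access_key_id": aws_access_key_id,
                    "secret_access_key": aws_secret_access_key,
                },
            },
            "gcs_data_sink": {"bucket_name": sink_bucket},
        },
    }

    result = client.create_transfer_job({"transfer_job": job})
    print(f"Created transfer job: {result.name}")
```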

Before configuring your transfers, make sure you have configured access on both sides: AWS credentials or a role that can read the source bucket, and permissions for the Storage Transfer Service service agent on the destination Cloud Storage bucket.
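For the Cloud Storage side, a hedged sketch of that grant is below, assuming the google-cloud-storage and google-cloud-storage-transfer Python clients and using a broad role (roles/storage.admin) on the bucket for simplicity; in practice you would narrow this to the permissions listed in the access-configuration documentation.

```python
from google.cloud import storage, storage_transfer


def grant_sts_agent_access(project_id: str, sink_bucket_name: str) -> None:
    """Give the project's Storage Transfer Service service agent access to the destination bucket."""
    sts_client = storage_transfer.StorageTransferServiceClient()

    # Look up the Google-managed service agent that Storage Transfer Service uses for this
    # project (e.g. project-NUMBER@storage-transfer-service.iam.gserviceaccount.com).
    agent = sts_client.get_google_service_account({"project_id": project_id})

    storage_client = storage.Client(project=project_id)
    bucket = storage_client.bucket(sink_bucket_name)

    # Add the service agent to the bucket's IAM policy.
    # roles/storage.admin is broader than strictly required; narrow it per the docs.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.admin",
        "members": {f"serviceAccount:{agent.account_email}"},
    })
    bucket.set_iam_policy(policy)
```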

Then you can create transfers:

    1. Go to the Storage Transfer Service page in the Google Cloud console.
    2. Click Create transfer job. The Create a transfer job page is displayed.
    3. Choose a source:
      a. Under Source type, select Amazon S3.
      b. Click Next step.
      c. In the Bucket name field, enter the source bucket name.
        The bucket name is the name as it appears in the AWS Management Console.
      d. Select your Amazon Web Services (AWS) authentication method. You can provide an AWS access key or an Amazon Resource Name (ARN) for identity federation:
        • Access key: Enter your access key in the Access key ID field and the secret associated with your access key in the Secret access key field.
        • ARN: Enter your ARN in the AWS IAM role ARN field, with the following syntax:
          arn:aws:iam::ACCOUNT:role/ROLE-NAME-WITH-PATH
          Where:
          • ACCOUNT: the AWS account ID with no hyphens.
          • ROLE-NAME-WITH-PATH: the AWS role name, including its path.
          For more information on ARNs, see IAM ARNs.
      e. Click Next step.
    4. Choose a destination:
      a. In the Bucket or folder field, enter the destination bucket and (optionally) folder name, or click Browse to select a bucket from a list of existing buckets in your current project. To create a new bucket, click Create new bucket.
      b. Click Next step.
      c. Choose settings for the transfer job. Some options are only available for certain source/sink combinations. (If you script the job instead, these settings and the scheduling choices below map to the transferOptions and schedule fields shown in the sketch after this list.)
        • In the Description field, enter a description of the transfer. As a best practice, enter a description that is meaningful and unique so that you can tell jobs apart.
        • Under Metadata options, choose to use the default options, or click View and select options to specify values for all supported metadata. See Metadata preservation for details.
        • Under When to overwrite, select one of the following:
          • If different: Overwrites destination files if the source file with the same name has different ETags or checksum values.
          • Always: Always overwrites destination files when the source file has the same name, even if they're identical.
        • Under When to delete, select one of the following:
          • Never: Never delete files from either the source or destination.
          • Delete files from source after they're transferred: Delete files from the source after they're transferred to the destination.
            Important: If you don't have a local backup, this action is not reversible.
          • Delete files from destination if they're not also at source: If files in the destination Cloud Storage bucket aren't also in the source, delete them from the Cloud Storage bucket. This option ensures that the destination Cloud Storage bucket exactly matches your source.
        • Under Notification options, select your Pub/Sub topic and which events to notify for. See Pub/Sub notifications for more details.
      d. Click Next step.
    5. Choose your scheduling options:
      Note: Storage Transfer Service displays transfer job schedules in your local timezone, but stores those times in Coordinated Universal Time (UTC). If you are affected by Daylight Saving Time (DST), you might see a transfer job's schedule shift when DST starts or ends.
      a. From the Run once drop-down list, select one of the following:
        • Run once: Runs a single transfer, starting at a time that you select.
        • Run every day: Runs a transfer daily, starting at a time that you select. You can enter an optional End date, or leave End date blank to run the transfer continually.
        • Run every week: Runs a transfer weekly, starting at a time that you select.
        • Run with custom frequency: Runs a transfer at a frequency that you select. You can choose to repeat the transfer at a regular interval of Hours, Days, or Weeks. You can enter an optional End date, or leave End date blank to run the transfer continually.
      b. From the Starting now drop-down list, select one of the following:
        • Starting now: Starts the transfer after you click Create.
        • Starting on: Starts the transfer on the date and time that you select. Click Calendar to display a calendar and select the start date.
      c. To create your transfer job, click Create.
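As referenced in the settings step above, the console's overwrite/delete and scheduling choices correspond to fields on the transfer job body. A rough mapping is sketched below, assuming the same Python client and field names as the earlier sketch; the dates, times, and boolean values are placeholders, so verify the exact field names and semantics against the current API reference.

```python
# Optional fields that can be merged into the "job" dict from the earlier sketch.

job_options = {
    "transfer_spec": {
        # "When to overwrite" / "When to delete" in the console:
        "transfer_options": {
            "overwrite_objects_already_existing_in_sink": True,   # roughly the "Always" option
            "delete_objects_from_source_after_transfer": False,   # leave the S3 source untouched
            "delete_objects_unique_in_sink": False,               # keep destination-only objects
        },
    },
    # "Choose your scheduling options" in the console:
    "schedule": {
        "schedule_start_date": {"day": 1, "month": 1, "year": 2023},
        # Leave schedule_end_date unset to keep repeating; times are stored in UTC.
        "start_time_of_day": {"hours": 2, "minutes": 0, "seconds": 0, "nanos": 0},
        "repeat_interval": {"seconds": 24 * 60 * 60},  # run daily
    },
}
```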
Note: Storage Transfer Service is currently available to transfer data from the following Amazon S3 regions: ap-east-1, ap-northeast-1, ap-northeast-2, ap-south-1, ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-north-1, eu-west-1, eu-west-2, eu-west-3, me-south-1, sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2.

Service Level Agreement

Storage Transfer Service currently does not provide an SLA, and some performance fluctuations may occur. For example, we do not provide SLAs for transfer performance or latency.

Whether it is a private or a public S3 bucket, it should still work with STS. Is that correct?

That is correct. In step 3.d, an Amazon authentication method is required, which will give access to the S3 buckets, and as long as the provided credentials have access to those S3 buckets, there will be no problem with the transfer whether they are private or public buckets.
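For a private bucket specifically, the AWS credentials (or role) supplied in step 3.d only need read access to that bucket. A minimal sketch of such a policy, attached to a hypothetical IAM user with boto3, might look like the following; the user name, policy name, and bucket ARN are assumptions, and s3:DeleteObject is only needed if you choose to delete source objects after transfer.

```python
import json

import boto3  # pip install boto3

# Minimal S3 permissions for Storage Transfer Service to read a private bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::my-private-source-bucket",
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                # "s3:DeleteObject",  # only if "Delete files from source" is selected
            ],
            "Resource": "arn:aws:s3:::my-private-source-bucket/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="storage-transfer-user",     # hypothetical IAM user whose access key you give to STS
    PolicyName="sts-read-private-bucket",
    PolicyDocument=json.dumps(policy),
)
```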

Iva

Whether it is a private or public S3 bucket, third-party tools like Gs Richcopy 360, AvePoint, and Cloudsfer can also migrate S3 objects directly and securely to GCS.