How To Upload Local Files to AWS S3 with the AWS CLI


This blog post will walk you through the steps of uploading files to AWS S3 using the Amazon Web Services Command Line Interface (AWS CLI).

When dealing with Amazon S3 (Simple Storage Service), you’re probably using the S3 web console to download, copy, or upload files to S3 buckets. Using the console is perfectly fine; after all, that’s what it was designed for.

The web interface is arguably the simplest, especially for administrators who are more accustomed to mouse clicks than keyboard commands. However, administrators will eventually need to perform bulk file operations with Amazon S3, such as an unattended file upload. The graphical user interface isn’t the ideal tool for that.

For such automation needs, the AWS CLI tool provides administrators with command-line options for managing Amazon S3 buckets and objects.

In this post, you’ll learn how to upload, copy, download, and synchronize files with Amazon S3 using the AWS CLI command-line tool. You’ll also learn how to set up an access profile for your S3 bucket and configure it to operate with the AWS CLI tool.

Prerequisites

Since this is a how-to article, the following sections include examples and demos. To follow along properly, you’ll need to satisfy several prerequisites.

  • An AWS account. If you don’t already have an AWS account, you can sign up for the AWS Free Tier.
  • An AWS S3 bucket. You may use an existing bucket if you like, but it’s recommended to create an empty bucket instead. Please see Creating a Bucket for further information.
  • A PC running Windows 10 with at least Windows PowerShell 5.1 installed. PowerShell 7.0.2 will be used in this tutorial.
  • Your PC must have the AWS CLI version 2 utility installed (you can verify the installation with the command shown after this list).
  • Local folders and files that you’ll synchronize or upload to Amazon S3.
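
Before proceeding, you can confirm that the AWS CLI is installed by checking its version in PowerShell. The exact version string in the output will vary with your installation:

aws --version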

Getting Your AWS S3 Access Ready

Assume you already have the prerequisites in place. You’d think you could use the AWS CLI with your S3 bucket right away. Wouldn’t it be nice if it were that easy?

This section intends to assist users who are just getting started with Amazon S3 or AWS in general in setting up access to S3 and configuring an AWS CLI profile.

In your AWS account, create an IAM user. This link will take you to the complete documentation for creating an IAM user on AWS.

Adding S3 Access Permissions to an IAM User

When using the CLI to access AWS, you’ll need to create one or more IAM users with sufficient access to the resources you’ll be working with. In this step, you’ll create an IAM user with access to Amazon S3.

To create an IAM user with access to Amazon S3, first go to your AWS IAM console. Select Users under the Access management group, then click Add user.

Menu for IAM Users

In the User name* box, type the name of the IAM user you’re creating, such as s3Admin. Under Access type*, select Programmatic access. Then click Next: Permissions.

Set the IAM user’s information

Next, click Attach existing policies directly. Then search for the AmazonS3FullAccess policy name and check it. When you’re finished, click Next: Tags.

Assign permissions to the IAM user.

On the Add tags page, creating tags is optional; you can simply click Next: Review instead.

Tags for IAM users

On the Review page, you’ll see a summary of the new account you’re creating. Click Create user.

Summary of IAM users

Finally, once the user has been created, copy the Access key ID and Secret access key values and save them for later use. This is the only time you’ll be able to view these values.

Credentials for IAM users

Creating an Amazon Web Services (AWS) Profile on Your Computer

Now that you’ve created the IAM user with the appropriate access to Amazon S3, the next step is to set up the AWS CLI profile on your computer.

This part assumes you’ve already installed the AWS CLI version 2 tool as required. You’ll need the following information to create your profile:

  • The IAM user’s access key ID.
  • The IAM user’s secret access key.
  • The default region name, which matches your AWS S3 bucket’s location. This link will take you to the list of endpoints. The AWS S3 bucket in this article is located in the Asia Pacific (Sydney) region, whose endpoint is ap-southeast-2.
  • The default output format. Use json for this tutorial.

Open PowerShell and enter the command below, then follow the prompts to create the profile.

aws configure

You’ll be prompted for the Access key ID, Secret access key, Default region name, and Default output format. Please see the example below.
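
For reference, a completed aws configure session looks similar to the following. The access key values shown here are AWS’s documented placeholder examples, not real credentials:

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFiHEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ap-southeast-2
Default output format [None]: json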

Set up an AWS CLI profile.

Testing AWS CLI Access

Once you’ve configured the AWS CLI profile, you can verify that it is working by executing the command below in PowerShell.

aws s3 ls

The command above should return a list of the Amazon S3 buckets in your account, as shown in the demo below. Getting a list of the available S3 buckets indicates that the profile configuration was successful.

Listing the S3 buckets

Visit the AWS CLI Command Reference S3 page to learn about the AWS CLI commands that are specific to Amazon S3.
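
You can also browse the same reference material without leaving the shell, since the AWS CLI has built-in help pages for every command:

aws s3 help
aws s3 cp help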

Using S3 to Manage Files

Typical file management tasks such as uploading files to S3, downloading files from S3, deleting objects in S3, and moving S3 objects to another S3 location may all be done using the AWS CLI. Knowing the proper command, syntax, arguments, and options is all that is required.

The environment used in the next sections consists of the following components.

  • Two S3 buckets, named atasync1 and atasync2. The screenshot below shows the existing S3 buckets in the Amazon S3 console.

Existing S3 buckets in the Amazon S3 console

  • The local folder c:\sync, which contains the directories and files to upload.

Local directory

Individually uploading files to S3

When uploading files to S3, you have the option of uploading one file at a time or recursively uploading numerous files and directories. Depending on your needs, you may choose one over the other.

To upload a file to S3, you’ll need to use the aws s3 cp command with two parameters (source and destination).
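
In its general form, the command takes a source and a destination, either of which can be a local path or an S3 URI:

aws s3 cp <source> <destination>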

You may use the command below to upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket.

aws s3 cp c:\sync\logs\log1.xml s3://atasync1/

When using the AWS CLI, S3 bucket names are always prefixed with s3://. For example, s3://atasync1/logs/log1.xml refers to the object logs/log1.xml inside the bucket atasync1.

Change the source and destination to match your environment before running the command in PowerShell. The result should look like the example below.

Upload the file to Amazon S3.

The file c:\sync\logs\log1.xml was successfully uploaded to the S3 destination s3://atasync1/ in the example above.

To list the objects at the root of the S3 bucket, use the command below.

aws s3 ls s3://atasync1/

When you run the command above in PowerShell, you’ll see output similar to the example below. As the output shows, the file log1.xml is present at the root of the S3 location.

List the uploaded file in S3.

Recursively uploading many files and folders to S3

You learned how to transfer a single file to an S3 location in the previous section. What if you want to upload many files from a folder and its subfolders? You wouldn’t want to execute the same command for various filenames several times, right?

The --recursive option of the aws s3 cp command allows you to process files and directories recursively.

The directory c:\sync, for example, has 166 items (files and sub-folders).

Multiple files and subfolders are included in this folder.

All of the contents of the c:\sync folder will be uploaded to S3 using the --recursive option, while the folder structure is preserved. Use the sample code below to test, but be sure to adjust the source and destination to suit your needs.

The source is c:\sync, and the destination is s3://atasync1/sync, as you can see in the code below. The /sync key after the S3 bucket name tells the AWS CLI to upload the files to the /sync folder in S3. If the /sync folder does not already exist in S3, it will be created for you.

aws s3 cp c:\sync s3://atasync1/sync --recursive

The output from the code above is given in the example below.

Multiple files and folders may be uploaded to S3.
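
To verify the recursive upload, you can list everything under the /sync key. The --recursive option also works with the aws s3 ls command:

aws s3 ls s3://atasync1/sync --recursive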

Selectively uploading multiple files and folders to S3

Uploading ALL types of files isn’t always the best option. For example, you may only need to upload files with specific file extensions (e.g., *.ps1). The cp command offers two more options for this: --include and --exclude.

Unlike the code in the previous section, which includes all files in the recursive upload, the command below includes only files with the *.ps1 file extension and excludes everything else. Note that filters are applied in the order they appear, with later filters taking precedence, which is why --exclude * comes before --include *.ps1.

aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.ps1

The demo below shows how the code above works when run.

Files with a specified file extension were uploaded.

If you wish to include multiple different file extensions, you’ll need to specify the --include option multiple times.

The copy command in the example below will only include the *.csv and *.png files.

aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.csv --include *.png

When you run the code above in PowerShell, you’ll get a similar result, as seen below.

Upload files with multiple include options.

Downloading Objects from S3

You can also reverse the copy operations using the examples you’ve learned in this section. In other words, you can download objects from the S3 bucket to your local machine.

Copying from S3 to local requires switching the source and destination locations. The S3 location is the source, and the local path, as illustrated below, is the destination.

aws s3 cp s3://atasync1/sync c:\sync

Note that the same options used when uploading files to S3 also apply when downloading objects from S3 to a local path. For example, the command below downloads all objects by using the --recursive option.

aws s3 cp s3://atasync1/sync c:\sync --recursive

Copying Objects Between S3 Locations

In addition to uploading and downloading files and folders, you can use the AWS CLI to copy or move files between two S3 bucket locations.

In the example below, you’ll see that the command uses one S3 location as the source and another S3 location as the destination.

aws s3 cp s3://atasync1/Log1.xml s3://atasync2/

In the example below, the source file is copied to the other S3 location using the command above.

Objects may be copied from one S3 location to another.
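
The cp command copies the object and leaves the source in place. If you want to move the object instead, the AWS CLI also provides the aws s3 mv command, which uses the same source and destination syntax and removes the source object after copying:

aws s3 mv s3://atasync1/Log1.xml s3://atasync2/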

S3 File and Folder Synchronization

So far, you’ve learned how to use the AWS CLI commands to upload, download, and copy files in S3. In this section, you’ll learn about the sync command, another file operation available in the AWS CLI for S3. The sync command processes only new, updated, and (optionally) deleted files.

There are cases where you need to keep the contents of an S3 bucket updated and synchronized with a local directory on a server. For example, you may have a requirement to keep transaction logs on a server synchronized to S3 at an interval.

The command below will sync the *.xml log files from the c:\sync folder on the local server to the S3 location s3://atasync1/.

aws s3 sync c:\sync s3://atasync1/ --exclude * --include *.xml

After running the command above in PowerShell, all *.xml files were uploaded to the S3 location s3://atasync1/, as seen below.

Local files to S3 synchronization
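
To run such a sync at an interval, one option is a simple PowerShell loop like the sketch below. The five-minute interval is an assumption for illustration; for unattended production use, a scheduled task would be the more robust choice.

while ($true) {
    # Sync only the *.xml log files from the local folder to the S3 bucket
    aws s3 sync c:\sync s3://atasync1/ --exclude * --include *.xml
    # Wait five minutes before the next sync run (assumed interval)
    Start-Sleep -Seconds 300
}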

Syncing New and Updated Files to S3

In the following example, the contents of the log file Log1.xml have been changed. The sync command should pick up that modification and upload the changes made to the local file to S3, as shown in the demo below.

The command is the same as it was in the previous example.

Synchronize changes to S3.

Because just the file Log1.xml was modified locally, it was also the only file synced to S3, as you can see from the output above.

Synchronizing Deletions to S3

The sync command does not handle deletions by default. Files deleted from the source location are not removed from the destination location. That is, unless you use the --delete option.

In this example, the file named Log5.xml has been deleted from the source. The --delete option is added to the sync command, as shown in the code below.

aws s3 sync c:\sync s3://atasync1/ --exclude * --include *.xml --delete

When you run the command above in PowerShell, the deleted file Log5.xml should likewise be removed from the target S3 location. The following is an example of the result.

Synchronize file deletions to S3.
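
Because --delete removes objects from the destination, consider previewing the operation before running it for real. The AWS CLI supports a --dryrun option that displays the operations the command would perform without actually running them:

aws s3 sync c:\sync s3://atasync1/ --exclude * --include *.xml --delete --dryrun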

Summary

Amazon S3 is a great place to store your files in the cloud. The AWS CLI tool expands the ways you can use Amazon S3 and allows you to automate your workflows.

In this tutorial, you learned how to use the AWS CLI tool to upload, download, and synchronize files and folders between local locations and S3 buckets. You also learned that the contents of S3 buckets can be copied or moved to other S3 locations.

There are many more scenarios where the AWS CLI tool can automate file management with Amazon S3. You can also combine it with PowerShell scripting to create your own reusable tools or modules, such as the sketch below. It is up to you to seek out those opportunities and demonstrate your skills.
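
As a starting point, here is a minimal sketch of such a reusable wrapper. The function name, parameters, and default values are illustrative only, not from this article:

function Sync-LogsToS3 {
    param(
        # Local folder to synchronize (default taken from this article's examples)
        [string]$LocalPath = 'c:\sync',
        # Destination S3 URI (default taken from this article's examples)
        [string]$Destination = 's3://atasync1/'
    )
    # Sync only the *.xml log files from the local path to the S3 destination
    aws s3 sync $LocalPath $Destination --exclude * --include *.xml
}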
