Quick Facts
- Category: Cloud Computing
- Published: 2026-05-01 09:04:23
Introduction
Amazon S3 Files revolutionizes how you interact with object storage by making your S3 buckets accessible as high-performance file systems. This guide walks you through the process of mounting an S3 bucket on AWS compute resources—whether EC2 instances, containers (ECS/EKS), or Lambda functions—using the new S3 Files feature. By the end, you'll have a working file system that automatically syncs changes to S3, supports NFS v4.1+ operations, and offers intelligent pre-fetching for optimal performance. No more choosing between object storage economics and file system interactivity.

What You Need
- AWS Account with permissions to create and manage S3 buckets, EC2 instances, or container clusters.
- Basic familiarity with the AWS Management Console or AWS CLI.
- An existing S3 general purpose bucket (or create a new one).
- Compute resource (EC2 instance, ECS task, EKS pod, or Lambda function) running a Linux operating system with NFS client support.
- Network connectivity between the compute resource and the S3 bucket (usually within the same region).
- IAM role for your compute resource with the minimum required permissions: `s3:ListBucket`, `s3:GetObject`, `s3:PutObject`, `s3:DeleteObject` (adjust for your use case).
Step-by-Step Guide
Step 1: Enable S3 Files on Your Bucket
Before mounting, you must enable the S3 Files filesystem on the target S3 bucket. This is a one-time configuration.
- Open the S3 Console and select your bucket.
- Go to the Properties tab and locate the section S3 Files.
- Click Enable S3 Files. Confirm any prompts.
- (Optional) Configure high-performance storage settings: choose whether to load full file data or metadata only. This affects caching behavior.
Tip: For most workloads, leaving the default settings is fine. Tune later based on access patterns (see Tips).
Step 2: Attach an IAM Role to Your Compute Resource
Your compute resource needs permissions to access the S3 bucket via the S3 Files interface.
- Create or update an IAM role with a policy that grants at least `s3:ListBucket` and `s3:GetObject` on the bucket and its objects.
- Attach this role to your EC2 instance (instance profile), ECS task definition, EKS service account, or Lambda execution role.
- If you plan to write data back to S3, add `s3:PutObject` and `s3:DeleteObject` permissions.
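Put together, the permissions above might look like the following policy. This is a minimal sketch: `my-bucket` is a placeholder for your bucket name, and you can drop the second statement's write actions for read-only mounts.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3FilesListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "S3FilesObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions apply to `my-bucket/*`; mixing these up is a common cause of mount-time permission errors.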
Step 3: Install NFS Client on Your Compute Resource
S3 Files uses the NFS v4.1 protocol. Your compute resource must have an NFS client installed.
- For Amazon Linux 2/2023: `sudo yum install -y nfs-utils`
- For Ubuntu: `sudo apt update && sudo apt install -y nfs-common`
- For Containers: Ensure the NFS client is included in your Docker image (e.g., `apt-get install nfs-common`).
- For Lambda: You can use NFS in a custom runtime or container image with the client pre-installed.
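For containerized workloads, the client install can be baked into the image at build time. A minimal sketch for a Debian/Ubuntu base (the base image tag is illustrative, not a requirement):

```dockerfile
# Sketch: include the NFS v4 client in the image so the task/pod can mount S3 Files.
FROM ubuntu:24.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends nfs-common \
 && rm -rf /var/lib/apt/lists/*
```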
Step 4: Mount the S3 Bucket as a File System
Obtain the mount point from the S3 Files settings (usually an NFS export path) and mount it on your compute instance.
- Create a local mount point directory: `sudo mkdir -p /mnt/s3files`
- Mount using the NFS export path. The format is `mount.nfs4 -o sync,rw,hard,noatime <S3-Files-Endpoint>:/<bucket-name> /mnt/s3files`.
- To make the mount persistent across reboots, add an entry to `/etc/fstab` with the appropriate options.
Once mounted, interact with the directory like any local folder: list files, create directories, read/write files. All changes sync back to S3 automatically.
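The mount sequence above can be sketched as a script. The endpoint hostname below is a placeholder, not a real AWS address — substitute the export path shown in your bucket's S3 Files settings:

```shell
#!/bin/sh
# Sketch of Step 4. ENDPOINT and BUCKET are placeholders; override via environment.
ENDPOINT="${ENDPOINT:-s3files.example.amazonaws.com}"  # hypothetical endpoint
BUCKET="${BUCKET:-my-bucket}"
MOUNT_POINT="${MOUNT_POINT:-/mnt/s3files}"

# Build the mount command from its parts so it is easy to review before running.
build_mount_cmd() {
    # $1 = endpoint, $2 = bucket, $3 = local mount point
    echo "mount.nfs4 -o sync,rw,hard,noatime $1:/$2 $3"
}

echo "Would run: sudo $(build_mount_cmd "$ENDPOINT" "$BUCKET" "$MOUNT_POINT")"
# To actually mount:
#   sudo mkdir -p "$MOUNT_POINT"
#   sudo $(build_mount_cmd "$ENDPOINT" "$BUCKET" "$MOUNT_POINT")
# To persist across reboots, append a line like this to /etc/fstab:
#   s3files.example.amazonaws.com:/my-bucket  /mnt/s3files  nfs4  sync,rw,hard,noatime,_netdev  0 0
```

The `_netdev` option in the fstab line tells the init system to wait for networking before attempting the mount, which avoids boot-time failures on network file systems.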

Step 5: Verify and Test
Confirm the file system is working correctly.
- List the contents: `ls /mnt/s3files` – you should see your S3 objects.
- Create a test file: `echo 'Hello S3 Files' > /mnt/s3files/test.txt`
- Check the S3 console: the object `test.txt` should appear in your bucket.
- Modify the file using a text editor or `cat` – changes propagate to S3.
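The round-trip check above can be wrapped in a small helper. `TARGET` and `verify_roundtrip` are illustrative names, and you can dry-run the logic against any writable directory before pointing it at the mount:

```shell
#!/bin/sh
# Sketch of the Step 5 verification. TARGET defaults to the mount point used above.
TARGET="${TARGET:-/mnt/s3files}"

verify_roundtrip() {
    dir="$1"
    # Write a marker file, then read it back through the same path.
    echo 'Hello S3 Files' > "$dir/test.txt" || return 1
    grep -q 'Hello S3 Files' "$dir/test.txt"
}

# Usage (against the real mount):
#   verify_roundtrip "$TARGET" && echo "round trip OK"
```

If the write succeeds locally but `test.txt` never appears in the S3 console, check the IAM role for `s3:PutObject` before debugging the mount itself.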
Step 6: Attach to Multiple Compute Resources (Optional)
S3 Files supports concurrent access from multiple instances. Simply repeat Step 4 on any number of EC2/ECS/EKS/Lambda resources using the same NFS export path. All clients see consistent data, and S3 remains the single source of truth. No need to duplicate data across clusters.
Tips for Optimal Use
- Performance Tuning: By default, frequently accessed files are cached on high-performance storage. For large sequential reads, S3 Files serves directly from S3 to maximize throughput. If your workload involves many small files, consider pre-loading metadata to reduce latency.
- Intelligent Pre-fetching: Enable the intelligent pre-fetching option (available in bucket settings) to have the file system anticipate your access patterns. This reduces read latency for common operations.
- Cost Management: Only the data stored on high-performance storage incurs additional costs. Use the `metadata only` setting for archives or rarely accessed data. Monitor storage usage with CloudWatch metrics.
- Security: Always use IAM roles instead of long-term credentials. Restrict NFS access via security groups (ensure port 2049 is open only to necessary sources).
- Backward Compatibility: You can still use S3 via API/console while the file system is mounted. Changes made from any interface are reflected in real time.
- Logging: Enable S3 server access logs to audit file operations performed through the file system.
By following these steps, you can seamlessly integrate S3 into your existing workflows as a native file system, combining the durability and cost savings of object storage with the interactivity of a local filesystem.