Hot questions for Using Amazon S3 in upload

Question:

I create PDF docs in memory as OutputStreams. These should be uploaded to S3. My problem is that it's not possible to create a PutObjectRequest from an OutputStream directly (according to this thread in the AWS dev forum). I use aws-java-sdk-s3 v1.10.8 in a Dropwizard app.

The two workarounds I can see so far are:

  1. Copy the OutputStream to an InputStream and accept that twice the amount of RAM is used.
  2. Pipe the OutputStream to an InputStream and accept the overhead of an extra thread (see this answer)

If I don't find a better solution I'll go with #1, because it looks as if I could afford the extra memory more easily than threads/CPU in my setup.

Is there any other, possibly more efficient way to achieve this that I have overlooked so far?

Edit: My OutputStreams are ByteArrayOutputStreams


Answer:

I solved this by subclassing ByteArrayOutputStream as a ConvertibleOutputStream:

public class ConvertibleOutputStream extends ByteArrayOutputStream {
    // Creates an InputStream without actually copying the buffer and using extra memory for that.
    public InputStream toInputStream(){
        return new ByteArrayInputStream(buf, 0, count);
    }
}
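As a usage sketch (the bucket, key, and the pdfWriter call are placeholders, not part of the original answer), the stream can then be handed to a PutObjectRequest together with an explicit content length so the SDK does not have to buffer it again:

ConvertibleOutputStream out = new ConvertibleOutputStream();
pdfWriter.writeTo(out); // hypothetical: however the PDF bytes are produced in memory

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(out.size());          // size() is inherited from ByteArrayOutputStream
metadata.setContentType("application/pdf");

s3Client.putObject(new PutObjectRequest("my-bucket", "docs/report.pdf", out.toInputStream(), metadata));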

Question:

I am trying to batch upload a couple of files to S3 using TransferManager. Below is my code:

@GetMapping("s3/batch/upload/base64")
public void uploadBase64ToWebp() {

    List<File> fileList = new ArrayList<>();
    String rawData = "1";
    String base64Data = Base64.encodeBase64String(rawData.getBytes(StandardCharsets.UTF_8));
    byte[] data = getBinaryImageData(base64Data);
    File file = new File("1234.webp");
    try {
        FileUtils.writeByteArrayToFile(file, data);
    } catch (IOException e) {
        System.out.println(e);
    }
    fileList.add(file);
    ObjectMetadataProvider metadataProvider = new ObjectMetadataProvider() {
        public void provideObjectMetadata(File file, ObjectMetadata metadata) {
            metadata.setContentType("image/webp");
            metadata.getUserMetadata().put("filename", file.getPath());
            metadata.getUserMetadata().put("createDateTime", new Date().toString());
        }
    };
    TransferManager transferManager = TransferManagerBuilder.standard().withS3Client(amazonS3).build();
    transferManager.uploadFileList(bucketName, "school/transactions", new File("."), fileList, metadataProvider);
}

private byte[] getBinaryImageData(String image) {
    return Base64.decodeBase64(
        image
            .replace("data:image/webp;base64,", "")
            .getBytes(StandardCharsets.UTF_8)
    );
}

Here, as you can see, I am giving the file name as '1234.webp', but the file name that gets saved in S3 is '34.webp'. I tried a longer name like '1234567.webp' and again the first two characters get truncated, so the file name is '34567.webp'. What am I doing wrong?

Please note that in the example I have pasted here I am only uploading one file, but in my actual code I upload multiple files; in both cases the names get truncated.


Answer:

OK, so it was a Java IO issue. I updated the code below to include the path and it worked.

Old:

File file = new File("1234.webp");

New:

File file = new File("./1234.webp");

Still trying to figure out why the first two letters got dropped. (It is most likely because uploadFileList derives each object key from the file's absolute path minus the length of the directory argument's absolute path; new File(".") resolves to a path two characters longer than the actual working directory, so two extra characters get chopped off the file name, whereas "./1234.webp" contains the same extra "./" and lines up again.)

Question:

I've been reading about TransferManager in Amazon's AWS SDK for doing S3 uploads. The provided API allows for non-blocking usage; however, it's unclear to me whether the underlying implementation actually does asynchronous I/O.

I did some reading on the source-code of TransferManager and I cannot understand if the threads in the provided ExecutorService are being blocked or not.

My problem is that if this manager actually does asynchronous I/O without blocking that executor, then I could use the application's global thread-pool that is meant for CPU-bound stuff. So is this actually doing asynchronous I/O or not?


Answer:

After profiling and trying to understand the SDK's source code, I have come to the conclusion that no, TransferManager does not perform asynchronous I/O: it piggybacks on AmazonS3Client.putObject and similar calls, and those calls, while not blocking the threads per se, loop until the HTTP requests are finished, thus preventing progress on the thread pool's queue.

Question:

I am trying to upload files to Amazon S3 storage using Amazon’s Java API for it. The code is

byte[] b = data.getBytes();
InputStream stream  = new ByteArrayInputStream(b);
//InputStream stream = new FileInputStream(new File("D:/samples/test.txt"));
AWSCredentials credentials = new BasicAWSCredentials("<key>", "<key1>");
AmazonS3 s3client = new AmazonS3Client(credentials);
s3client.putObject(new PutObjectRequest("myBucket",name,stream, new ObjectMetadata()));

When I run the code after commenting out the first two lines and uncommenting the third one, i.e. stream is a FileInputStream, the file is uploaded correctly. But when data is a base64-encoded String containing image data, the file is uploaded but the image is corrupted. Amazon's documentation says I need to create and attach a POST policy and signature for this to work. How can I do that in Java? I am not using an HTML form for uploading.


Answer:

First you should remove data:image/png;base64, from the beginning of the string:

Sample Code Block:

byte[] bI = org.apache.commons.codec.binary.Base64.decodeBase64((base64Data.substring(base64Data.indexOf(",")+1)).getBytes());

InputStream fis = new ByteArrayInputStream(bI);

AmazonS3 s3 = new AmazonS3Client();
Region usWest02 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest02);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bI.length);
metadata.setContentType("image/png");
metadata.setCacheControl("public, max-age=31536000");
s3.putObject(BUCKET_NAME, filename, fis, metadata);
s3.setObjectAcl(BUCKET_NAME, filename, CannedAccessControlList.PublicRead);

Question:

I have to iterate over 130 Data Transfer Objects, and each one generates a JSON file to be uploaded to AWS S3.

With no improvements, it takes around 90 seconds to complete the whole process. I tried with and without a lambda, with the same results for both.

for(AbstractDTO dto: dtos) {
    try {
        processDTO(dealerCode, yearPeriod, monthPeriod, dto);
    } catch (FileAlreadyExistsInS3Exception e) {
        failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
    }
}
dtos.stream().forEach(dto -> {
    try {
        processDTO(dealerCode, yearPeriod, monthPeriod, dto);
    } catch (FileAlreadyExistsInS3Exception e) {
        failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
    }
});

After some investigation, I concluded that the method processDTO takes around 0.650 seconds per item to run.

My first attempt was to use parallel streams, and the results were pretty good, taking around 15 seconds to complete the whole process:

dtos.parallelStream().forEach(dto -> {
    try {
        processDTO(dealerCode, yearPeriod, monthPeriod, dto);
    } catch (FileAlreadyExistsInS3Exception e) {
        failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
    }
});

But I still need to decrease that time. I researched about improving parallel streams, and discovered the ForkJoinPool trick:

ForkJoinPool forkJoinPool = new ForkJoinPool(PARALLELISM_NUMBER);
forkJoinPool.submit(() ->
dtos.parallelStream().forEach(dto -> {
    try {
        processDTO(dealerCode, yearPeriod, monthPeriod, dto);
    } catch (FileAlreadyExistsInS3Exception e) {
        failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
    }
})).get();
forkJoinPool.shutdown();

Unfortunately, the results were a bit confusing for me.

  • When PARALLELISM_NUMBER is 8, it takes around 13 seconds to complete the whole process. Not a big improvement.
  • When PARALLELISM_NUMBER is 16, it takes around 8 seconds to complete the whole process.
  • When PARALLELISM_NUMBER is 32, it takes around 5 seconds to complete the whole process.

All tests were done using Postman requests, calling the controller method which ends up iterating over the 130 items.

I'm satisfied with 5 seconds, using 32 as PARALLELISM_NUMBER, but I'm worried about the consequences.

  • Is it ok to keep 32?
  • What is the ideal PARALLELISM_NUMBER?
  • What do I have to keep in mind when deciding its value?

I'm running on a Mac with a 2.2 GHz i7:

sysctl hw.physicalcpu hw.logicalcpu
hw.physicalcpu: 4
hw.logicalcpu: 8

Here's what processDTO does:

private void processDTO(int dealerCode, int yearPeriod, int monthPeriod, AbstractDTO dto) throws FileAlreadyExistsInS3Exception {
    String flatJson = JsonFlattener.flatten(new JSONObject(dto).toString());
    String jsonFileName = dto.fileName() + JSON_TYPE;
    String jsonFilePath = buildFilePathNew(dto.endpoint(), dealerCode, yearPeriod, monthPeriod, AWS_S3_JSON_ROOT_FOLDER);
    uploadFileToS3(jsonFilePath + jsonFileName, flatJson);
}
public void uploadFileToS3(String fileName, String fileContent) throws FileAlreadyExistsInS3Exception {
    if (s3client.doesObjectExist(bucketName, fileName)) {
        throw new FileAlreadyExistsInS3Exception(ErrorMessages.FILE_ALREADY_EXISTS_IN_S3.getMessage());
    }
    s3client.putObject(bucketName, fileName, fileContent);
}

Answer:

The parallelism parameter decides how many threads will be used by the ForkJoinPool. That's why the default parallelism value is the available CPU core count:

Math.min(MAX_CAP, Runtime.getRuntime().availableProcessors())

In your case the bottleneck should be checking whether a file exists and uploading it to S3. The time here will depend on at least a few factors: CPU, network card and driver, operating system, and others. It seems that the S3 network operation time is not CPU bound in your case, as you are observing an improvement by creating more simultaneous worker threads; perhaps the network requests are being queued by the operating system.

The right value for parallelism varies from one workload type to another. A CPU-bound workload is better off with the default parallelism equal to the number of CPU cores, due to the negative impact of context switching. A non-CPU-bound workload like yours can be sped up with more worker threads, assuming the workload won't block the CPU, e.g. by busy waiting.

There is no single ideal value for parallelism in ForkJoinPool.
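If you would rather not tie the I/O-bound uploads to the common ForkJoinPool at all, a dedicated executor is an alternative; this is only a sketch reusing the fields from the question (the pool size of 32 is illustrative, and failedToUploadDTOs would need to be a thread-safe collection):

ExecutorService uploadPool = Executors.newFixedThreadPool(32); // sized for I/O-bound work, not CPU cores

List<CompletableFuture<Void>> futures = dtos.stream()
        .map(dto -> CompletableFuture.runAsync(() -> {
            try {
                processDTO(dealerCode, yearPeriod, monthPeriod, dto);
            } catch (FileAlreadyExistsInS3Exception e) {
                failedToUploadDTOs.add(e.getLocalizedMessage() + ": " + dto.fileName() + ".json");
            }
        }, uploadPool))
        .collect(Collectors.toList());

futures.forEach(CompletableFuture::join); // wait for every upload to finish
uploadPool.shutdown();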

Question:

I am using this code for uploading directory on S3.

TransferManager transferManager = new TransferManager(s3client);
MultipleFileUpload uploaded = transferManager.uploadDirectory(BUCKET_NAME, "DirectoryName", new File(uploadDirectory), true);

While uploading Directory to Amazon S3 bucket I get the following exception

com.amazonaws.SdkClientException: Upload canceled
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:159)
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

This exception generally occurs around 10 times in 100 attempts.

Note: the above code is executing in a multithreaded environment with the same s3client object.

Thanks a lot!


Answer:

I found the solution to this problem while doing R&D on the Java Amazon SDK. Instead of creating a different TransferManager instance for every thread, share the same instance across threads if you are using the same AmazonS3Client.

It will not cause any issues because TransferManager is a thread-safe object, and this solved the problem.
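A sketch of that approach (the holder class and field names are illustrative, and it assumes a recent 1.11.x SDK with TransferManagerBuilder): build the client and the TransferManager once, and let every worker thread reuse them.

public final class S3Transfers {
    // One client and one TransferManager for the whole application.
    private static final AmazonS3 S3 = AmazonS3ClientBuilder.defaultClient();
    private static final TransferManager TRANSFER_MANAGER =
            TransferManagerBuilder.standard().withS3Client(S3).build();

    private S3Transfers() {}

    public static void uploadDirectory(String bucket, String keyPrefix, File dir) throws InterruptedException {
        MultipleFileUpload upload = TRANSFER_MANAGER.uploadDirectory(bucket, keyPrefix, dir, true);
        upload.waitForCompletion(); // blocks this caller only; other threads can upload concurrently
    }
}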

Question:

I currently have a working implementation that works as follows:

UI selects a file => click upload => call my backend API to request a signature, since I don't want to expose my access + secret key => return the signature + policy => upload to S3.

This works fine and dandy for v2.

String base64Policy = (new BASE64Encoder()).encode(policy.toString().getBytes("UTF-8")).replaceAll("\n", "").replaceAll("\r", "");

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = (new BASE64Encoder()).encode(hmac.doFinal(base64Policy.getBytes("UTF-8"))).replaceAll("\n", "");

Now I get to the fun bit where my new buckets are in a region where v2 isn't supported.

I was following the AWS documentation but I think I am misunderstanding the payload part a bit. Do I really need to have my UI pass in a SHA-256 hash of my whole file? That would seem to be a pain, especially since my files can be > 1 GB.

The code I was attempting to use:

        byte[] signatureKey = getSignatureKey(secretKey, LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd")),  bucketRegion, "s3");
        StringBuilder sb = new StringBuilder();
        for (byte b : signatureKey) {
            sb.append(String.format("%02X", b));
        }

private static byte[] getSignatureKey(String key, String dateStamp, String regionName, String serviceName) throws Exception {
        byte[] kSecret = ("AWS4" + key).getBytes("UTF8");
        byte[] kDate = HmacSHA256(dateStamp, kSecret);
        byte[] kRegion = HmacSHA256(regionName, kDate);
        byte[] kService = HmacSHA256(serviceName, kRegion);
        byte[] kSigning = HmacSHA256("aws4_request", kService);
        return kSigning;
    }



private static byte[] HmacSHA256(String data, byte[] key) throws Exception {
        String algorithm="HmacSHA256";
        Mac mac = Mac.getInstance(algorithm);
        mac.init(new SecretKeySpec(key, algorithm));
        return mac.doFinal(data.getBytes("UTF8"));
    }

But this gives an invalid signature response when I try to use the rest of my code.

Am I derping that hard, and just misunderstanding: https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html ?

Any help would be much appreciated since I've been hanging my head against this way too long and I'd prefer not to overhaul too much.


Answer:

You can upload a file to S3 using standard SDK methods without generating a signature; please see the documentation. But if you need a signature for some reason, I think the simplest way to generate one is to use methods from the AWS SDK; please see the following class, which extends AWS4Signer:

public class AwsAuthUtil extends AWS4Signer {
    private String serviceName;
    private AWSCredentials credentials;
    private String region;

    public AwsAuthUtil(AWSCredentials credentials, String region, String serviceName) {
        this.credentials = credentials;
        this.region = region;
        this.serviceName = serviceName;
    }

    public String getSignature(String policy, LocalDateTime dateTime) {
        try {
            String dateStamp = dateTime.format(ofPattern("yyyyMMdd"));
            return Hex.encodeHexString(hmacSha256(newSigningKey(credentials, dateStamp, region, serviceName), policy));
        } catch (Exception e) {
            throw new RuntimeException("Error", e);
        }
    }

    private byte[] hmacSha256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance(SigningAlgorithm.HmacSHA256.name());
        mac.init(new SecretKeySpec(key, SigningAlgorithm.HmacSHA256.name()));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }
}

where AWS4Signer is from

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.11.213</version>
</dependency>

and AWSCredentials can be built as

AWSCredentials awsCredentials = new BasicAWSCredentials(s3AccessKey, s3SecretKey);

Also, you should consider the HTTP headers when you use multipart data; for example, see the following method which builds an HttpEntity:

public HttpEntity buildPostMultipartDataEntity(String objectKey, byte[] data, String signature, LocalDateTime dateTime) {

    String dateTimeStr = dateTime.format(ofPattern("yyyyMMdd'T'HHmmss'Z'"));
    String date = dateTime.format(ofPattern("yyyyMMdd"));

    return MultipartEntityBuilder
        .create()
        .addTextBody("key", objectKey)
        .addTextBody("Policy", policy)
        .addTextBody("X-Amz-Signature", signature)
        .addTextBody("X-Amz-Algorithm", algorithm)
        .addTextBody("X-Amz-Date", dateTimeStr)
        .addTextBody("X-Amz-Credential", String.format("%s/%s/%s/s3/aws4_request", accessKey, date, region))
        .addBinaryBody("file", data)
        .build();
}

Question:

I am trying to set the Content-MD5 value when I upload a file to S3. I can see the MD5 hash string and am passing that into metadata.setContentMD5(), but after the file is uploaded I can't see this value in the web console, and I can't retrieve it via Java code.

I've come to think that it's likely I'm misunderstanding the goal of the Content-MD5 get/set methods. Are they used to let the AWS server validate that the received file content is consistent with what I am sending? If that's the case then I should send in a value with setContentMD5(my_md5) when uploading, but should I then just compare the value of getETag() with my calculated MD5 hex string when I later try to download that object from S3?

Am I doing something wrong in trying to set this md5 value?

String access_key = "myaccesskey";
String secret_key = "mysecretkey";
String bucket_name = "mybucketname";
String destination_key = "md5_test.txt";
String file_path = "C:\\my-text-file.txt";

BasicAWSCredentials creds = new BasicAWSCredentials(access_key, secret_key);
AmazonS3Client client = new AmazonS3Client(creds);
client.setRegion(RegionUtils.getRegion("us-east-1"));

File file = new File(file_path);

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
metadata.setContentLength(file.length());

FileInputStream fis = new FileInputStream(file);
byte[] content_bytes = IOUtils.toByteArray(fis);
String md5 = new String(Base64.encodeBase64(DigestUtils.md5(content_bytes)));
metadata.setContentMD5(md5);

PutObjectRequest req = new PutObjectRequest(bucket_name, destination_key, file).withMetadata(metadata);
PutObjectResult result = client.putObject(req);

GetObjectMetadataRequest mreq = new GetObjectMetadataRequest(bucket_name, destination_key);
ObjectMetadata retrieved_metadata = client.getObjectMetadata(mreq);

// I think I expected getContentMD5 below to show the string I passed in
// during the upload, but the below prints "md5:null"
System.out.println("md5:" + retrieved_metadata.getContentMD5());

Am I calculating the MD5 string incorrectly? If I pass in a random string, I do get an error message, so it seems like S3 is happy with what I am sending via the above code. And if the MD5 string is correct, why can't I retrieve it later when using the client.getContentMD5() method? I understand that ETag should be the MD5 hex string, and I can also calculate that for my uploaded file (and get the same string that S3 calculates), so is it the case that I shouldn't expect the getContentMD5() to ever have a value for a downloaded file?


Answer:

I think you are correct: getContentMD5() is just the corresponding getter for setContentMD5(). It tells you what the caller's side of the request thinks the MD5 hash is. If you want to know what AWS thinks the hash is, you should use the ETag.

getContentMD5

This field represents the base64 encoded 128-bit MD5 digest of an object's content as calculated on the caller's side. The ETag metadata field represents the hex encoded 128-bit MD5 digest as computed by Amazon S3.

Returns: The base64 encoded MD5 hash of the content for the associated object. Returns null if the MD5 hash of the content hasn't been set.

That last part presumably means: Returns null unless you have previously called setContentMD5()
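As a hedged sketch of that comparison, reusing the variables from the question's snippet (note that the ETag equals the plain MD5 hex only for non-multipart uploads without SSE-KMS encryption):

String localMd5Hex = DigestUtils.md5Hex(content_bytes);    // hex encoding, same form as the ETag
String remoteETag = result.getETag();                       // from the PutObjectResult above

if (localMd5Hex.equalsIgnoreCase(remoteETag)) {
    System.out.println("Upload verified: ETag matches the local MD5");
} else {
    System.out.println("ETag differs (expected for multipart or SSE-KMS uploads)");
}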

Question:

I want to upload an image to an Amazon S3 bucket in Android. I don't get any errors, but it's just not working. Can anybody help me? I can't find any good examples or questions about this.

I assign an image to 'File images3':

images3 = new File(uri.getPath());

public void addEventToDB(){

        Thread thread = new Thread()
        {
            @Override
            public void run() {
                try {
                    CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
                            getActivity().getApplicationContext(), // get the context for the current activity
                            "...",
                            "us-east-1:...",
                            "arn:aws:iam::...:role/Cognito_WitpaAuth_DefaultRole",
                            "arn:aws:iam::...:role/Cognito_WitpaAuth_DefaultRole",
                            Regions.US_EAST_1
                    );

                    String bucket_name = "witpa";
                    String key = "images.jpeg";

                    TransferManager transferManager = new TransferManager(credentialsProvider);
                    transferManager.upload(bucket_name, key, images3);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };

        thread.start();

    }

I created my bucket; in the permissions I set that everyone can read and write.

In Amazon Cognito I just left everything at the default.

Does anybody know how I can get this to work?


Answer:

Try this. I had the same issue that you faced.

I fixed it by using the code below.

InputStream inputStream = new FileInputStream(images3);

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentEncoding("UTF-8");
metadata.setContentLength(images3.length());

TransferManager transferManager = new TransferManager(credentialsProvider);
Upload upload = transferManager.upload(bucket_name, key, inputStream, metadata);
upload.waitForCompletion();

Question:

I am a total newbie to amazon and java. I am trying two things here.

1st - I am trying to create a folder in my Amazon S3 bucket, which I have already created and have the credentials for.

2nd - I am trying to upload a file to this bucket.

As per my understanding I can use the PutObjectRequest for achieving both of my tasks.

PutObjectRequest(bucketName, keyName, file) 

for uploading a file.

I am not sure if I should use this method

PutObjectRequest(String bucketName, String key, InputStream input,
        ObjectMetadata metadata) 

for just creating a folder. I am struggling with InputStream and ObjectMetadata. I don't know exactly what they are for and how I can use them.

Any help would be greatly appreciated. :)


Answer:

You do not need to create a folder in Amazon S3. In fact, folders do not exist!

Rather, the Key (filename) contains the full path and the object name.

For example, if a file called cat.jpg is in the animals folder, then the Key (filename) is: animals/cat.jpg

Simply Put an object with that Key and the folder is automatically created. (Actually, this isn't true because there are no folders, but it's a nice simple way to imagine the concept.)

As to which function to use... always use the simplest one that meets your needs. Therefore, just use PutObjectRequest(bucketName, keyName, file).
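For example (the bucket and file names are placeholders), this single call uploads the object and makes an animals/ folder appear in the S3 console:

// The key "animals/cat.jpg" is all that is needed; no separate folder object is created.
s3client.putObject(new PutObjectRequest("my-bucket", "animals/cat.jpg", new File("cat.jpg")));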

Question:

I want to upload a file to S3 without using my access and secret keys on the AWS server; the AWS keys should be picked up by default. Running the command below on the server, I can access S3 without providing any access or secret keys.

aws s3 cp somefile.txt s3://somebucket/

From Java code it's not accessible, since it is unable to load credentials. Below is my code.

AmazonS3 s3client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());

Answer:

You can use the Java code below to get the s3Client instance when you are trying to connect to an S3 bucket from an EC2 instance.

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
              .withCredentials(new InstanceProfileCredentialsProvider(false))
              .build();

This is the recommended way, as the application doesn't need to maintain access keys in property files.

  • IAM role should be created and S3 access should be provided for that role. See the sample policy below.
  • The IAM role should be assigned to the EC2 instance.

Sample policy for the IAM role:

{
    "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject"
    ],
    "Resource": [
        "arn:aws:s3:::yourBucketName",
        "arn:aws:s3:::yourBucketName/*"
    ],
    "Effect": "Allow",
    "Sid": "AllowBucketLinux"
}

Question:

I want to compress dynamically created data using a GZIP stream and upload it to S3; I expect the data to be around 1 GB per compressed file.

Since the file size is big and I'm going to handle multiple files in parallel, I can't hold the entire data in memory, and I wish to stream the data to S3 as soon as possible.

Moreover, I can't know the exact size of the compressed data. I read the question "Can I stream a file upload to S3 without a content-length header?", but I can't figure out how to combine it with GZIPing.

I think I could do this if I were able to create a GZIPOutputStream, send data to it part by part while simultaneously reading chunks of the compressed data (hopefully of 5 MB) and uploading them to S3 using Amazon S3 multipart upload.

Is what I'm trying to do possible? Or is my only option to compress the data to local storage (my hard disk) and then upload the compressed file?


Answer:

I don't take no for an answer, so this is how I did it:

package roee.gavriel;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;

public class S3UploadStream extends OutputStream {

    private final static Integer PART_SIZE = 5 * 1024 * 1024;

    private final AmazonS3 s3client;
    private final String bucket;
    private final String key;

    // The upload id given to the multiple parts upload by AWS.
    private final String uploadId;
    // A tag list. AWS gives one for each part and expects them when the upload is finished.
    private final List<PartETag> partETags = new LinkedList<>();
    // A buffer to collect the data before sending it to AWS.
    private byte[] partData = new byte[PART_SIZE];
    // The index of the next free byte on the buffer.
    private int partDataIndex = 0;
    // Total number of parts that were uploaded.
    private int totalPartCountIndex = 0;
    private volatile Boolean closed = false;
    // Internal thread pool which will handle the actual part uploading.
    private final ThreadPoolExecutor executor;

    public S3UploadStream(AmazonS3 s3client, String bucket, String key, int uploadThreadsCount) {
        this.s3client = s3client;
        this.bucket = bucket;
        this.key = key;
        InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucket, key);
        InitiateMultipartUploadResult initResponse = s3client.initiateMultipartUpload(initRequest);
        this.uploadId = initResponse.getUploadId();
        this.executor = new ThreadPoolExecutor(uploadThreadsCount, uploadThreadsCount, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(100));
    }


    @Override
    public synchronized void write(int b) throws IOException {
        if (closed) {
            throw new IOException("Trying to write to a closed S3UploadStream");
        }
        partData[partDataIndex++] = (byte)b;
        uploadPart(false);
    }

    @Override
    public synchronized void close() {
        if (closed) {
            return;
        }
        closed = true;

        // Flush the current data in the buffer
        uploadPart(true);

        executor.shutdown();
        try {
            executor.awaitTermination(2, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            //Nothing to do here...
        }

        // Close the multiple part upload
        CompleteMultipartUploadRequest compRequest = 
                new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags);

        s3client.completeMultipartUpload(compRequest);

    }

    private synchronized void uploadPart(Boolean force) {

        if (!force && partDataIndex < PART_SIZE) {
            // the API requires that only the last part can be smaller than 5Mb
            return;
        }

        // Actually start the upload
        createUploadPartTask(partData, partDataIndex);

        // We are going to upload the current part, so start buffering data to new part
        partData = new byte[PART_SIZE];
        partDataIndex = 0;          
    }

    private synchronized void createUploadPartTask(byte[] partData, int partDataIndex) {
        // Create an Input stream of the data
        InputStream stream = new ByteArrayInputStream(partData, 0, partDataIndex);

        // Build the upload request
        UploadPartRequest uploadRequest = new UploadPartRequest()
                .withBucketName(bucket)
                .withKey(key)
                .withUploadId(uploadId)
                .withPartNumber(++totalPartCountIndex)
                .withInputStream(stream)
                .withPartSize(partDataIndex);

        // Upload part and add response to our tag list.
        // Make the actual upload in a different thread
        executor.execute(() -> {
            PartETag partETag = s3client.uploadPart(uploadRequest).getPartETag();
            synchronized (partETags) {
                partETags.add(partETag);
            }
        });
    }   
}

And here is a small snippet of code that uses it to write many GUIDs to a GZIP file on S3:

int writeThreads = 3;
int genThreads = 10;
int guidPerThread = 200_000;
try (S3UploadStream uploadStream = new S3UploadStream(s3client, "<YourBucket>", "<YourKey>.gz", writeThreads)) {
    try (GZIPOutputStream stream = new GZIPOutputStream(uploadStream)) {
        Semaphore s = new Semaphore(0);
        for (int t = 0; t < genThreads; ++t) {
            new Thread(() -> {
                for (int i = 0; i < guidPerThread; ++i) {
                    try {
                        stream.write(java.util.UUID.randomUUID().toString().getBytes());
                        stream.write('\n');
                    } catch (IOException e) {
                    }
                }
                s.release();
            }).start();
        }
        s.acquire(genThreads);
    }
}

Question:

We're looking to begin using S3 for some of our storage needs and I'm looking for a way to perform a batch upload of 'N' files. I've already written code using the Java API to perform single file uploads, but is there a way to provide a list of files to pass to an S3 bucket?

I did look at the following question is-it-possible-to-perform-a-batch-upload-to-amazon-s3, but it is from two years ago and I'm curious if the situation has changed at all. I can't seem to find a way to do this in code.

What we'd like to do is to be able to set up an internal job (probably using scheduled tasking in Spring) to transition groups of files every night. I'd like to have a way to do this rather than just looping over them and doing a put request for each one, or having to zip batches up to place on S3.


Answer:

The easiest way to go if you're using the AWS SDK for Java is the TransferManager. Its uploadFileList method takes a list of files and uploads them to S3 in parallel, or uploadDirectory will upload all the files in a local directory.
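A minimal sketch of the uploadFileList call (the bucket name, key prefix, and file paths are placeholders; the third argument is the common base directory used to build the keys relative to the prefix):

TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();

File baseDir = new File("/data/nightly");
List<File> files = Arrays.asList(new File(baseDir, "report1.csv"), new File(baseDir, "report2.csv"));

// Uploads in parallel; keys become nightly-batch/report1.csv and nightly-batch/report2.csv.
MultipleFileUpload batch = tm.uploadFileList("my-bucket", "nightly-batch", baseDir, files);
batch.waitForCompletion();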

Question:

I am trying to upload a QuickTime video to S3 but it is getting corrupted. I am using the Java SDK. I assume this is something to do with multipart? What do I need to do to stop it being corrupted? A file is being sent to S3 OK, but when downloaded and viewed it fails.

//upload video to S3
    PutObjectRequest request2 = new PutObjectRequest(bucketName, "movie.mov", new File(picturePath + "/" + "movie.mov"));

    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentType("video/quicktime");
    request2.setMetadata(metadata);

    request2.setCannedAcl(CannedAccessControlList.PublicRead);
    s3Client.putObject(request2);

Answer:

From your question, I can't tell if you're using v1 or v2 of the AWS Java SDK.

I was able to find a similar issue where the user seemed to only be able to upload .mov files that were less than 5 MB in size. To upload files larger than 5 MB, you can try using Amazon's high-level multipart upload example that utilizes the TransferManager class, which is built-in to v1 of the AWS Java SDK.

With this, your code will look something like:

try {
    TransferManager tm = TransferManagerBuilder.standard()
            .withS3Client(s3Client)
            .build();

    // TransferManager processes all transfers asynchronously,
    // so this call returns immediately.
    Upload upload = tm.upload(bucketName, "movie.mov", 
        new File(picturePath + "/" + "movie.mov"));
    System.out.println("Object upload started");

    // Optionally, wait for the upload to finish before continuing.
    upload.waitForCompletion();
    System.out.println("Object upload complete");
} catch (AmazonServiceException e) {
    // The call was transmitted successfully, but Amazon S3 couldn't process 
    // it, so it returned an error response.
    e.printStackTrace();
} catch (SdkClientException e) {
    // Amazon S3 couldn't be contacted for a response, or the client
    // couldn't parse the response from Amazon S3.
    e.printStackTrace();
}

Question:

I'm trying to upload a file to Amazon's S3 using a pre-signed URL. I get the URL from a server which generates the URL & sends it to me as part of a JSON object. I get the URL as a String, something like this:

https://com-example-mysite.s3-us-east-1.amazonaws.com/userFolder/ImageName?X-Amz-Security-Token=xxfooxx%2F%2F%2F%2F%2F%2F%2F%2F%2F%2Fxxbarxx%3D&X-Amz-Algorithm=xxAlgoxx&X-Amz-Date=20170831T090152Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=xxcredxx&X-Amz-Signature=xxsignxx

Unfortunately, when I pass this to Retrofit2, it modifies the String attempting to make it into a URL. I've set encoding=true which took care of most of the problem but not completely. I know the String works as it is. I've tried it in Postman & get a successful response.

1st I tried just putting the String (except for what I cut out as baseUrl) as a whole into the Path

public interface UpdateImageInterface {
    @PUT("{url}")
    Call<Void> updateImage(@Path(value="url", encoded=true) String url, @Body RequestBody image);
}

The calling code:

    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("https://com-example-mysite.s3-us-east-1.amazonaws.com/userFolder/")
            .build();

    UpdateImageInterface imageInterface = retrofit.create(UpdateImageInterface.class);
    // imageUrl is "ImageName..."
    Call<Void> call = imageInterface.updateImage(imageUrl, requestFile);

This mostly works, except that the '?' (after "ImageName") gets converted to "%3F". This causes a Bad Request / 400.

My next attempt was to create a query with Retrofit2 but then dump the whole String (with multiple queries) into the query.

public interface UpdateImageInterface {
    @PUT("ImageName")
    Call<Void> updateProfilePhoto(@Query(value="X-Amz-Security-Token", encoded = true) String token, @Body RequestBody image);
}

The calling code:

    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("https://com-example-mysite.s3-us-east-1.amazonaws.com/userFolder/")
            .build();

    UpdateImageInterface imageInterface = retrofit.create(UpdateImageInterface.class);
    // imageUrl is "xxfooxx..."
    Call<Void> call = imageInterface.updateImage(imageUrl, requestFile);

This gets the '?' rendered correctly, but all of the '&' get changed to "%26".

Lastly I tried passing the whole String in baseUrl() but that gives an IllegalArgumentException for not having '/' on the end.

I know that I could parse the pre-signed URL to make multiple queries & assemble them in Retrofit2 as queries should be done but I'd like to avoid that processing.

To restate the question:

Is there a way to easily (without heavy String parsing) upload a file to S3 with a pre-signed URL using Retrofit2?


Answer:

With help from a colleague, this is the solution.

public interface UpdateImageInterface {
    @PUT
    Call<Void> updateImage(@Url String url, @Body RequestBody image);
}

Calling code:

    String CONTENT_IMAGE = "image/jpeg";

    File file = new File(localPhotoPath);    // create new file on device
    RequestBody requestFile = RequestBody.create(MediaType.parse(CONTENT_IMAGE), file);

    /* since the pre-signed URL from S3 contains a host, this dummy URL will
     * be replaced completely by the pre-signed URL.  (I'm using baseURl(String) here
     * but see baseUrl(okhttp3.HttpUrl) in Javadoc for how base URLs are handled
     */
    Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://www.dummy.com/")
        .build();

    UpdateImageInterface imageInterface = retrofit.create(UpdateImageInterface.class);
    // imageUrl is the String as received from AWS S3
    Call<Void> call = imageInterface.updateImage(imageUrl, requestFile);

Javadoc for info on @Url (class Url) & baseUrl() (class Retrofit.Builder)

MediaType is a class in the OkHttp library that is often used with Retrofit (both from Square). Info about constants passed to the parse method can be found in the Javadoc.

Question:

I'm trying to upload a file to AWS S3 using the Java AWS API. The problem is that my application is unable to upload large files because the heap reaches its limit. Error: java.lang.OutOfMemoryError: Java heap space

I personally think extending the heap memory isn't a permanent solution because I have to upload files up to 100 GB. What should I do?

Here is the code snippet:

        BasicAWSCredentials awsCreds = new BasicAWSCredentials(AID, Akey);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.fromName("us-east-2"))
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();

        InputStream Is=file.getInputStream();

        boolean buckflag = s3Client.doesBucketExist(ABuck);
        if(buckflag != true){
           s3Client.createBucket(ABuck);
        }
        s3Client.putObject(new PutObjectRequest(ABuck, AFkey,file.getInputStream(),new ObjectMetadata() ).withCannedAcl(CannedAccessControlList.PublicRead));

Answer:

I strongly recommend calling setContentLength() on ObjectMetadata, since:

..If not provided, the library will have to buffer the contents of the input stream in order to calculate it.

(..which predictably will lead to OutOfMemory on "sufficiently large" files.)

source: PutObjectRequest javadoc

Applied to your code:

 // ...
 ObjectMetadata omd = new ObjectMetadata();
 // a tiny code line, but with a "huge" information gain and memory saving!;)
 omd.setContentLength(file.length());

 s3Client.putObject(new PutObjectRequest(ABuck, AFkey, file.getInputStream(), omd).withCannedAcl(CannedAccessControlList.PublicRead));
 // ...

Question:

I am working on an iOS application which connects to my Tomcat/Jersey server.

The way I am currently uploading images to S3 uses the following workflow:

  1. Upload the image to a temp folder on my server using a POST request with multipart form data.
  2. Take the file from the temp folder and upload it to S3 using the Amazon Java SDK.
  3. Delete the image file from the temp folder on the server once the upload to S3 is complete.

I do not want my iOS app to upload directly to S3, as I want to go through my server to perform some operations, but this seems redundant and will make the process slower than it needs to be. Is there a way to stream the file directly through the Amazon SDK instead of having to temporarily save it on my server?


Answer:

Thanks for the suggestions. I used your answers to solve it by uploading the Input Stream directly through the Amazon SDK like so:

byte[] contents = IOUtils.toByteArray(inputStream);
InputStream stream = new ByteArrayInputStream(contents);

ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(contents.length);
meta.setContentType("image/png");

s3client.putObject(new PutObjectRequest(
        bucketName, fileName, stream, meta)
        .withCannedAcl(CannedAccessControlList.Private));

inputStream.close();

Where inputStream is the input stream received from the iOS application to my server and s3client is the AmazonS3Client initialization with the BasicAWSCredentials.

Question:

I have a ThreadPoolExecutorService to which I'm submitting runnable jobs that are uploading large (1-2 GB) files to Amazon's S3 file system, using the AWS Java SDK. Occasionally one of my worker threads will report a java.net.SocketException with "Connection reset" as the cause and then die.

AWS doesn't use checked exceptions so I actually can't catch SocketException directly---it must be wrapped somehow. My question is how I should deal with this problem so I can retry any problematic uploads and increase the reliability of my program.

Would the Multipart Upload API be more reliable?

Is there some exception I can reliably catch to enable retries?

Here's the stack trace. The com.example.* code is mine. Basically what the DataProcessorAWS call does is call putObject(String bucketName, String key, File file) on an instance of AmazonS3Client that's shared across threads.

14/12/11 18:43:17 INFO http.AmazonHttpClient: Unable to execute HTTP request: Connection reset
java.net.SocketException: Connection reset
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:377)
    at sun.security.ssl.OutputRecord.write(OutputRecord.java:363)
    at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:830)
    at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:801)
    at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:122)
    at org.apache.http.impl.io.AbstractSessionOutputBuffer.write(AbstractSessionOutputBuffer.java:169)
    at org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:119)
    at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:102)
    at com.amazonaws.http.RepeatableInputStreamRequestEntity.writeTo(RepeatableInputStreamRequestEntity.java:153)
    at org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
    at org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
    at org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
    at org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
    at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
    at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:47)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:685)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3697)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1434)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
    at com.example.DataProcessorAWS$HitWriter.close(DataProcessorAWS.java:156)
    at com.example.DataProcessorAWS$Processor.run(DataProcessorAWS.java:264)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Answer:

For that you have to use only AmazonS3Client, not TransferManager, for upload and download.

You have to configure the AmazonS3Client with the following properties (a sketch of the configuration follows this list):

  1. connectionTimeout = 50000 (in ms)
  2. maxConnections = 500
  3. socketTimeout = 50000 (in ms)
  4. maxErrorRetry = 10
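A sketch of wiring those values in with the v1 SDK's ClientConfiguration (the credentials variable is a placeholder):

ClientConfiguration config = new ClientConfiguration()
        .withConnectionTimeout(50000)   // ms
        .withSocketTimeout(50000)       // ms
        .withMaxConnections(500)
        .withMaxErrorRetry(10);

AmazonS3Client amazonS3Client = new AmazonS3Client(credentials, config);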

For upload use

amazonS3Client.putObject(bucketName, key, inputFile);

For download use

S3Object s3Object = amazonS3Client.getObject(new GetObjectRequest(bucketName, key));
InputStream downloadStream = s3Object.getObjectContent();

and store that stream by reading its bytes.
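For instance, a minimal sketch of draining that stream to a local file with java.nio (the target path is a placeholder):

try (InputStream in = s3Object.getObjectContent()) {
    Files.copy(in, Paths.get("/tmp/downloaded-object"), StandardCopyOption.REPLACE_EXISTING);
}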

Question:

I'm trying to use TransferManager.upload(String bucket, String key, File file) to upload a moderately-sized file (around 10 MB) to my AWS S3 bucket from an Android app.

The following code works intermittently:

CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(...);
mTransfer = new TransferManager(credentialsProvider);
mTransfer.upload("bucket", key, file);

About half the time, this works great. The other half of the time, the upload fails because of an SSLException. Is it expected that uploading to S3 would be this unreliable? Should I be handling retries in the client code?

06-25 18:27:50.326: I/AmazonHttpClient(10765): Unable to execute HTTP request: Write error: ssl=0xa1ef1c00: I/O error during system call, Connection reset by peer
06-25 18:27:50.326: I/AmazonHttpClient(10765): javax.net.ssl.SSLException: Write error: ssl=0xa1ef1c00: I/O error during system call, Connection reset by peer
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.org.conscrypt.NativeCrypto.SSL_write(Native Method)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.org.conscrypt.OpenSSLSocketImpl$SSLOutputStream.write(OpenSSLSocketImpl.java:794)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okio.Okio$1.write(Okio.java:73)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:116)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okio.RealBufferedSink.write(RealBufferedSink.java:44)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okhttp.internal.http.HttpConnection$FixedLengthSink.write(HttpConnection.java:310)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:116)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.android.okio.RealBufferedSink$1.write(RealBufferedSink.java:131)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.http.UrlHttpClient.write(UrlHttpClient.java:172)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.http.UrlHttpClient.writeContentToConnection(UrlHttpClient.java:129)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.http.UrlHttpClient.execute(UrlHttpClient.java:65)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4234)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1644)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:134)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.call(UploadCallable.java:126)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.upload(UploadMonitor.java:182)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:140)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:54)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at java.util.concurrent.FutureTask.run(FutureTask.java:237)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
06-25 18:27:50.326: I/AmazonHttpClient(10765):  at java.lang.Thread.run(Thread.java:818)

Answer:

Unfortunately, this usually occurs when the network is very spotty. You may also encounter an Unable to execute HTTP request: Unable to resolve host "YOURBUCKET.s3.amazonaws.com": No address associated with hostname error. There is unfortunately not a lot the SDK can do. It has automatic retries built in, but in the case of a spotty connection this may happen many times and eventually the SDK will give up. You can retry the request again, but whether that helps is really up to the network strength.

Weston

Question:

I would like to know how we can upload video files to Azure Media Services from AWS S3 buckets using the Java API. I have checked the documentation and samples everywhere and couldn't find any reference on how to upload video from S3 to Media Services.

I was able to upload to Azure Storage, but I want to upload to Media Services to create streaming URLs.


Answer:

There is no method to transfer data directly between S3 and Azure. You can get an InputStream from Amazon S3 and write it to Azure Storage.

To get InputStream from S3, use this

AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3Client amazonS3 = new AmazonS3Client(awsCredentials);
InputStream is = amazonS3.getObject(bucket, filePath).getObjectContent();

Question:

I'm uploading an mp4 video to AWS S3 using a pre-signed URL. The upload succeeds, but when I try to download the video from S3 and play it in a media player (VLC or QuickTime), it doesn't play.

The generated pre-signed URL works fine with mp3, but the same problem as above also occurs for WAV and FLAC.

Code to generate the pre-signed url:

public String getPreSignedS3Url( final String userId, final String fileName )
    {
        Date expiration = new Date();
        long expTimeMillis = expiration.getTime();
        expTimeMillis += urlExpiry;
        expiration.setTime(expTimeMillis);

        String objectKey = StringUtils.getObjectKey( userId, fileName );

        GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(
                recordingBucketName, objectKey)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration);

        URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

        return url.toString();

    }

After I get the pre-signed URL from the method above, I make an HTTP PUT request from Postman with multipart/form-data in the request body like this:

-H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW' \
  -F 'file=@/Users/john/Downloads/sampleDemo.mp4'

pre-signed url looks like this:

https://meeting-recording.s3.eu-west-2.amazonaws.com/331902257/sampleDemo.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20190720T125751Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3599&X-Amz-Credential=AKIAZDSMLZ3VDKNXQUXH%2F20190720%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Signature=dfb8054f0738e07e925e9880e4a8e5ebba0a1bd3c84a3ec78913239f65221992

I tried to set the content type to mp4 in the getPreSignedS3Url() method using generatePresignedUrlRequest.setContentType( "video/mp4" ); and added Content-Type: "video/mp4" to the HTTP PUT request header, but it didn't work and failed with a Signature does not match error.

I'm using S3 as my personal back-up hard drive. I expect to upload video and audio files to S3 using a pre-signed URL, download them at some point in the future, and be able to play them, but I'm unable to play them after I have downloaded them.

Does anyone know what could be causing this?


Answer:

PUT requests to S3 don't support multipart/form-data. The request body needs to contain nothing but the binary object data. If you download your existing file from S3 and open it with a text editor, you'll find that S3 has preserved the multipart form structure inside the file, instead of interpreting it as a wrapper for the actual payload.
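A minimal Java sketch of a correct upload to the pre-signed URL (the file path is a placeholder; the request body is just the raw file bytes with no form encoding, and a Content-Type header should only be sent if it was part of the signed request):

URL url = new URL(presignedUrl);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");

try (OutputStream out = connection.getOutputStream()) {
    Files.copy(Paths.get("/Users/john/Downloads/sampleDemo.mp4"), out); // raw bytes only
}
System.out.println("Response code: " + connection.getResponseCode()); // 200 on success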

Question:

I am using the Android AWS dependency com.amazonaws:aws-android-sdk-s3:2.6.+.

While uploading an image I am getting the error below:

com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXXXXXXXXXX), S3 Extended Request ID:XXXXXXXXXXXX

Here is the code for uploading the image:

private void beginUpload(String filePath, final String mediaCaption, Message message, boolean isThumb,
                         final UploadFileToStorageCompletionListener listener) {
    getLogger().log(Strings.TAG, "########## 3:  " + filePath);
    //construct a bucket path
    final String fullBucketPath = constructBucketPath(message.getMediaType(), message.getId(), isThumb);
    File file = new File(filePath);
    mObserver = mTransferUtility.upload(fullBucketPath, mediaCaption, file);

    mObserver.setTransferListener(new TransferListener() {
        @Override
        public void onStateChanged(int id, TransferState state) {
            getLogger().log(Strings.TAG," onStateChanged() " + state);
            if (state.equals(TransferState.COMPLETED)) {
                listener.onUploadSuccess(fullBucketPath);
            }
        }

        @Override
        public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
            getLogger().log(Strings.TAG,"onProgressChanged() " + bytesCurrent + "/" + bytesTotal);
            dismissProgressDialog();
        }

        @Override
        public void onError(int id, Exception ex) {
            listener.onDatabaseError(new FirebaseFailure(ex));
            getLogger().log(Strings.TAG, "onError() " + ex.getMessage());
        }
    });
}

Answer:

First, you need to check the permissions for the S3 bucket. Go to the bucket policy and check the JSON object which defines the permissions for PUT, GET and POST.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::{BUCKET NAME}/*"
        }
    ]
}

Try the above permissions.

Question:

I've searched the web for ways to upload a simple file to S3 from Android but couldn't find anything that works, and I think it is because of a lack of concrete steps.

1.) https://mobile.awsblog.com/post/Tx1V588RKX5XPQB/TransferManager-for-Android

That is the best guide I found, but it does not say which libraries to include. I downloaded the AWS SDK for Android and inside there were many jars. So, intuitively, I included aws-android-sdk-s3-2.2.5.jar; however, that is not complete. It does not have the class BasicAWSCredentials.

2.) I've looked more at the sample, but it is not as straightforward as #1 and I couldn't get the core upload functionality working with credentials.

I appreciate your help


Answer:

Sorry the post was written a long time ago :).

1) To make it work, you need aws-android-sdk-core-2.2.5.jar in addition to aws-android-sdk-s3-2.2.5.jar.

2) Which sample are you referring to? Recently the AWS Android SDK introduced TransferUtility as a replacement for TransferManager. You can find a sample here. There is also a blog that explains the migration: AWS SDK for Android Transfer Manager to Transfer Utility Migration Guide.

PS: it's not recommended to use BasicAWSCredentials in a mobile app. Instead, try Cognito Identity. See its developer guide.

Question:

I have two buckets, one private and one public. The private one will hold files with CannedAccessControlList.Private and the public one will hold files with CannedAccessControlList.PublicRead. Apart from that they are the same.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>



AmazonS3 s3client = new AmazonS3Client(new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY));
generatePresignedUrlRequest = new GeneratePresignedUrlRequest(AWS_BUCKET_PRIVATE_NAME, path, HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);
generatePresignedUrlRequest.putCustomRequestHeader("x-amz-acl", CannedAccessControlList.Private.toString());
generatePresignedUrlRequest.putCustomRequestHeader("content-type", fileType);
url = s3client.generatePresignedUrl(generatePresignedUrlRequest);

I am able to upload a file to S3 in the scenarios below. All generated URLs are https by default.

  1. Private bucket over https: works.
  2. Public bucket over https: fails; after replacing https with http it worked.

The problem is why the public bucket upload is failing over https. I can't use http on the production system, as it has SSL installed.


Answer:

There are two things I have learned.

  1. S3 has two different URL styles: path style and virtual-host style. (You have to be careful when your bucket name looks like a hostname.)

Virtual Host Style

https://xyz.com.s3.amazonaws.com/myObjectKey

Path style

https://s3.amazonaws.com/xyz.com/myObjectKey

An Ajax upload call fails in the first case if you are on https, since the SSL certificate is valid only for s3.amazonaws.com; if the bucket name looks like a hostname, the SSL check will fail and block the Ajax upload call.

Solution for this in Java

s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
  2. I am still not able to figure out how the S3 client decides which region to use when forming the URL; sometimes it picks the proper "s3-ap-southeast-1.amazonaws.com" and sometimes it picks "s3.amazonaws.com".

In the latter case the upload will again fail with CORS issues: if your presigned URL points at s3.amazonaws.com, then even if you have enabled CORS on your bucket it will not pick up "Access-Control-Allow-Origin". So you need to make sure you set the proper region endpoint using the code below.

s3client.setEndpoint("s3-ap-southeast-1.amazonaws.com");//or whatever region you bucket is in.

Reference :http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html

Question:

I am having a file upload issue. I am using Angular on the front end and Java on the backend, and uploading the image to an S3 bucket. I don't think there is an issue in the Java code, because when I call the upload URL from Postman it works fine (I attached a Postman screenshot to show that it is working).

Here is My AngularJS Controller as follows :

contactUs.controller('contactController', ['$scope', '$http', function($scope, $http) {
    $scope.uploadFile = function(){
var file = $scope.myFile;

           console.log('file is ' );
           console.dir(file);

           var uploadUrl = "uploadURL";

           var fd = new FormData(file);
           fd.append('files', file);

           $http.post(uploadUrl, fd, {
              transformRequest: angular.identity,
              headers: {'Content-Type': 'multipart/form-data',
                        'Authorization': 'Basic QHN0cmlrZXIwNzoxMjM0NTY='}
           })

           .success(function(response){
            console.log(response);
           })

           .error(function(error){
            console.log(error);
           });
        };
  }]);

Here is My AngularJS Directive as follows :

contactUs.directive('fileModel', ['$parse', function ($parse) {
        return {
           restrict: 'A',
           link: function(scope, element, attrs) {
              var model = $parse(attrs.fileModel);
              var modelSetter = model.assign;
              console.log(model);
              console.log(modelSetter);
              element.bind('change', function(){
                 scope.$apply(function(){
                    modelSetter(scope, element[0].files[0]);
                 });
              });
           }
        };
           }]);

The HTML is as follows :

<input type = "file" name="files" file-model = "myFile"/>
<button ng-click = "uploadFile()">upload me</button>

The Java controller is as follows :

@Path("/upload")
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces("application/text")
public Response uploadFile(@FormDataParam("files") List<FormDataBodyPart> bodyParts,@FormDataParam("files") FormDataContentDisposition fileDispositions) {
    /* Save multiple files */
    BodyPartEntity bodyPartEntity = null;
    String fileName = null;
    for (int i = 0; i < bodyParts.size(); i++) {
        bodyPartEntity = (BodyPartEntity) bodyParts.get(i).getEntity();
        fileName = bodyParts.get(i).getContentDisposition().getFileName();
        s3Wrapper.upload(bodyPartEntity.getInputStream(), fileName);
    }
    String message= "File successfully uploaded !!";
    return Response.ok(message).build();
}

The Error I am getting with the AngularJS is below :

400 - Bad Request


Answer:

1) To POST file data, you don't need to set the content type to multipart/form-data yourself, because the data type is determined automatically. So just pass headers: {'Content-Type': undefined}.

2) As your Postman screenshot shows, the key is files. If you provide both name="files" in the HTML and fd.append('files', file), the request will not be processed correctly because the files key is set on both sides. So remove name="files" from the HTML and then process the file upload.

Question:

I am trying to upload a file from a java class to aws S3.

I am using the exact code as given here

The only parts I changed are these:

private static String bucketName     = "s3-us-west-2.amazonaws.com/<my-bubket-name>";
private static String keyName        = "*** Provide key ***";
private static String uploadFileName = "/home/...<localpath>.../test123";

I am not sure what to put in Provide key. But even if I leave it this way, I get an error like this:

Error Message: The bucket is in this region: null.Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: *******) HTTP Status Code: 301 AWS Error Code: PermanentRedirect Error Type: Client


Answer:

Instead of s3-us-west-2.amazonaws.com/<my-bucket-name> you should put <my-bucket-name>.
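
For illustration, a hedged version of the three constants could look like this; the key and local path below are made-up placeholders, and keyName is simply the name (optionally with a "folder" prefix) under which the object will be stored in the bucket:

private static String bucketName     = "<my-bucket-name>";    // bucket name only, no endpoint or region
private static String keyName        = "backups/test123";     // object key inside the bucket (placeholder)
private static String uploadFileName = "/home/user/test123";  // local file path (placeholder)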

Question:

I'm trying to upload a jpg file to AWS S3 bucket with Camel's aws-s3 producer. Can I make this work with this approach and if yes how? Now I'm only getting an IOException and can't figure out what would be the next step. I know I could implement the upload using TransferManager from the aws-sdk but now I'm only interested in Camel's aws-s3 endpoint.

Here is my route with Camel 2.15.3:

public void configure() {
    from("file://src/data?fileName=file.jpg&noop=true&delay=15m")
    .setHeader(S3Constants.KEY,constant("CamelFile"))
    .to("aws-s3://<bucket-name>?region=eu-west-1&accessKey=<key>&secretKey=RAW(<secret>)");
}

and the exception I get from running that route:

com.amazonaws.AmazonClientException: Unable to create HTTP entity: Stream Closed
at com.amazonaws.http.HttpRequestFactory.newBufferedHttpEntity(HttpRequestFactory.java:244)
at com.amazonaws.http.HttpRequestFactory.createHttpRequest(HttpRequestFactory.java:122)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:415)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:273)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3660)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1432)
at org.apache.camel.component.aws.s3.S3Producer.processSingleOp(S3Producer.java:209)
at org.apache.camel.component.aws.s3.S3Producer.process(S3Producer.java:71)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:129)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:448)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:439)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:211)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Stream Closed
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:246)
at com.amazonaws.services.s3.internal.RepeatableInputStream.read(RepeatableInputStream.java:167)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
at com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream.read(MD5DigestCalculatingInputStream.java:88)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:151)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.http.util.EntityUtils.toByteArray(EntityUtils.java:136)
at org.apache.http.entity.BufferedHttpEntity.<init>(BufferedHttpEntity.java:63)
at com.amazonaws.http.HttpRequestFactory.newBufferedHttpEntity(HttpRequestFactory.java:242)
... 27 more

Answer:

I did some digging and found one solution. The route works if you convert the file contents to a byte array before passing them to the aws-s3 endpoint, like this:

from("file://src/data?fileName=file.jpg&noop=true&delay=15m")
    .convertBodyTo(byte[].class)
    .setHeader(S3Constants.CONTENT_LENGTH, simple("${in.header.CamelFileLength}"))
    .setHeader(S3Constants.KEY,simple("${in.header.CamelFileNameOnly}"))
    .to("aws-s3://{{awsS3BucketName}}"
                    + "?deleteAfterWrite=false&region=eu-west-1"
                    + "&accessKey={{awsAccessKey}}"
                    + "&secretKey=RAW({{awsAccessKeySecret}})")
    .log("done.");
}

The S3Constants.CONTENT_LENGTH header must also be set to the file length in bytes.

The solution above reads the whole file into memory, so it's not ideal for every situation. However, the code above is also the simplest way I know of using the aws-s3 producer endpoint. I'm still happy to hear about other (and better) solutions.

Question:

I have an Spring App(running on AWS Lambda) which gets a file and uploads it on AWS S3.

The Spring Controller sends a MultipartFile to my method, where it's uploaded to AWS S3, using Amazon API Gateway.

public static void uploadFile(MultipartFile mpFile, String fileName) throws IOException{

    String dirPath = System.getProperty("java.io.tmpdir", "/tmp");
    File file = new File(dirPath  + "/" + fileName);

    OutputStream ops = new FileOutputStream(file);
    ops.write(mpFile.getBytes());

    s3client.putObject("fakebucketname", fileName, file);

}

I try to upload a PDF file which has 2 pages with text. After the upload, the PDF file (on AWS S3) has 2 blank pages.

Why is the uploaded PDF file blank?

I also tried other files (like a PNG image), and when I open the uploaded image it is corrupted.

The only thing that worked was when I uploaded a text file.


Answer:

Turns out that this will do the trick. It's all about encoding, thanks to the help of @KunLun. In my scenario, file is the multipart file (pdf) that is passed to AWS via a POST to the url.

Base64.Encoder enc = Base64.getEncoder();
byte[] encbytes = enc.encode(file.getBytes());
for (int i = 0; i < encbytes.length; i++)
{
    System.out.printf("%c", (char) encbytes[i]);
    if (i != 0 && i % 4 == 0)
        System.out.print(' ');
}
Base64.Decoder dec = Base64.getDecoder();
byte[] barray2 = dec.decode(encbytes);
InputStream fis = new ByteArrayInputStream(barray2);

// "data" here is the ObjectMetadata for the upload (content length/type)
PutObjectResult objectResult = s3client.putObject("xxx", file.getOriginalFilename(), fis, data);

Question:

I have used multipart uploading for uploading an image to Amazon S3, as described in the documentation.

But the uploaded files can then be accessed directly without an access key or anything; I tested this using the remote URL returned in the response for a particular file. Is there any way to restrict access to an uploaded file? Also, is there a way to change the upload URL here, if I want to add a folder and then the file?


Answer:

Yes, you can create a folder by using the method below.

AmazonS3 amazons3Client = new AmazonS3Client(new ProfileCredentialsProvider());

public void createFolder(String bucketName, String folderName)
{
    try
    {
        // a zero-byte object whose key ends with "/" shows up as a folder in the S3 console
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentLength(0);
        InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
        amazons3Client.putObject(new PutObjectRequest(bucketName, folderName + "/", emptyContent, objectMetaData));
    }
    catch (Exception exception)
    {
        LOGGER.error("Exception In Create Folder", exception);
    }
}

For access rights you can use a bucket policy, which applies to your specific bucket. Please go through the link below; for example, you can allow only a specific IP to access the bucket. http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

Question:

Suppose we have an app that has very limited memory but has to upload/download huge files to AWS s3.

Question 1 : what is the correct api to use when we need to upload/download directly to FS while having very limited memory? (like 200Mb)

One of the options to upload object to s3 is this

TransferManager.upload(String bucketName, String key, File file)

Question 2 : will TransferManager.upload() put the entire file into memory, or is it smart enough to stream the content to S3 without filling up the memory?

Question 3 : do we have any api that can do zero copy networking?

Question 4 : AWS offers an option to move files from S3 to slower storage if you define a lifecycle policy. If a file is moved to infrequent-access storage, do we query it the same way? (My assumption is that S3 will block me for hours before my download starts.) The important thing is whether this process is hidden from me as a client, or whether I need to figure out where my file is now and use a specific API to get it.

Pardon me for the many questions; I searched for answers for a while and found only bits and pieces but no explicit answers.


Answer:

Q1, Q2: Look into multipart S3 upload; that is what you are looking for.
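
As a hedged sketch (AWS SDK for Java v1, bucket/key/paths are placeholders), TransferManager uses multipart upload under the hood for large files and reads the parts from disk as they are sent, so the whole file does not have to fit in memory:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class LowMemoryUpload {
    public static void main(String[] args) throws Exception {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(AmazonS3ClientBuilder.defaultClient())
                .build();

        // placeholder bucket, key and file path
        Upload upload = tm.upload("my-bucket", "backups/huge-file.bin", new File("/data/huge-file.bin"));
        upload.waitForCompletion(); // blocks; parts are read from disk as they are sent
        tm.shutdownNow();
    }
}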

Q3: Nope, S3 supports only standard and multi-part upload APIs for now.

Q4: No, it works the other way. To you it will look like the file is stored normally, and you will have access to it as soon as you have uploaded it (within seconds), but the difference is in the price. It is much cheaper to store data, but more expensive to retrieve every MB.

Good luck

Question:

I am using the AWS v2 java SDK. I am trying to upload an object to my bucket. I want the object to be publicly readable.

What I have done is as follows:

     PresignedPutObjectRequest presignedRequest =S3Presigner.create()
                .presignPutObject(z -> z.signatureDuration(Duration.ofMinutes(10))
                    .putObjectRequest(
                                por -> por.bucket("MYBUCKET")
                                    .key("MYOBJECTKEY")
                                    .acl(ObjectCannedACL.PUBLIC_READ)));
   URL url = presignedRequest.url(); 

I then use this URL to upload a file. I am getting a 403 response when I do this.

If I remove the line adding the acl, then the upload works.

If I change the ACL to PRIVATE, that too fails.

From my aws console, I can make an object public, so I dont think that it is a bucket policy problem.

How can this be fixed? I want to upload an object to a bucket and make it publicly readable

After some more debugging from my end, it looks to be a problem with the way the URL is being built. This is part of the URL when I dont add an ACL

&X-Amz-SignedHeaders=host&X-Amz-Expires=120  

And this is the similar part of the URL with an ACL

&X-Amz-SignedHeaders=host%3Bx-amz-acl&X-Amz-Expires=120  

It looks like the URL building gets borked


Answer:

Since the ACL was included when the URL was signed (note the x-amz-acl entry in the signed headers), the upload request itself needs to send an x-amz-acl: public-read header with that exact value to satisfy the signature.

See: PutObjectAcl - Amazon Simple Storage Service
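
For illustration, a hedged snippet of the actual upload, reusing the presignedRequest variable from the question together with standard java.net/java.nio classes; the file path is a placeholder:

HttpURLConnection conn = (HttpURLConnection) presignedRequest.url().openConnection();
conn.setDoOutput(true);
conn.setRequestMethod("PUT");
// this header was part of the signed headers (SignedHeaders=host;x-amz-acl),
// so it must be sent with exactly this value or the signature check fails
conn.setRequestProperty("x-amz-acl", "public-read");
try (OutputStream os = conn.getOutputStream()) {
    os.write(Files.readAllBytes(Paths.get("/path/to/object"))); // placeholder path
}
System.out.println(conn.getResponseCode());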

Question:

I am attempting to upload a csv file from a local directory to AWS S3 using Apache Camel.

Referencing the documentation found here (https://camel.apache.org/staging/components/latest/aws-s3-component.html), I tried to create a simple route like so (I have of course removed keys and other identifying information and replaced them with [FAKE_INFO]):

from("file:fileName=${in.headers[fileName]}")
  .to("aws-s3://[BUCKET]?accessKey=[ACCESS_KEY]&secretKey=RAW([SECRET_KEY])&region=US_EAST_2&prefix=TEST.csv");

This results in the following error:

error: java.lang.IllegalArgumentException: AWS S3 Key header missing apache camel

After searching a bit online I removed the prefix that is passed and instead inserted a .setHeader to route like so:

from("file:fileName=${in.headers[fileName]}")
  .setHeader(S3Constants.KEY, simple("TEST.csv"))
  .to("aws-s3://[BUCKET]?accessKey=[ACCESS_KEY]&secretKey=RAW([SECRET_KEY])&region=US_EAST_2");

This works fine, as long as I am willing to hard code everything after the setHeader. However, for my particular use case I need to pass items from the exchange headers to feed the keys, bucket name, and fileName (this route is used by multiple files that go to different buckets based on different criteria which is received in the exchange headers). For some reason as soon as use setHeader to set the S3Constants.KEY, I am no longer able to access any of the exchange headers - in fact, I can't even assign the S3Constants.KEY value from an exchange header. As you can see, the fileName in the from section is assigned via an exchange header and I don't run into any issues there, so I know they are being received into the route.

Any thoughts on how I can modify this route so that it will allow me to upload files without the S3Constants and using exchange headers where appropriate?


Answer:

Not sure if I understand you correctly, but it sounds to me that

  • The problem of the question subject is already solved
  • Your only problem is the static destination address you want to have dynamic

To define dynamic destination addresses, there is a "dynamic to"

.toD(...)

You can use for example simple expressions in such a dynamic destination address

.toD("aws-s3://${in.header.bucket}?region=${in.header.region}&...")

See the Camel Docs (section "Dynamic To") for more details.

By the way: you write about "exchange headers". Don't confuse Exchange properties with Message headers!

  • Exchange properties are on the Exchange wrapper only and therefore lost with the Exchange after the Camel route has finished processing.
  • Message headers are on the message itself and therefore they are kept on the message even after routing it to a queue or whatever endpoint. This also implies that headers must be serializable.
  • You must access these two types differently. For example in Simple you get a header from the inbound message with ${in.header.myHeader} while you get an Exchange property with ${exchangeProperty.myProperty}
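
Putting that together for the S3 case, a hedged sketch could look like the following; the bucket and region header names are assumptions, so adapt them to whatever your exchange actually carries:

from("file:{{inputDir}}?noop=true")                                  // {{inputDir}} is a placeholder property
    .setHeader(S3Constants.KEY, simple("${in.header.CamelFileNameOnly}"))
    .toD("aws-s3://${in.header.bucket}"
            + "?accessKey={{awsAccessKey}}"
            + "&secretKey=RAW({{awsSecretKey}})"
            + "&region=${in.header.region}");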

Question:

I'm trying to upload an object to an S3 bucket through the Java API. However, no matter what I do it throws an Access Denied exception.

private static void serverSideEncryption() throws NoSuchAlgorithmException {
    AmazonS3 S3_CLIENT = AmazonS3ClientBuilder.standard()
                                              .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                                              .withRegion(Regions.US_EAST_1)
                                              .build();
    PutObjectRequest putRequest = new PutObjectRequest(BUCKET_NAME, "dfsdf.ss",
            new File("/Users/fsdfs/Desktop/test.jpeg"));
    S3_CLIENT.putObject(putRequest);
    System.out.println("Object uploaded");
}

when I run this I get

Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: AD1105A291B609D8; S3 Extended Request ID: Ls5wbbW2Yd43p75MJGSOjex0KvmgPiqNBupxpCcEvdMRkK4iptNPNCEwyOqokA=), S3 Extended Request ID: Ls5wbbW2Yd43p75MJGSOje70iqNBupxpCcEvdMRkK4iptNPNCEwyOqokA=
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1632)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1755)
    at com.xxx.aws.s3.S3Service.serverSideEncryption(S3Service.java:72)
    at com.xxx.aws.s3.S3Service.main(S3Service.java:58)

However if I do the same thing from AWS CLI using

aws s3api put-object --bucket zzz-yyy-xxx --key test/testfdf --server-side-encryption AES256

It works perfectly fine.

I also tried the code below

private static void serverSideEncryption() throws NoSuchAlgorithmException {
    KEY_GENERATOR = KeyGenerator.getInstance("AES");
    KEY_GENERATOR.init(256, new SecureRandom());
    SSE_KEY = new SSECustomerKey(KEY_GENERATOR.generateKey());
    AmazonS3 S3_CLIENT = AmazonS3ClientBuilder.standard()
                                              .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                                              .withRegion(Regions.US_EAST_1)
                                              .build();
    PutObjectRequest putRequest = new PutObjectRequest(BUCKET_NAME, "dfsdf.ss",
            new File("/Users/xx-xx/Desktop/dfdf.jpeg")).withSSECustomerKey(SSE_KEY);
    S3_CLIENT.putObject(putRequest);
    System.out.println("Object uploaded");
}

My bucket policy is set to AES encryption


Answer:

Check that your bucket name and access keys are correct; a 403 means the request was denied. This could be caused by the bucket name or by your access key.

Check both to be sure.

Edit: Following the AWS S3 SSE documentation at https://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html

In order to upload an object with server-side encryption, you need to use an ObjectMetadata object to specify the server-side encryption settings; a reference to this object can then be passed as a parameter of the PutObjectRequest.

Modifications attributed to OP https://stackoverflow.com/users/1629109/damien-amen
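
A hedged sketch of that approach, reusing the bucket, key and file path from the question and requesting SSE-S3 (AES256) via ObjectMetadata:

ObjectMetadata metadata = new ObjectMetadata();
// adds the x-amz-server-side-encryption: AES256 header to the PUT request
metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

PutObjectRequest putRequest = new PutObjectRequest(BUCKET_NAME, "dfsdf.ss",
        new File("/Users/fsdfs/Desktop/test.jpeg"))
        .withMetadata(metadata);
S3_CLIENT.putObject(putRequest);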

Question:

I am using the Java Amazon S3 SDK to upload files I was wondering - when using the transferManager to upload a directory - is there a better way to set the Acl to be public-read

Here is my code

public boolean uploadDirectoryToAmazon(String directory, String bucketName, String s3DirectoryKey) {
    boolean result = false;

    try {
        LOGGER.info("Uploading a directory to S3");

        BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretAccessKey); 
        AWSStaticCredentialsProvider awsStaticCredentialsProvider = new AWSStaticCredentialsProvider(credentials);

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(awsStaticCredentialsProvider)
                .withRegion(amazonS3Region)
                .build();

        //PutObjectResult putObjectResult = s3Client.putObject(putObjectRequest);
        //http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
        TransferManager transferManager = TransferManagerBuilder.
                standard().
                withS3Client(s3Client)
                .build();


        ObjectMetadataProvider objectMetadataProvider = new ObjectMetadataProvider() {
            public void provideObjectMetadata(File file, ObjectMetadata metadata) {
                if (BooleanUtils.isTrue(isPublic)) {
                    // ask S3 to apply the public-read canned ACL to each uploaded object
                    metadata.setHeader(Headers.S3_CANNED_ACL, CannedAccessControlList.PublicRead);
                }
            }
        };

        File dirToUpload = new File(directory);
        MultipleFileUpload uploadDirectoryResult = transferManager.uploadDirectory(bucketName, s3DirectoryKey, dirToUpload, false, objectMetadataProvider);

        //Call method to log the progress
        logProgress(uploadDirectoryResult);            

        result = true;  
        transferManager.shutdownNow();
     } catch (AmazonServiceException ase) {
        LOGGER.error("Caught an AmazonServiceException, which means your request made it to Amazon S3, but was rejected with an error response for some reason.");
        LOGGER.error("Error Message:    " + ase.getMessage());
        LOGGER.error("HTTP Status Code: " + ase.getStatusCode());
        LOGGER.error("AWS Error Code:   " + ase.getErrorCode());
        LOGGER.error("Error Type:       " + ase.getErrorType());
        LOGGER.error("Request ID:       " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        LOGGER.error("Caught an AmazonClientException, which means the client encountered an internal error while trying to communicate with S3, such as not being able to access the network.");
        LOGGER.error("Error Message: " + ace.getMessage());
    }
    return result;
}

Other file upload options have easy methods to specify the ACL - just wondering if there is an easier way for the uploadDirectory command

Thanks Damien


Answer:

You can use an alternative overload of the TransferManager class:

uploadDirectory(String bucketName, String virtualDirectoryKeyPrefix, File directory, boolean includeSubdirectories, ObjectMetadataProvider metadataProvider, ObjectTaggingProvider taggingProvider, ObjectCannedAclProvider cannedAclProvider)

ObjectCannedAclProvider cannedAclProvider = new ObjectCannedAclProvider() {
    public CannedAccessControlList provideObjectCannedAcl(File file) {
        return CannedAccessControlList.PublicRead;
    }
};

MultipleFileUpload multiUpload = transferManager.uploadDirectory(bucketName, keyPrefix,
        directory, includeSubdirectories, null, null, cannedAclProvider);

The required Gradle dependency is: compile("com.amazonaws:aws-java-sdk:1.11.519")

Note: A similar question was asked in the AWS sdk-java issue tracker: https://github.com/aws/aws-sdk-java/issues/1938

Question:

In given code:

BasicAWSCredentials awsCred = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3Client s3Client = new AmazonS3Client(awsCred);
TransferManager tm = new TransferManager(s3Client);
Upload upload = tm.upload( bucket,key,new File(file));

How can we add KMS SSEAlgorithm and encryption Key while uploading to s3?


Answer:

From your code sample, I see you are using multipart upload. Start the multipart upload using AmazonS3#initiateMultipartUpload(InitiateMultipartUploadRequest). When you create the InitiateMultipartUploadRequest object, you can set various encryption options, such as setSSEAwsKeyManagementParams and setSSECustomerKey.
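
If you would rather keep the TransferManager call from the question, a hedged alternative is to build a PutObjectRequest carrying SSE-KMS parameters and hand it to upload(); the KMS key id below is a placeholder:

PutObjectRequest request = new PutObjectRequest(bucket, key, new File(file))
        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams("your-kms-key-id")); // placeholder key id
Upload upload = tm.upload(request);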

Question:

Will someone please provide an example for uploading a bunch of photos to S3 using uploadDirectory? Say I have 300 photos in a directory named "special_photos" on my android device. And I want to upload all of these photos to Amazon S3. I figure uploadDirectory may be the best method for doing this. But being new to Amazon cloud, I don’t know how I might do it. All I have gleaned so far is that the method executes asynchronously and so can be called from the main thread. I keep finding php codes on the internet. But I don’t use PHP. Does anyone have a complete working example they don’t mind sharing with the community? I am using the SDK via gradle on Android Studio. Also, is there some kind of callback for knowing when all the photos have been uploaded? Say for instance I want to delete the photos and the directory once they have been uploaded.


Answer:

There is no uploadDirectory but there is Multipart Upload. This will do your large data upload to S3. As stated HERE, the Multipart Upload Docs say:

Using the list multipart uploads operation, you can obtain a list of multipart uploads in progress. An in-progress multipart upload is an upload that you have initiated, but have not yet completed or aborted. Each request returns at most 1000 multipart uploads. If there are more than 1000 multipart uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.

To address the callback: there is a completion event once all of the TransferUtility items are uploaded. This open source project adds listeners to the upload function. I would recommend breaking your calls up into batches of 30 at a time and then deleting the corresponding photos, in case there is a failure with the upload. There is a success and a fail return, so obviously only delete in case of success.

HERE is the AWS documentation for Android Multipart Uploads

HERE is an article that will help migrate & understand the differences between TransferManager and TransferUtility

HERE is a good article on getting started with the Android TransferManager

And HERE is an open source demo - under the S3_TransferManager

Hope this helps!

Update:

The below code is all taken from @awslabs references

Create client:

public static AmazonS3Client getS3Client(Context context) {
    if (sS3Client == null) {
        sS3Client = new AmazonS3Client(getCredProvider(context.getApplicationContext()));
    }
    return sS3Client;
}

Create TransferUtility:

public static TransferUtility getTransferUtility(Context context) {
    if (sTransferUtility == null) {
        sTransferUtility = new TransferUtility(getS3Client(context.getApplicationContext()),
                context.getApplicationContext());
    }
    return sTransferUtility;
}

Use TransferUtility to get all upload transfers:

observers = transferUtility.getTransfersWithType(TransferType.UPLOAD);

Add your records: - you could iterate over the file names in your directory

HashMap<String, Object> map = new HashMap<String, Object>();
Util.fillMap(map, observer, false);
transferRecordMaps.add(map);

This starts everything:

private void beginUpload(String filePath) {
    if (filePath == null) {
        Toast.makeText(this, "Could not find the filepath of the selected file",
                Toast.LENGTH_LONG).show();
        return;
    }
    File file = new File(filePath);
    TransferObserver observer = transferUtility.upload(Constants.BUCKET_NAME, file.getName(),
            file);
    observers.add(observer);
    HashMap<String, Object> map = new HashMap<String, Object>();
    Util.fillMap(map, observer, false);
    transferRecordMaps.add(map);
    observer.setTransferListener(new UploadListener());
    simpleAdapter.notifyDataSetChanged();
}

This is your completion:

private class GetFileListTask extends AsyncTask<Void, Void, Void> {
    // The list of objects we find in the S3 bucket
    private List<S3ObjectSummary> s3ObjList;
    // A dialog to let the user know we are retrieving the files
    private ProgressDialog dialog;

    @Override
    protected void onPreExecute() {
        dialog = ProgressDialog.show(DownloadSelectionActivity.this,
                getString(R.string.refreshing),
                getString(R.string.please_wait));
    }

    @Override
    protected Void doInBackground(Void... inputs) {
        // Queries files in the bucket from S3.
        s3ObjList = s3.listObjects(Constants.BUCKET_NAME).getObjectSummaries();
        transferRecordMaps.clear();
        for (S3ObjectSummary summary : s3ObjList) {
            HashMap<String, Object> map = new HashMap<String, Object>();
            map.put("key", summary.getKey());
            transferRecordMaps.add(map);
        }
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        dialog.dismiss();
        simpleAdapter.notifyDataSetChanged();
    }
}

Question:

I need to generate an AWS Signature v4 signature for uploading to s3, like this: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html.

I tried a lot of examples, but have the error

<Error>
    <Code>InvalidAccessKeyId</Code>
    <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
    <AWSAccessKeyId>ASIA2AKMADUN</AWSAccessKeyId>
    <RequestId>E68a1B73B15</RequestId>
    <HostId>fIG19S=</HostId>
</Error>

I tried to build signature, using minio-java, like this https://github.com/minio/minio-java/blob/master/examples/PresignedPostPolicy.java

Also, I tried this code snippet https://gist.github.com/phstudy/3523576726d74a0410f8

P.S. My real target is uploading files from clients with a file-size limit, like here or here. I can create a presigned S3 upload link, but there is no way to set a max size.


Answer:

So, the solution https://github.com/minio/minio-java/blob/master/examples/PresignedPostPolicy.java did not work because of the absent x-amz-security-token parameter.

We need to use the session token (which we get from Amazon) when creating the POST policy and when publishing the form - https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html

Sample code for minio:

conditions.add(new String[]{"eq", "$x-amz-security-token", sessionToken});
formData.put("x-amz-security-token", sessionToken);

P.S. x-amz-security-token is needed because temporary security credentials are being used - https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

Question:

I need to use the library dynamodb-geo, but I can't find it in the Maven repository. During development on my local machine I added this library to the local Maven repository as a 3rd-party JAR:

mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> \
    -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging>

and

<dependency>
...
</dependency>

Now I need to deploy the project to a remote server. How can I create a remote Maven repository with this geo library?

P.S. Maybe you know a dependency that already includes this library?


Answer:

This workaround really helped me

<dependency>
    <groupId>com.mylib</groupId>
    <artifactId>mylib-core</artifactId>
    <version>0.0.1</version>
</dependency>

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-install-plugin</artifactId>
    <version>2.5.2</version>
    <executions>
        <execution>
            <id>install-external</id>
            <phase>clean</phase>
            <configuration>
                <file>${basedir}/lib/mylib-core-0.0.1.jar</file>
                <repositoryLayout>default</repositoryLayout>
                <groupId>com.mylib</groupId>
                <artifactId>mylib-core</artifactId>
                <version>0.0.1</version>
                <packaging>jar</packaging>
                <generatePom>true</generatePom>
            </configuration>
            <goals>
                <goal>install-file</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Question:

I am new to AWS. I used the AWS CLI to configure my AWS credentials locally, since I can't have an IAM role attached to my laptop, and I can see the credentials are configured properly through the Eclipse IDE's AWS Toolkit plugin. I referred to this answer - AWS S3 upload without access and secret key in Java.

Can someone help me understand how to set in code which region the S3 bucket lies in? Or does it fetch the region set during the aws configure command? I get an error when I try to use the s3client.setRegion method.

I am not able to test this code locally; it throws the following error -

com.amazonaws.SdkClientException: Unable to load credentials from service endpoint

Following is my code to upload file to AWS S3 -

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
                      .withCredentials(new InstanceProfileCredentialsProvider(false))
                      .build();
//s3client.setRegion(com.amazonaws.regions.Region.getRegion(Regions.EU_CENTRAL_1));
PutObjectResult result = s3client.putObject(new PutObjectRequest(BUCKET_NAME, BASE_PATH + localFile.getName(), localFile));

Complete error log -

The legacy profile format requires the 'profile ' prefix before the profile name. The latest code does not require such prefix, and will consider it as part of the profile name. Please remove the prefix if you are seeing this warning.
com.amazonaws.SdkClientException: Unable to load credentials from service endpoint
    at com.amazonaws.auth.EC2CredentialsFetcher.handleError(EC2CredentialsFetcher.java:180)
    at com.amazonaws.auth.EC2CredentialsFetcher.fetchCredentials(EC2CredentialsFetcher.java:159)
    at com.amazonaws.auth.EC2CredentialsFetcher.getCredentials(EC2CredentialsFetcher.java:82)
    at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:141)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1118)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:758)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:722)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4227)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4174)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1722)
    at com.atrium.crud.service.PedestrianServiceImpl.savePedestrianSurvey(PedestrianServiceImpl.java:73)
    at com.atrium.crud.controller.PedestrianController.savePedestrianSurvey(PedestrianController.java:69)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:116)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:105)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:474)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:349)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:783)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1434)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
    at sun.net.www.http.HttpClient.New(HttpClient.java:339)
    at sun.net.www.http.HttpClient.New(HttpClient.java:357)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
    at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:47)
    at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:106)
    at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
    at com.amazonaws.auth.InstanceProfileCredentialsProvider$InstanceMetadataCredentialsEndpointProvider.getCredentialsEndpoint(InstanceProfileCredentialsProvider.java:156)
    at com.amazonaws.auth.EC2CredentialsFetcher.fetchCredentials(EC2CredentialsFetcher.java:121)
    ... 69 more

Answer:

You can use:

AmazonS3Client amazons3 = new AmazonS3Client(new ProfileCredentialsProvider());

ProfileCredentialsProvider will find the hidden .aws folder and the credentials file in your home directory.
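
A minimal sketch combining that with an explicit region (the region below is only an assumption, taken from the commented-out line in the question):

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())   // reads the credentials file under ~/.aws
        .withRegion(Regions.EU_CENTRAL_1)                     // region of the target bucket (assumption)
        .build();

PutObjectResult result = s3client.putObject(
        new PutObjectRequest(BUCKET_NAME, BASE_PATH + localFile.getName(), localFile));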

Question:

Quick help needed here: I want to upload my images to my Amazon S3 bucket, and I'm able to achieve that with the following code.

s3Client = new AmazonS3Client(new BasicAWSCredentials(getString(R.string.s3_access_key), getString(R.string.s3_secret)));

// params contains the file path
PutObjectRequest por = new PutObjectRequest(getString(R.string.s3_bucket), params[0].getName(), params[0]);
s3Client.putObject(por);

ResponseHeaderOverrides override = new ResponseHeaderOverrides();
override.setContentType("image/jpeg");
urlRequest = new GeneratePresignedUrlRequest(getString(R.string.s3_bucket), params[0].getName());
urlRequest.setExpiration(new Date(System.currentTimeMillis() + 3600000)); // an hour's worth of milliseconds added to the current time
urlRequest.setResponseHeaders(override);

There is a folder in my bucket, and I'm not able to upload images to that folder.

What I tried for uploading images to the folder inside the bucket is this:

PutObjectRequest por = new PutObjectRequest( getString(R.string.s3_bucket), params[0].getName(), params[0]).withKey("testmorya/");
                    s3Client.putObject(por);

BUCKET NAME : morya FOLDER NAME : testmorya

Help appreciated


Answer:

It is all about how you structure your key. To put "filea" into "folderb", just name the key of the object as "folderb/filea".
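
Applied to the code in the question, a hedged sketch would prefix the key with the folder name instead of calling withKey with only the folder:

PutObjectRequest por = new PutObjectRequest(
        getString(R.string.s3_bucket),          // bucket: morya
        "testmorya/" + params[0].getName(),     // key: folder + file name
        params[0]);
s3Client.putObject(por);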

Question:

I am currently uploading videos from my Android app, which causes crashes in the application. I am trying to figure out how to make it efficient on both ends, the app and the server side. At the end of the day the video does upload, but either the app or the server crashes, depending on which one runs out of memory first.

Stack - Java, retrofit, Node.js, knox, heroku, amazonS3

For client side android: (retrofit, java):

rest File:

@Multipart
        @POST("/addMedia")
        public void addMedia(
                @Part("name") String name,
                @Part("categ") String category,
                @Part("desc") String desc,
                @Part("creatorId") String creatorId,
                @Part("isItAPicture") String isItAPicture, //if true it is a picture
                @Part("mediaFile") TypedFile mediaFile,
                Callback<UserResponse> callback);

Create Class File:

ApiManager.getAsyncApi().addMedia(title, categ, descr, creatorId, String.valueOf(isPictureNotvideo), media, new Callback<UserResponse>() {
    @Override
    public void success(UserResponse userResponse, Response response) {
    }

    @Override
    public void failure(RetrofitError error) {
        throw error;
    }
});

The server side (Node.js, heroku, AmazonS3, knox module)

var s3 = knox.createClient({
    key: config.amazonS3.key,
    secret: config.amazonS3.secret,
    bucket:config.amazonS3.bucketMedia
});






function setupAndCreateMedia ( cb) {
    if(req.files && req.files.mediaFile) {
        console.log("Received file:\n" + JSON.stringify(req.files));
        //add p for pics will do v for vids
        videoID = "v" + path.basename(req.files.mediaFile.path);
        tmp_path = req.files.mediaFile.path;
        targetPathSmall = './public/img/media/' + videoID;
        videoConvert = req.files.mediaFile.name;
        var video = req.files.mediaFile;
        var s3Headers = {
            'Content-Type': video.type,
            'x-amz-acl': 'public-read'
        };
        //console.log('stdout:', stdout, targetPathSmall)
        s3.putStream(targetPathSmall, videoID, s3Headers, function(err, s3response){
        //handle, respond
            if(err) { 
                console.log(err);
            } else {
                console.log("saved to S3");
                //console.log(s3response);
                cb(null, videoID);
            }
        });
    } else {
        videoID = "";
        cb(null, videoID);
    }
}

This is the crash message on android:

Caused by: java.lang.OutOfMemoryError: Failed to allocate a 48298892 byte allocation with 16777216 free bytes and 44MB until OOM

Answer:

The problem can be caused by logging. Retrofit's logger reads your video binary into memory, and that is where the OOM exception is raised.

To prevent it, disable logs for your RestAdapter using the method:

setLogLevel(LogLevel.NONE)

Source
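
A hedged Retrofit 1.x sketch of building the adapter with logging disabled (the endpoint and interface name are placeholders):

RestAdapter restAdapter = new RestAdapter.Builder()
        .setEndpoint("https://api.example.com")          // placeholder base URL
        .setLogLevel(RestAdapter.LogLevel.NONE)          // avoids buffering the video body for the log
        .build();
Api api = restAdapter.create(Api.class);                 // hypothetical interface declaring addMedia(...)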

Question:

Does anyone know how to see the progress (in percent) of an upload in a multipart upload in Amazon S3?


Answer:

I would do it like this:

MultipleFileUpload transfer = transferManager.uploadDirectory(mybucket, null, new File(localSourceDataFilesPath), false);

// blocks the thread until the upload is completed
showTransferProgress(transfer);

Then in showTransferProgress I would block until the upload completes, using a sleep, and do the math every X seconds:

private void showTransferProgress(MultipleFileUpload xfer) {
        while (!xfer.isDone()) {
            // some logic to wait so you don't do the math every second like a Thread.sleep

            TransferProgress progress = xfer.getProgress();
            long bytesTransferred = progress.getBytesTransferred();
            long total = progress.getTotalBytesToTransfer();
            Double percentDone = progress.getPercentTransferred();
            LOG.debug("S3 xml upload progress...{}%", percentDone.intValue());
            LOG.debug("{} bytes transferred to S3 out of {}", bytesTransferred, total);
        }

        // print the final state of the transfer.
        TransferState xferState = xfer.getState();
        LOG.debug("Final transfer state: " + xferState);
    }

this line is what you are looking for:

Double percentDone = progress.getPercentTransferred();

Question:

I am trying to upload a CSV file to an S3 bucket. The code runs successfully, but when I check the bucket, the uploaded file has my access key as its name. I have to rename the file manually to check its content.

Is there a way I can set the file name programmatically, or does the file name simply not carry over automatically while uploading?

Please check the code below:

public class AwsFileUploader {

private static String bucketName = "mybucket";
private static String accessKey = "my-access-key";
private static String secretKey = "my-secret-key";
private static String uploadFileName = "CompressionScore/compression_score_09-04-2015.csv";

public static void main(String[] args) throws IOException {

    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

    AmazonS3 s3client = new AmazonS3Client(credentials);

    try {
        System.out.println("Uploading a new object to S3 from a file\n");
        File file = new File(uploadFileName);
        System.out.println(file.getName());   //prints - compression_score_09-04-2015.csv
        s3client.putObject(new PutObjectRequest(bucketName, accessKey, file));
        System.out.println("File successfully uploaded");
    } catch (AmazonServiceException ase) {
        ase.printStackTrace();
    } catch (AmazonClientException ace) {
        ace.printStackTrace();
    }
}

}

Ideally the file in the bucket should have the name compression_score_09-04-2015.csv, but instead it is AKAJI3EBMILBCWENUSA. Could somebody advise what should be done?


Answer:

In the PutObjectRequest constructor, the key parameter is actually the name under which the object is stored, not the access key.

From SDK documentation:

  • bucketName - The name of an existing bucket to which the new object will be uploaded.
  • key - The key under which to store the new object.
  • input - The stream of data to upload to Amazon S3.
  • metadata - The object metadata. At minimum this specifies the content length for the stream of data being uploaded.

Source: PutObjectRequest constructor detail

You don't have to specify accessKey here because you already instantiate the AmazonS3Client object with the credentials which include the access and secret keys.
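
In other words, a hedged fix for the call in the question is to pass the desired object name as the key:

s3client.putObject(new PutObjectRequest(bucketName, file.getName(), file));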

Question:

I have a pretty simple function that uploads a PDF file to an AWS S3 (https://codedestine.com/aws-s3-putobject-java/) using AWS Lambda with Amazon API Gateway.

I try to upload a PDF file which has 2 pages of text. After the upload, the PDF file (on AWS S3) has 2 blank pages.

This is the method I use to upload the PDF file on AWS S3.

public static void uploadFile2(MultipartFile mpFile, String fileName) throws IOException{

    String dirPath = System.getProperty("java.io.tmpdir", "/tmp");
    File file = new File(dirPath  + "/" + fileName);

    OutputStream ops = new FileOutputStream(file);
    ops.write(mpFile.getBytes());

    s3client.putObject("fakebucketname", fileName, file);

}

Why is the uploaded PDF file blank?


Answer:

Turns out that this does the trick. It's all about encoding, thanks to the help of @KunLun. In my scenario, file is the multipart file (PDF) that is passed to AWS via a POST to the URL.

            Base64.Encoder enc = Base64.getEncoder();
            byte[] encbytes = enc.encode(file.getBytes());
            for (int i = 0; i < encbytes.length; i++)
            {
                System.out.printf("%c", (char) encbytes[i]);
                if (i != 0 && i % 4 == 0)
                    System.out.print(' ');
            }
            Base64.Decoder dec = Base64.getDecoder();
            byte[] barray2 = dec.decode(encbytes);
            InputStream fis = new ByteArrayInputStream(barray2);

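            // note: 'data' in the call below is not defined in this snippet; it is presumably
            // an ObjectMetadata (with content type/length set) passed as the fourth argument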
            PutObjectResult objectResult = s3client.putObject("xxx", 
            file.getOriginalFilename(), fis, data);


Another very important piece is that API Gateway must be configured to support binary data types: in the AWS Console go to API Gateway --> Settings --> Binary Media Types and add the content types you upload (the original answer showed this in a screenshot).

Question:

Given: AWS S3 Bucket named YOUR_BACKET_NAME with public access READ/WRITE

Need to upload a file to a public S3 bucket.

Using only basic most popular Java Libs like Apache HTTP Client lib.

Should not use AWS SDK.


Answer:

@Test
public void upload_file_to_public_s3_with_httpClient() throws Exception {

    String fileName = "test.txt";
    String bucketName = "YOUR_BACKET_NAME";
    String s3url = "https://" + bucketName + ".s3.amazonaws.com/" + fileName;
    String body = "BODY OF THE FILE";

    HttpEntity entity = MultipartEntityBuilder
            .create()
            .setMode(HttpMultipartMode.BROWSER_COMPATIBLE)
            .addBinaryBody("file", body.getBytes())
            .build();

    HttpResponse returnResponse = Request.Put(s3url).body(entity).execute().returnResponse();
    StatusLine statusLine = returnResponse.getStatusLine();
    String responseStr = EntityUtils.toString(returnResponse.getEntity());
    log.debug("response from S3 : line: {}\n body {}\n ", statusLine, responseStr);
}

Question:

I am creating a simple application where I want to upload a file to my AWS S3 bucket. Here is my code:

import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.fasterxml.jackson.*;

public class UploadFileInBucket {

    public static void main(String[] args) throws IOException {
        String clientRegion = "<myRegion>";
        String bucketName = "<myBucketName>";
        String stringObjKeyName = "testobject";
        String fileObjKeyName = "testfileobject";
        String fileName = "D:\\Attachments\\LICENSE";

        try {

            BasicAWSCredentials awsCreds = new BasicAWSCredentials("<myAccessKey>", "<mySecretKey>");
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                    .build();

            // Upload a text string as a new object.
            s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

            // Upload a file as a new object with ContentType and title specified.
            PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(fileName));
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentType("plain/text");
            metadata.addUserMetadata("x-amz-meta-title", "someTitle");
            request.setMetadata(metadata);
            s3Client.putObject(request);
        }
        catch(AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process 
            // it, so it returned an error response.
            e.printStackTrace();
        }
        catch(SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}

I am unable to upload a file and getting an error as:

Exception in thread "main" java.lang.NoSuchFieldError: 

    ALLOW_FINAL_FIELDS_AS_MUTATORS
        at com.amazonaws.partitions.PartitionsLoader.<clinit>(PartitionsLoader.java:52)
        at com.amazonaws.regions.RegionMetadataFactory.create(RegionMetadataFactory.java:30)
        at com.amazonaws.regions.RegionUtils.initialize(RegionUtils.java:64)
        at com.amazonaws.regions.RegionUtils.getRegionMetadata(RegionUtils.java:52)
        at com.amazonaws.regions.RegionUtils.getRegion(RegionUtils.java:105)
        at com.amazonaws.client.builder.AwsClientBuilder.getRegionObject(AwsClientBuilder.java:249)
        at com.amazonaws.client.builder.AwsClientBuilder.withRegion(AwsClientBuilder.java:238)
        at UploadFileInBucket.main(UploadFileInBucket.java:28)

I have added required AWS bucket credentials, permissions and dependencies to execute this code.

What changes should I make in the code to get my file uploaded to the desired bucket?


Answer:

It looks as though you either have the wrong version of the Jackson libraries or are somehow linking with multiple versions of them.

The AWS SDK for Java distribution contains a third-party/lib directory with the correct versions of all the libraries that version of the SDK should be built with. Depending on which features of the SDK you use you may not need all of them, but those are the specific third-party libraries you should be using.
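
One quick way to see which Jackson build is actually loaded at runtime (and thus whether an old copy is shadowing the one the SDK needs) is to print its code source. This is only a diagnostic sketch; the implementation version may be null depending on the jar's manifest:

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonVersionCheck {
    public static void main(String[] args) {
        // the jar/classpath location ObjectMapper was loaded from
        System.out.println(ObjectMapper.class.getProtectionDomain()
                .getCodeSource().getLocation());
        // the declared version of jackson-databind, if present in the manifest
        System.out.println(ObjectMapper.class.getPackage().getImplementationVersion());
    }
}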

Question:

I am saving files to my S3 bucket, but I noticed that to do this I use a FileOutputStream like so:

private UploadedFile file; // This is from PrimeFaces, the file that the client wishes to upload
File uploadedFile = new File(file.getFileName()); // Leaving the file like this creates the file in my IDE folder AFTER executing the next two lines, that's why I thought the next lines were an error.

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

So these lines of code are responsible for writing the file to my device. I first thought that was an error, or at least unnecessary, because I don't know much about file uploading to Amazon, so I removed these two lines because I noticed my method for uploading just needed the file and the file name, like so:

businessDelegatorView.uploadPublicRead("mybucketname", fileName, fileToUpload);

So I thought this wasn't necessary and was only duplicating the files:

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

But I noticed that the upload doesn't work if I remove them, because it throws a FileNotFoundException. So I searched and found this post from BalusC, and I get that I have to define a path where the files from my clients will be saved for later upload (in this case to the Amazon S3 bucket). But I was wondering if, for example, doing it like this will work when the .WAR is generated:

File uplodadFile = new File("C:/xampp/apache/conf", file.getFileName());

FileOutputStream fileOutput = new FileOutputStream(uploadFile);
fileOutput.write(file.getContents());

I am saving the files there as a test, but I am not sure whether FileOutputStream is the right choice; I don't know another way.

Also, this is what the upload method looks like after the above code has executed; without the FileOutputStream it won't work because the file is not on my device:

AmazonS3 amazonS3 = buildAmazonS3(); 
         try {
             amazonS3.putObject(new PutObjectRequest(bucketName, key, file).withCannedAcl(CannedAccessControlList.PublicRead));

I just want somebody to clear things up a little bit more for me, like what is the best path to put here?

File uplodadFile = new File("C:/xampp/apache/conf", file.getFileName());

Or does it really not matter, and I just have to keep in mind which machine the .WAR will be deployed on? Thanks.


Answer:

I just want somebody to clear things up a little bit more for me, like what is the best path to put here?

When you want to upload a file into a system, keep it as a stream of bytes as long as possible: you receive bytes as input and you want to store those same bytes at the end. Conversions bytes->file->bytes are time consuming, resource consuming and error prone (encoding conversions and files stored on a filesystem may indeed be sources of error).

So I thought this wasn't necessary and was only duplicating the files:

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

You are right: the file content was already uploaded by the client HTTP request, so writing it to disk again looks pointless. But there you don't have a File, you have an UploadedFile (PrimeFaces).

The PutObjectRequest() constructor from the S3 API has several overloads. Currently you use this one:

public PutObjectRequest(String bucketName,
                        String key,
                        File file)

The last parameter is a File. Do you see the mismatch? In the first code that bothers you, you solved the issue (passing a File while your source is an UploadedFile) by writing the content of the UploadedFile into a new File, and that is acceptable if you really need a File. But in fact you don't need a File, because the PutObjectRequest() constructor has another overload that matches your use case better:

public PutObjectRequest(String bucketName,
                        String key,
                        InputStream input,
                        ObjectMetadata metadata)

Constructs a new PutObjectRequest object to upload a stream of data to the specified bucket and key. After constructing the request, users may optionally specify object metadata or a canned ACL as well.

Note that, to avoid hurting performance, providing the content length matters:

Content length for the data stream must be specified in the object metadata parameter; Amazon S3 requires it be passed in before the data is uploaded. Failure to specify a content length will cause the entire contents of the input stream to be buffered locally in memory so that the content length can be calculated, which can result in negative performance problems.

So you could just do this:

UploadedFile file = ...; // uploaded by client
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(file.getSize());
amazonS3.putObject(new PutObjectRequest(bucketName, key, file.getInputStream(), metaData)
        .withCannedAcl(CannedAccessControlList.PublicRead));

Question:

I am trying to upload a file to Amazon S3 bucket through a web page.

Created a "Dynamic Web Project" in Eclipse with apache tomcat server. Create a JSP, Servlet and Java files. I have used aws-java-sdk-1.11.335.jar file. Tried to upload a image file by submit button click, Class not Found Exception is occurred. Can some one help to resolve this issue.

Here is Code for JSP, Servlet and Class Files:

index.jsp:

<form method="post" action="UploadFileServlet" enctype="multipart/form-data">
       <input type="submit" value="Upload" />
</form>

UploadFileServlet.java:

@WebServlet("/UploadFileServlet")
public class UploadFileServlet extends HttpServlet {
private static final long serialVersionUID = 1L;


public UploadFileServlet() {
    super();
}


protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

    UploadFilestoAWSS3 uploadfilestoawss3 = new UploadFilestoAWSS3();
    uploadfilestoawss3.UploadFiles();

    PrintWriter out = response.getWriter();
    out.print("Files are uploaded to Bucket");

}


protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    doGet(request, response);
}
}

UploadFilestoAWSS3.java

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

public class UploadFilestoAWSS3 {

    public void UploadFiles() throws IOException {
        String clientRegion = "region";
        String bucketName = "bucket name";
        String filePath = "C:\\Users\\......\\imagefile.jpg";

        File file = new File(filePath);
        String keyName = file.getName();
        long contentLength = file.length();
        long partSize = 10 * 1024 * 1024; // Set part size to 10 MB. 

        try {
            BasicAWSCredentials awsCreds = new BasicAWSCredentials("Access Key", "Secret Key");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                                .withRegion(clientRegion)
                                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                                .build();

        List<PartETag> partETags = new ArrayList<PartETag>();

        InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
        InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

        long filePosition = 0;
        for (int i = 1; filePosition < contentLength; i++) {
            partSize = Math.min(partSize, (contentLength - filePosition));

            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName)
                    .withKey(keyName)
                    .withUploadId(initResponse.getUploadId())
                    .withPartNumber(i)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(partSize);

            UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
            partETags.add(uploadResult.getPartETag());

            filePosition += partSize;
        }

        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, keyName,
                initResponse.getUploadId(), partETags);
        s3Client.completeMultipartUpload(compRequest);
    }
    catch(AmazonServiceException e) {
        e.printStackTrace();
    }
    catch(SdkClientException e) {
        e.printStackTrace();
    }

}

}

Error is:

Type Exception Report

Message Servlet execution threw an exception

Description The server encountered an unexpected condition that prevented it from fulfilling the request.

Exception
javax.servlet.ServletException: Servlet execution threw an exception
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)


Root Cause
java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
com.shiva.amazonaws.UploadFileServlet.doGet(UploadFileServlet.java:51)
com.shiva.amazonaws.UploadFileServlet.doPost(UploadFileServlet.java:64)
javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)


Root Cause
java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1292)
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1121)
com.shiva.amazonaws.UploadFileServlet.doGet(UploadFileServlet.java:51)
com.shiva.amazonaws.UploadFileServlet.doPost(UploadFileServlet.java:64)
javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)

Answer:

The problem is that some of the Amazon Web Services SDK classes are not found, specifically com.amazonaws.AmazonServiceException, which is the basic exception type for the AWS SDK - indicating that likely all of the AWS SDK files are missing from the Java classpath.

Please note that the file aws-java-sdk-1.11.335.jar is not the entire AWS SDK (as one might have thought) and only contains some metadata files. The AWS Java SDK is split into multiple artifacts - because it is so big you don't want to use a single JAR file with all the classes, you want to only take the SDK for the specific services you are using.

Because you are using S3, you probably just want the S3 JAR, and its dependencies, as can be seen in the Maven repository page for the AWS S3 SDK. Please note that one of the dependencies is aws-java-sdk-core which is where the com.amazonaws.AmazonServiceException class lives.

You can also see all the dependencies (and navigate the directories to collect all the files that you need) in the POM file under http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.11.335/ .

Generally, because of the complex dependency structures that are common these days, it is recommended not to download JARs manually and add them one by one to the Eclipse build path, but to use Maven instead.

If you have a standard Java project in Eclipse, you can right-click the project name in the package explorer, then choose "Configure"->"Convert to Maven project" and fill in some details in the dialog that opens(*). After that completes, Eclipse will create a pom.xml file in the root of your project - open it, and you will get a custom editor to set up your dependencies. Click on the tab labeled "Dependencies" and then on "Add"; you should be able to search for aws-java-sdk-s3 and add it to your dependency list, and then Maven will automatically download all the required JAR files and add them to your build path.

*) "Group ID" is generally the top package name of your project, which should look like a domain name in reverse - for example my domain is geek.co.il, so my Group ID will be il.co.geek; "Artifact ID" is a single word identifier for your project, for example my-project.

Question:

I am using the Amazon Java SDK to upload files to Amazon s3

Whilst using version 1.10.62 of the artifact aws-java-sdk - the following code worked perfectly - Note all the wiring behind the scenes works

 public boolean uploadInputStream(String destinationBucketName, InputStream inputStream, Integer numberOfBytes, String destinationFileKey, Boolean isPublic){

    try {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(numberOfBytes);            
        PutObjectRequest putObjectRequest = new PutObjectRequest(destinationBucketName, destinationFileKey, inputStream, metadata);

        if (isPublic) {
            putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
        } else {
            putObjectRequest.withCannedAcl(CannedAccessControlList.AuthenticatedRead);
        }

        final Upload myUpload = amazonTransferManager.upload(putObjectRequest);

        myUpload.addProgressListener(new ProgressListener() {
            // This method is called periodically as your transfer progresses
            public void progressChanged(ProgressEvent progressEvent) {
                LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
                LOG.info("progressEvent.getEventCode():" + progressEvent.getEventCode());
                if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
                    LOG.info("Upload complete!!!");
                }
            }
        });

        long uploadStartTime = System.currentTimeMillis();
        long startTimeInMillis = System.currentTimeMillis();
        long logGap = 1000 *  loggingIntervalInSeconds;

        while (!myUpload.isDone()) {

            if (System.currentTimeMillis() - startTimeInMillis >= logGap) {
                logUploadStatistics(myUpload, Long.valueOf(numberOfBytes));
                startTimeInMillis = System.currentTimeMillis();
            } 
        }
        long totalUploadDuration = System.currentTimeMillis() - uploadStartTime;
        float totalUploadDurationSeconds = Float.valueOf(totalUploadDuration) / 1000;
        String uploadedPercentageStr = getFormattedUploadPercentage(myUpload);
        boolean isUploadDone = myUpload.isDone();

        if (isUploadDone) {
            Object[] params = new Object[]{destinationFileKey, totalUploadDuration, totalUploadDurationSeconds};
            LOG.info("Successfully uploaded file {} to Amazon. The upload took {} milliseconds ({} seconds)", params);
            result = true;
        } 
        LOG.debug("Post put the inputStream to th location {}", destinationFileKey); 
     } catch (AmazonServiceException e) {
         LOG.error("AmazonServiceException:{}", e);
         result = false;
    } catch (AmazonClientException e) {
        LOG.error("AmazonServiceException:{}", e);
        result = false;
    }

    LOG.debug("Exiting uploadInputStream - result:{}", result);
    return result;
}

Since I migrated to version 1.11.31 of the aws-java-sdk, this code stopped working. All classes remain intact and there were no warnings in my IDE.

However - I do see the following logged to my console

 [2016-09-06 22:21:58,920] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.requestId - x-amzn-RequestId: not available
[2016-09-06 22:21:58,931] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.request - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Moved Permanently (Service: null; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: D67813C8A11842AE), S3 Extended Request ID: 3CBHeq6fWSzwoLSt3J7D4AUlOaoi1JhfxAfcN1vF8I4tO1aiOAjqB63sac9Oyrq3VZ4x3koEC5I=

The upload still continues, but from the progress listener the event code is 8, which stands for transfer failed.

Does anyone have any idea what I need to do to get this chunk of code working again?

Thank you Damien


Answer:

Try changing it to this:

public void progressChanged(ProgressEvent progressEvent) {
    LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
    LOG.info("progressEvent.getEventCode():" + progressEvent.getEventType());
    if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
        LOG.info("Upload complete!!!");
    }
}

It looks like you are running some deprecated code.

In com.amazonaws.event.ProgressEventType, value 8 refers to HTTP_REQUEST_COMPLETED_EVENT

  • COMPLETED_EVENT_CODE is deprecated
  • getEventCode is deprecated

refer to this -> https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/event/ProgressEvent.java

Question:

I'm sending files to amazon s3 server like this and really need to change part sizes of sending file from default amazon (5mb) to 1 mb, is there any way to do that?

 TransferObserver observer = transferUtility.upload(
                            "mydir/test_dir",     /* The bucket to upload to */
                            data.getData().getLastPathSegment(),    /* The key for the uploaded object */
                            root        /* The file where the data to upload exists */
                    );;

Answer:

The minimum part size for S3 multipart uploads is 5MB. (See http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html). The Transfer Utility uses the smallest allowable part size, which is usually 5MB.
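
No client can go below that limit, since it is enforced by S3 itself. For comparison, the server-side Java SDK's TransferManager (a different library from the Android TransferUtility) does expose a minimum part size setting, but S3 still rejects non-final parts smaller than 5 MB; a sketch, assuming an existing AmazonS3 client named s3Client:

TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3Client)
        .withMinimumUploadPartSize(5L * 1024 * 1024) // 5 MB is the smallest S3 accepts for non-final parts
        .build();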

Question:

I'm uploading a file to Amazon S3 using the Java SDK from Eclipse. Apparently the file uploads; you can see a file the same size as the original one (in this case a PDF), named after the key name. If you click on the file link, you get this:

This XML file does not appear to have any style information associated with it. The document tree is shown below.

<Error>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
    <RequestId>2FA4CE8A99D64D5B</RequestId>
    <HostId>
       t3k5TK/5PvqCNFGjtF5ycvpjS4HaTGcSTNrd8I8f4fe0JvFHdLMJnaO8N9MTZJe0fXm5BU6E+zU=
    </HostId>
</Error>

For this trial I allowed all permissions on the bucket, so why can't I see the PDF file?

EDIT

For downloading/viewing the uploaded file, the permission of the file needs to be set to public read while uploading.


Answer:

You can use withCannedAcl(CannedAccessControlList.PublicRead) to set the permission to public read:

public static void main(String[] args) throws IOException {
    AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    try {
        File file = new File(uploadFileName);
        s3client.putObject(new PutObjectRequest(
                                 bucketName, keyName, file).withCannedAcl(CannedAccessControlList.PublicRead)); // this will set the permission as PublicRead

     } catch (Exception ex) {
        ex.printStackTrace();
    } 
}

Question:

I am trying to upload a file to the S3 bucket via React and I am struggling with 4xx and 5xx errors :(

Here is my code base:

onChangeHandler = event => {
    const data = new FormData();

    data.append('data', event.target.files[0], event.target.files[0].name);

    axios
        .post(
            '/api/attachments/temporary',
            {
                documents: data,
                tempDir: this.generateUuid()
            },
            {
                headers: {
                    'Content-Type': 'multipart/form-data'
                }
            }
        )
        .then(data => {
            console.log(`data --- `, data);
        })
        .catch(e => {
            console.log(` --- `, e);
        });
};

render() {
    return (
            <input type='file' name='file' onChange={this.onChangeHandler} />
    );
}

If I send this POST I get a 500 and this error:

java.io.IOException: UT000036: Connection terminated parsing multipart data

Also I have noticed that the documents property arrives empty.

(The backend API doc was shown here as a screenshot.)

How can I fix it? Maybe I need to somehow transform the file locally into binary data, etc.? We can upload images and .pdf files.

Thanks!


Answer:

It's very easy if you use MultipartHttpServletRequest

Step 1: Add dependency

pom.xml

<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.3.3</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
</dependency>
Step 2: Send the file the same way you are sending it above.

Step 3: Configure the bean in a configuration file (I have used Java-based configuration).

@Bean(name = "multipartResolver")
    public CommonsMultipartResolver createMultipartResolver() {
        CommonsMultipartResolver resolver=new CommonsMultipartResolver();
        resolver.setDefaultEncoding("utf-8");
        return resolver;
    }

Step 4:

@RequestMapping(value="/api/attachments/temporary")
public ServiceResponse uploadFile(MultipartHttpServletRequest request){

if (request.getFileNames().hasNext()) {
            //1. get the files from the request object
            Iterator<String> itr = request.getFileNames();
            MultipartFile multipartFileImage = request.getFile(itr.next());
            StringBuilder sb=new StringBuilder(multipartFileImage.getOriginalFilename());

            String filename=sb.substring(sb.lastIndexOf("."), sb.length()); // getting file extension
            filename="("+email+")"+filename; // concatenation unique value i.e email to its file name with extension
            user.setProfileImage(filename);

        try {
            File saveImage = new File(imagePath+filename);  //Local path for image file

            PropertiesCredentials cred = new PropertiesCredentials(getClass().getClassLoader().getResourceAsStream(awsCredentialsProperties));
            logger.debug("Aws access key id :"+cred.getAWSAccessKeyId());
            logger.debug("Aws Secret key :"+cred.getAWSSecretKey());
            AWSCredentials credentials = new BasicAWSCredentials(cred.getAWSAccessKeyId(),
                      cred.getAWSSecretKey()
                    );

            AmazonS3 s3client = AmazonS3ClientBuilder
                      .standard()
                      .withCredentials(new AWSStaticCredentialsProvider(credentials))
                      .withRegion(#) // Your region
                      .build();

            // write the uploaded content to the local file first, then upload that file
            multipartFileImage.transferTo(saveImage);

            PutObjectResult putResult = s3client.putObject(
                      "<bucket name>", 
                      filename, 
                      saveImage
                    );
            logger.debug("putResult :"+putResult.getVersionId());
        }catch(Exception e) {
            return ServiceResponse.createFailureResponse("Unable to upload image due to internet connection failure. Try again later.");
        }

}

It's better to also save your image locally, because it's not practical to fetch the image from the S3 bucket every time if it needs to be available frequently.

Question:

I am a newbie to AWS and right now I am trying to write a standalone Java application to upload a PDF to AWS S3. However, an HTTP 400 error is returned.

Can anybody give me some general directions on how to troubleshoot this error?

public class App {

private static String PDF_PATH = "/tmp/pdf-test.pdf";

public static void main(String[] args) throws IOException {

    // prepare AWS credential
    BasicAWSCredentials awsCreds = new BasicAWSCredentials("xxx",
            "yyy");
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion("aaa")
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds)).build();

    // upload a test PDF
    byte[] pdfDoc = Files.readAllBytes((new File(PDF_PATH)).toPath());
    PutObjectRequest request = new PutObjectRequest("aaa", "bbb",
            new String(Base64.getEncoder().encode(pdfDoc)));
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentType("application/pdf");
    metadata.setContentLength(pdfDoc.length);
    request.setMetadata(metadata);
    s3Client.putObject(request);
}

}


Answer:

To expand on what @AlexGoja said, there are three constructors for a PutObjectRequest. One of them takes three Strings. However, the third string parameter is not a Base64 encoded file. I'm not sure where you got that. You want to use the constructor that takes a File to upload the file:

public class App {

private static String PDF_PATH = "/tmp/pdf-test.pdf";

public static void main(String[] args) throws IOException {

    // prepare AWS credential
    BasicAWSCredentials awsCreds = new BasicAWSCredentials("xxx",
            "yyy");
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion("aaa")
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds)).build();

    // upload a test PDF
    File pdfFile = new File(PDF_PATH);
    PutObjectRequest request = new PutObjectRequest("aaa", "bbb", pdfFile );
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentType("application/pdf");
    metadata.setContentLength(pdfFile.length());
    request.setMetadata(metadata);
    s3Client.putObject(request);
}

}

Question:

I am using the AWS S3 outbound adapter in a Spring Boot application to upload files to an S3 bucket. I would like to check whether the bucket is available before uploading a file; if the bucket is not available, an error should be thrown.

Please suggest how to do this.

<int-aws:s3-outbound-gateway id="FileChannelS3"
        request-channel="filesOutS3CChainChannel"
        reply-channel="filesArcChannel"
        transfer-manager="transferManager"
        bucket-expression="headers.TARGET_BUCKET"
                command="UPLOAD">
        <int-aws:request-handler-advice-chain>
            <ref bean="retryAdvice" />          
        </int-aws:request-handler-advice-chain>
    </int-aws:s3-outbound-gateway>

Answer:

You can configure an S3RemoteFileTemplate bean and use its exists() API in the <filter>:

<bean id="s3RemoteFileTemplate" class="org.springframework.integration.aws.support.S3RemoteFileTemplate">
    <constructor-arg ref="s3SessionFactory"/>
</bean>

<int:filter expression="@s3RemoteFileTemplate.exists(headers.TARGET_BUCKET)" throw-exception-on-rejection="true"/>

UPDATE

Facing below exception java.lang.IllegalStateException: 'path' must in pattern [BUCKET/KEY].

Sorry, missed the fact that you need to check existence of the bucket, not an object inside.

For that purpose you need to use an Amazon API directly:

<int:filter expression="@amazonS3.doesBucketExistV2(headers.TARGET_BUCKET)" throw-exception-on-rejection="true"/>

where an amazonS3 is a bean for the com.amazonaws.services.s3.AmazonS3 client.
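
For reference, the same check with the plain Java client looks like this (a minimal sketch, assuming amazonS3 is your com.amazonaws.services.s3.AmazonS3 bean and targetBucket comes from the message headers):

if (!amazonS3.doesBucketExistV2(targetBucket)) {
    throw new IllegalStateException("Bucket " + targetBucket + " does not exist");
}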

Question:

I've looked at several questions and answers about this but I'm still unable to upload to an S3 bucket. I have already declared the service in the manifest within the application tags.

Every log statement returns the correct information, but the status is always WAITING and none of the transfer listeners are triggered. I do not receive any errors whatsoever and the S3 bucket continues to be empty.

What am I doing wrong?

public void uploadStoredItems() {

    TransferUtility transferUtility = TransferUtility.builder()
            .context(AWSProvider.getInstance().getContext())
            .awsConfiguration(AWSProvider.getInstance().getConfiguration())
            .s3Client(new AmazonS3Client(AWSProvider.getInstance().getIdentityManager().getCredentialsProvider()))
            .build();

    try {
        for (File csvFile : csvFiles) {
            String filename = csvFile.getName(); //.substring(csvFile.getName().lastIndexOf('/') + 1);
            Log.d(TAG, "This is the filename for upload: " + filename);

            // FileReader reads text files in the default encoding.
            String line = null;
            FileReader fileReader = new FileReader(csvFile);

            // Always wrap FileReader in BufferedReader.
            BufferedReader bufferedReader = new BufferedReader(fileReader);

            while ((line = bufferedReader.readLine()) != null) {
                Log.d(TAG, "Here are the lines: " + line);
            }

            // Always close files.
            bufferedReader.close();

            TransferObserver uploadObserver = transferUtility.upload(filename, csvFile);
            // Gets id of the transfer.
            Log.d(TAG, "This is the bucket: " + uploadObserver.getBucket());
            Log.d(TAG, "This is the state: " + uploadObserver.getState());
            Log.d(TAG, "This is the id: " + uploadObserver.getId());

            observers = transferUtility.getTransfersWithType(TransferType.UPLOAD);
            TransferListener listener = new TransferListener() {
                @Override
                public void onStateChanged(int id, TransferState state) {
                    Log.d(TAG, "onStateChanged: " + state.toString());
                    if (TransferState.COMPLETED == state) {
                        Log.d(TAG, "COMPLETE: " + state.toString());
                    }
                }

                @Override
                public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
                    float percentDonef = ((float) bytesCurrent / (float) bytesTotal) * 100;
                    int percentDone = (int) percentDonef;

                    Log.d(TAG, "ID:" + id + " bytesCurrent: " + bytesCurrent
                            + " bytesTotal: " + bytesTotal + " " + percentDone + "%");
                }

                @Override
                public void onError(int id, Exception ex) {
                    // Handle errors
                    Log.d(TAG, "ERROR ID:" + id + " " + ex.getMessage());
                }
            };

            for (TransferObserver observer : observers) {

                if (TransferState.WAITING.equals(observer.getState())
                        || TransferState.WAITING_FOR_NETWORK.equals(observer.getState())
                        || TransferState.IN_PROGRESS.equals(observer.getState())) {
                    observer.setTransferListener(listener);
                }

                Log.d(TAG, "\n observers - id: " + observer.getId() + " state: " + observer.getState() + " key: " + observer.getKey() + " bytes total: " + observer.getBytesTotal());
            }
            Log.d(TAG, "Bytes Total: " + uploadObserver.getBytesTotal());

        }
        csvFiles.clear();
    } catch(java.io.FileNotFoundException ex) {
        System.out.println("Unable to open file");
    } catch(java.io.IOException ex) {
        System.out.println("Error reading file");
    }
} 

Answer:

Turns out I have to declare the TransferUtility service to run in the same process as the service I am calling it from:

<application

    ...

    <service
      android:name=".MainService"
      android:enabled="true"
      android:process=":main_service"
      android:stopWithTask="false" />
    <service
      android:name="com.amazonaws.mobileconnectors.s3.transferutility.TransferService"
      android:process=":main_service"
      android:enabled="true" />

    ...

</application>

Question:

When I try to upload a file to s3 using the aws java sdk I get an error about InvalidRedirectLocation.

Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: 
The website redirect location must have a prefix of 'http://' or 'https://' or '/'. 
(Service: Amazon S3; Status Code: 400; Error Code: InvalidRedirectLocation; 
Request ID: E801AFDA2A22A20E; S3 Extended Request ID: AAlLOlndWp2dAAA56Vlxs+ZTLCK/
HHaPv/ySaqjIAAAO4wv8qzkm17A7o7YOrtmOx4YJO+yfAAA=), S3 Extended Request ID: LAlAO
lndAp2dAAPA6Vlxs+ZTLCK/AAaPv/ySaqjIAAAO4wv8qzkm17b7o7AOrtmOx4AAO+yflAA=
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1630)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1302)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4330)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4277)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1750)
    at awsTest.main(awsTest.java:67)

Here is a snippet from my code.

String s3Bucket = "test_bucket";
String s3FileName = "test_file.txt";
String localFileName = "C:\\Users\\ABC\\Desktop\\test_file.txt";    

s3.putObject(new PutObjectRequest(s3Bucket, s3FileName, localFileName));

I am able to list files in a bucket and copy files from one bucket to another, but I am not able to upload files. Any idea why?


Answer:

I was passing the location of the file as a String to the putObject function; I needed to use the File class, so the following code solved my issue.

String s3Bucket = "test_bucket";
String s3FileName = "test_file.txt";
String localFileName = "C:\\Users\\ABC\\Desktop\\test_file.txt";    
File file = new File(localFileName);

s3.putObject(new PutObjectRequest(s3Bucket, s3FileName, file ));

Question:

I am trying to upload a file to an AWS S3 bucket using a Spring MVC REST API. Here is my credentials file format for accessing the S3 bucket:

[default]
aws_access_key_id = Access key id
aws_secret_access_key = secret access key

Here is my Java code:

@RestController
public class UploadController {

private static String bucketName= "mp4-upload-1";
private static String keyName= "secret access key";
public static final Logger logger=LogManager.getLogger(UploadController.class);

@RequestMapping(value="/uploadVideo", method = RequestMethod.POST, consumes=MediaType.MULTIPART_FORM_DATA_VALUE)
public ResponseEntity<String> uploadVideo(@RequestParam("file") MultipartFile file) throws IOException {


    AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    try {
        System.out.println("Uploading a new object to S3 from a file\n");

        InputStream is=file.getInputStream();

        s3client.putObject(new PutObjectRequest(bucketName, keyName,is,new ObjectMetadata()).withCannedAcl(CannedAccessControlList.PublicRead));


    } catch (AmazonServiceException ase) {
        System.out.println("Caught an AmazonServiceException, which " +
                "means your request made it " +
                "to Amazon S3, but was rejected with an error response" +
                " for some reason.");
        System.out.println("Error Message:    " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code:   " + ase.getErrorCode());
        System.out.println("Error Type:       " + ase.getErrorType());
        System.out.println("Request ID:       " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, which " +
                "means the client encountered " +
                "an internal error while trying to " +
                "communicate with S3, " +
                "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
    return new ResponseEntity<>(HttpStatus.OK);

}

}

The problem is I am not getting any error, but the file is not uploaded to the S3 bucket. Am I missing anything here? Thanks in advance.


Answer:

The content length should be specified on the request when the stream is uploaded directly to S3.

ObjectMetadata objMetadata = new ObjectMetadata();
objMetadata.setContentLength(20L);

When uploading directly from an input stream, content length must be specified before data can be uploaded to Amazon S3. If not provided, the library will have to buffer the contents of the input stream in order to calculate it. Amazon S3 explicitly requires that the content length be sent in the request headers before any of the data is sent.

Refer to this for content length calculation.

Alternate approach:

Use org.springframework.util.FileCopyUtils.copyToByteArray() to convert the stream to byte[] and upload the byte array to S3.
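
Applied to the controller above, a minimal sketch (assuming the same bucketName and using the original file name as the object key) would be:

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(file.getSize()); // MultipartFile already knows its size
s3client.putObject(new PutObjectRequest(bucketName, file.getOriginalFilename(),
        file.getInputStream(), metadata)
        .withCannedAcl(CannedAccessControlList.PublicRead));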

Question:

I'm developing an application that will make use of S3 storage from Amazon. After following the tutorials and examples provided by Amazon I still find myself unable to get the the upload to work. This is the message I keep getting: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. The bucket is named testbucket-10.09.2017 and it is located in the Frankfurt (eu-central-1) region. I even found an application claiming to do exactly what I need but the error message is the same. The bulk of the code below has been taken and adapted from the documentation and tutorials provided by AWS. Any help would be greatly appreciated.

Here is the code I am using:

My HTML form:

<html> 
  <head>
    <title>S3 POST Form</title> 
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  </head>

  <body> 
    <form action="https://testbucket-10.09.2017.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
      <input type="hidden" name="key" value="uploads/${filename}">
      <input type="hidden" name="AWSAccessKeyId" value="REMOVED FOR SAFETY"> 
      <input type="hidden" name="acl" value="private"> 
      <input type="hidden" name="success_action_redirect" value="http://localhost/">
      <input type="hidden" name="policy" value="ZXlKbGVIQnBjbUYwYVc5dUlqb2dJakl3TVRndE1ERXRNREZVTURBNk1EQTZNREJhSWl3S0lDQWlZMjl1WkdsMGFXOXVjeUk2SUZzZ0NpQWdJQ0I3SW1KMVkydGxkQ0k2SUNKMFpYTjBZblZqYTJWMExURXdMakE1TGpJd01UY2lmU3dnQ2lBZ0lDQmJJbk4wWVhKMGN5MTNhWFJvSWl3Z0lpUnJaWGtpTENBaWRYQnNiMkZrY3k4aVhTd0tJQ0FnSUhzaVlXTnNJam9nSW5CeWFYWmhkR1VpZlN3S0lDQWdJSHNpYzNWalkyVnpjMTloWTNScGIyNWZjbVZrYVhKbFkzUWlPaUFpYUhSMGNEb3ZMMnh2WTJGc2FHOXpkQzhpZlN3S0lDQWdJRnNpYzNSaGNuUnpMWGRwZEdnaUxDQWlKRU52Ym5SbGJuUXRWSGx3WlNJc0lDSWlYU3dLSUNBZ0lGc2lZMjl1ZEdWdWRDMXNaVzVuZEdndGNtRnVaMlVpTENBd0xDQXhNRFE0TlRjMlhRb2dJRjBLZlE9PQ==">
      <input type="hidden" name="signature" value="REMOVED FOR SAFETY">
      <input type="hidden" name="Content-Type" value="image/jpeg">
      <!-- Include any additional input fields here -->

      File to upload to S3: 
      <input name="file" type="file"> 
      <br> 
      <input type="submit" value="Upload File to S3"> 
    </form> 
  </body>
</html>

My Java code generating the policy and signature:

public static void myAttempt() throws Exception {

        String policy_document = constructPolicy();
        String aws_secret_key="REMOVED FOR SAFETY";

        String policy = (new BASE64Encoder()).encode(
                policy_document.getBytes("UTF-8")).replaceAll("\n","").replaceAll("\r","");

        String dateStamp ="20170912";
        String region = "eu-central-1";
        String serviceName ="s3";
        System.out.println("NEW SIGNATURE: "+getSignature(getSignatureKey(aws_secret_key,dateStamp,region,serviceName)));

        System.out.println("ENCODED POLICY: "+policy);        
    }

private static String constructPolicy() throws UnsupportedEncodingException {

        String policy_document="{\"expiration\": \"2018-01-01T00:00:00Z\",\n" +
                "  \"conditions\": [ \n" +
                "    {\"bucket\": \"testbucket-10.09.2017\"}, \n" +
                "    [\"starts-with\", \"$key\", \"uploads/\"],\n" +
                "    {\"acl\": \"private\"},\n" +
                "    {\"success_action_redirect\": \"http://localhost/\"},\n" +
                "    [\"starts-with\", \"$Content-Type\", \"\"],\n" +
                "    [\"content-length-range\", 0, 1048576]\n" +
                "  ]\n" +
                "}";

        String policy = (new BASE64Encoder()).encode(
                policy_document.getBytes("UTF-8")).replaceAll("\n","").replaceAll("\r","");
        return policy;
    }

private static byte[] HmacSHA256(String data, byte[] key) throws Exception {
        String algorithm="HmacSHA256";
        Mac mac = Mac.getInstance(algorithm);
        mac.init(new SecretKeySpec(key, algorithm));
        return mac.doFinal(data.getBytes("UTF8"));
    }

private static byte[] getSignatureKey(String key, String dateStamp, String regionName, String serviceName) throws Exception  {
        byte[] kSecret = ("AWS4" + key).getBytes("UTF8");
        byte[] kDate    = HmacSHA256(dateStamp, kSecret);
        byte[] kRegion  = HmacSHA256(regionName, kDate);
        byte[] kService = HmacSHA256(serviceName, kRegion);
        byte[] kSigning = HmacSHA256("aws4_request", kService);
        return kSigning;
    }

private static String getSignature(byte[] key) throws Exception{

        return base16().lowerCase().encode(HmacSHA256(constructPolicy(), key));
    }

Answer:

Turns out that for some reason outdated AWS documentation and examples are among the first results when doing a search. A few Google results pages later, a more up-to-date example came up. Basically, the form I was using was wrong. The correct one is as follows:

<html>
  <head>

    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

  </head>
  <body>

  <form action="http://sigv4examplebucket.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: 
    <input type="input"  name="key" value="user/user1/${filename}" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://sigv4examplebucket.s3.amazonaws.com/successful_upload.html" />
    Content-Type: 
    <input type="input"  name="Content-Type" value="image/jpeg" /><br />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" /> 
    <input type="hidden" name="x-amz-server-side-encryption" value="AES256" /> 
    <input type="text"   name="X-Amz-Credential" value="AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request" />
    <input type="text"   name="X-Amz-Algorithm" value="AWS4-HMAC-SHA256" />
    <input type="text"   name="X-Amz-Date" value="20151229T000000Z" />

    Tags for File: 
    <input type="input"  name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="Policy" value='<Base64-encoded policy string>' />
    <input type="hidden" name="X-Amz-Signature" value="<signature-value>" />
    File: 
    <input type="file"   name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>

</html>

The form comes from the AWS documentation on browser-based uploads using POST with Signature Version 4, where additional examples can be found.

Question:

I am trying to figure out the simplest method for allowing clients to upload media (photos and video) to my S3 bucket, without giving them direct access, or using pre-signed URLs.

The idea is that I don't want any kind of media processing to occur; the only thing I am interested in is shielding the S3 bucket from direct contact with the clients and recording information about the files being uploaded (such as size, type, etc.).

Do you have any ideas on how this architecture might be implemented in a simple way?


Answer:

Uploading from a mobile app

To upload a file from a mobile application to Amazon S3:

  • In your back-end, use the AWS Security Token Service to generate temporary credentials
  • Pass the credentials to your mobile app
  • The mobile app can then call AWS APIs

The temporary credentials can be granted a limited set of permissions (eg upload to a specific bucket and path) and are valid only for a limited duration, up to one hour. This is good security practice because no permanent credentials are kept on the mobile device.
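
A minimal back-end sketch of the first step, assuming the aws-java-sdk-sts module is available and policyJson is a policy document limited to s3:PutObject on your bucket/prefix:

import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetFederationTokenRequest;

public class TemporaryCredentialsIssuer {

    public Credentials issueUploadCredentials(String policyJson) {
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();
        GetFederationTokenRequest request = new GetFederationTokenRequest("mobile-upload-user")
                .withPolicy(policyJson)       // restricts what the temporary credentials may do
                .withDurationSeconds(3600);   // valid for one hour
        // access key, secret key and session token to hand to the mobile app
        return sts.getFederationToken(request).getCredentials();
    }
}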

Uploading from a web page

Use a browser-based upload via an HTML form. This allows a form in an HTML page to securely upload directly to Amazon S3 -- even to private folders. It uses a signed policy to define the permitted action (eg upload to a specific location, up to a certain file size, using a particular permission set).

The form can be static -- no need to recalculate signatures for every individual file to be uploaded.

See: Authenticating Requests in Browser-Based Uploads Using POST

Question:

I catch this exception when trying to run the example from here: http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html Can anybody help?

Exception in thread "main" java.lang.ExceptionInInitializerError
at javax.crypto.Mac.getInstance(Mac.java:171)
at com.amazonaws.auth.AbstractAWSSigner.sign(AbstractAWSSigner.java:87)
at com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:69)
at com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:58)
at com.amazonaws.services.s3.internal.S3Signer.sign(S3Signer.java:127)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:652)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3697)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1434)
at com.bartoff.s3Utils.UploadObject.main(UploadObject.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:86)
... 16 more
Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer!
at javax.crypto.JarVerifier.verifyPolicySigned(JarVerifier.java:289)
at javax.crypto.JceSecurity.loadPolicies(JceSecurity.java:316)
at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:265)
at javax.crypto.JceSecurity.access$000(JceSecurity.java:48)
at javax.crypto.JceSecurity$1.run(JceSecurity.java:78)
at java.security.AccessController.doPrivileged(Native Method)
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:76)
... 16 more

Answer:

This is due to a Java security restriction (the JCE jurisdiction policy files).

To resolve it:

  • Download JCE files from the oracle site here
  • Extract the files to {YOUR_JDK_PATH}/jdk1.7.0_51/jre/lib/security

Question:

I have the next code for uploading files to an Amazon S3:

AmazonS3Client client = new AmazonS3Client(credentials, 
            new ClientConfiguration().withMaxConnections(100)
                                  .withConnectionTimeout(120 * 1000)
                                  .withMaxErrorRetry(15));
TransferManager tm = new TransferManager(client);
TransferManagerConfiguration configuration = new TransferManagerConfiguration();
configuration.setMultipartUploadThreshold(MULTIPART_THRESHOLD);
tm.setConfiguration(configuration);

Upload upload = tm.upload(bucket, key, file);

try {
    upload.waitForCompletion();
} catch(InterruptedException ex) {
    logger.error(ex.getMessage());
} finally {
    tm.shutdownNow(false);
}

It works, but some uploads (1 GB) produce the following log message:

 INFO AmazonHttpClient:Unable to execute HTTP request: bucket-name.s3.amazonaws.com failed to respond
 org.apache.http.NoHttpResponseException: bucket-name.s3.amazonaws.com failed to respond

I have tried to create TransferManager without AmazonS3Client, but it doesn't help.

Is there any way to fix it?


Answer:

The log message is telling you that there was a transient error sending data to S3. You've configured .withMaxErrorRetry(15), so the AmazonS3Client is transparently retrying the request that failed and the overall upload is succeeding.

There isn't necessarily anything to fix here - sometimes packets get lost on the network, especially if you're trying to push through a lot of packets at once. Waiting a little while and retrying is usually the right way to deal with this, and that's what's already happening.

If you wanted, you could try turning down the MaxConnections setting to limit how many chunks of the file will be uploaded at a time - there's probably a sweet spot where you're still getting reasonable throughput, but not overloading the network.
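
For example, a sketch of throttling the same client shown above (the right value depends on your network, so treat 20 as a placeholder):

AmazonS3Client client = new AmazonS3Client(credentials,
        new ClientConfiguration().withMaxConnections(20) // fewer concurrent part uploads
                                 .withConnectionTimeout(120 * 1000)
                                 .withMaxErrorRetry(15));
TransferManager tm = new TransferManager(client);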

Question:

My current AWS setup is a lambda function that is being triggered whenever I put an object into a S3 bucket. I implemented the lambda's handler function in Java. What I want to do is simply accessing the file that was uploaded and triggered the execution of the lambda function. E.g., if I upload sample.json to the bucket, I want to access the contents of this file in my handler function.

I know I can do something like this:

public Void handleRequest(S3Event input, Context context) {
  for (S3EventNotificationRecord record : input.getRecords()) {
    String key = record.getS3().getObject().getKey();
    String bucket = record.getS3().getBucket().getName();
    AmazonS3 s3Client = new AmazonS3Client(credentials);
    try {
      S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucket, key));
      InputStream input = s3Object.getObjectContent();
      BufferedReader reader = new BufferedReader(new InputStreamReader(input));
      while (true) {
        String line = reader.readLine();
        if (line == null) break;
        // Do something with line...
      }
// ...

The problem is that I am not allowed to use access keys. Thus, I cannot create an s3Client to download the file with. In other words, I have to get the object from the argument that my handler method takes, i.e., S3Event input. How would I do that?


Answer:

If your Lambda function is configured with an appropriate IAM role (that allows s3:GetObject of the relevant S3 object), then you don't need to explicitly provide credentials in your code.

Here's sample Java code to get an object in response to an S3 object-upload Lambda event:

package example;

import java.net.URLDecoder;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

public class S3GetTextBody implements RequestHandler<S3Event, String> {

    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);

            // Retrieve the bucket & key for the uploaded S3 object that
            // caused this Lambda function to be triggered
            String bkt = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey().replace('+', ' ');
            key = URLDecoder.decode(key, "UTF-8");

            // Read the source file as text
            AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
            String body = s3Client.getObjectAsString(bkt, key);
            System.out.println("Body: " + body);
            return "ok";
        } catch (Exception e) {
            System.err.println("Exception: " + e);
            return "error";
        }
    }
}

Question:

I'm trying to upload as many files as I want to an S3 bucket using the Java SDK. I'm doing it as follows:

@Override
public void upload() {
    PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET_NAME, ACCESS_KEY, new File("myFilePath"));
    getS3Client().putObject(putObjectRequest);
}

The problem is that it ALWAYS overwrites an existing file. No matter if I change the "myFilePath" parameter to an image or a text file, it doesn't create a new object but updates the existing one. This shouldn't happen, because these are different files, not the same one.

How can I just create a file without updating an existing one?


Answer:

The second parameter to the PutObjectRequest constructor is the key under which to store the new object. You are mistakenly passing the same key (ACCESS_KEY) for all the requests.
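A minimal sketch of the fix, assuming you want the object key to be derived from the file being uploaded (the "uploads/" prefix is just an example):

@Override
public void upload() {
    File fileToUpload = new File("myFilePath");
    // The second argument is the S3 object key; make it unique per file.
    PutObjectRequest putObjectRequest =
            new PutObjectRequest(BUCKET_NAME, "uploads/" + fileToUpload.getName(), fileToUpload);
    getS3Client().putObject(putObjectRequest);
}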

Question:

I have an S3 bucket which is used for users to upload zipped directories, often 1 GB in size. The zipped directory holds images in subfolders and more.

I need to create a Lambda function that will get triggered upon new uploads, unzip the file, and upload the unzipped content back to an S3 bucket, so I can access the individual files via HTTP - but I'm pretty clueless as to how to write such a function.

My concerns are:

  • Python or Java probably offer better performance than Node.js?
  • Avoid running out of memory, when unzipping files of a GB or more (can I stream the content back to s3?)

Answer:

The AWS Lambda FAQ states:

Each Lambda function receives 500MB of non-persistent disk space in its own /tmp directory.

This will be insufficient for storing the 1GB zip file, plus the unzipped contents.

You would need to stream the 'input' zip file in ranges, and store the unzipped files in small groups to avoid this problem. It is probably not worthwhile using Lambda for this application.
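If you do go ahead with Lambda anyway, here is a rough sketch of one way to stream the archive without staging it in /tmp (bucket and key names are assumptions; each entry is buffered individually, which is fine for image-sized files but not for very large ones):

S3Object zipObject = s3Client.getObject(bucketName, zipKey);
try (ZipInputStream zis = new ZipInputStream(zipObject.getObjectContent())) {
    ZipEntry entry;
    byte[] buf = new byte[8192];
    while ((entry = zis.getNextEntry()) != null) {
        if (entry.isDirectory()) {
            continue;
        }
        // Read one entry into memory, then put it back to S3 under its own key.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int n;
        while ((n = zis.read(buf)) != -1) {
            baos.write(buf, 0, n);
        }
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(baos.size());
        s3Client.putObject(bucketName, "unzipped/" + entry.getName(),
                new ByteArrayInputStream(baos.toByteArray()), meta);
        zis.closeEntry();
    }
}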

Question:

I want to upload a file to Amazon S3 from an environment which gives me IAM credentials. However I am getting this error :

Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: EF93490A8356F585)

The IAM roles are as follows :

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::sam-94a493b-dev"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::sam-bbcb194a493b-dev/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:000351272236:key/9b7a989c-ee8e-4c83-b765-6debe0f94eaa"
            ]
        }
    ]
} 

I use the default client to access Amazon S3 and the putObject method to put an object into the bucket with fileNameWithPath (path/in/s3/filename.ext). The code to access S3 is as follows:

AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();
s3client.putObject(bucketName, fileNameWithPath, file)

And the error I get is :

com.amazonaws.services.s3.model.AmazonS3Exception: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4. (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: EF93490A8356F585)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1587) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1257) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511) ~[aws-java-sdk-core-1.11.163.jar!/:?]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4227) ~[aws-java-sdk-s3-1.11.163.jar!/:?]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4174) ~[aws-java-sdk-s3-1.11.163.jar!/:?]
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1722) ~[aws-java-sdk-s3-1.11.163.jar!/:?]
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1577) ~[aws-java-sdk-s3-1.11.163.jar!/:?]
    at com.example.services.S3Service.uploadFile(S3Service.java:63) ~[classes!/:?]

My AWS SDK version is 1.11.163, which should use Signature Version 4 by default. I am not sure where the problem lies.

I have already tried setting various SSEAlgorithm values in putObject, like 'AES256' and 'AWS4-HMAC-SHA256', but those didn't help.

Any leads would be appreciated.


Answer:

I solved this issue with the following steps:

  1. Explicitly build the request via PutObjectRequest.
  2. Create a new ObjectMetadata and set its SSEAlgorithm to "aws:kms".
  3. Attach the ObjectMetadata to the request.
  4. Send the request via the putObject method.

Here is the code:

    PutObjectRequest request = new PutObjectRequest(bucketName, ruleFilePath, file);
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setSSEAlgorithm("aws:kms");
    request.setMetadata(objectMetadata);
    this.s3client.putObject(request);

Question:

I'm trying to upload a large file to a server which uses a token, and the token expires after 10 minutes. If I upload a small file it works, but if the file is big I run into problems: the upload keeps retrying forever while access is denied.

So I need to refresh the token in the BasicAWSCredentials, which is then used for the AWSStaticCredentialsProvider, but I'm not sure how to do it. Please help =)

Worth mentioning that we use a local server (not the Amazon cloud) which provides the token, and for convenience we use Amazon's SDK.

here is my code:

public void uploadMultipart(File file) throws Exception {
    //this method will give you a initial token for a given user, 
    //than calculates when a new token is needed and will refresh it just when necessary

    String token = getUsetToken();
    String existingBucketName = myTenant.toLowerCase() + ".package.upload";
    String endPoint = urlAPI + "s3/buckets/";
    String strSize = FileUtils.byteCountToDisplaySize(FileUtils.sizeOf(file));
    System.out.println("File size: " + strSize);

    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder.EndpointConfiguration(endPoint, null);//note: Region has to be null
    //AWSCredentialsProvider        
    BasicAWSCredentials sessionCredentials = new BasicAWSCredentials(token, "NOT_USED");//secretKey should be set to NOT_USED

    AmazonS3 s3 = AmazonS3ClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
            .withEndpointConfiguration(endpointConfiguration)
            .enablePathStyleAccess()
            .build();

    int maxUploadThreads = 5;
    TransferManager tm = TransferManagerBuilder
            .standard()
            .withS3Client(s3)
            .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
            .withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
            .build();

    PutObjectRequest request = new PutObjectRequest(existingBucketName, file.getName(), file);
    //request.putCustomRequestHeader("Access-Token", token);
    ProgressListener progressListener = progressEvent -> System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
    request.setGeneralProgressListener(progressListener);
    Upload upload = tm.upload(request);

    LocalDateTime uploadStartedAt = LocalDateTime.now();
    log.info("Starting upload at: " + uploadStartedAt);

    try {
        upload.waitForCompletion();
        //upload.waitForUploadResult();
        log.info("Upload completed. " + strSize);

    } catch (Exception e) {//AmazonClientException
        log.error("Error occurred while uploading file - " + strSize);
        e.printStackTrace();
    }
}

Answer:

When uploading a file (or parts of a multi-part file), the credentials that you use must last long enough for the upload to complete. You CANNOT refresh the credentials as there is no method to update AWS S3 that you are using new credentials for an already signed request.

You could break the upload into smaller files that upload quicker. Then only upload X parts. Refresh your credentials and upload Y parts. Repeat until all parts are uploaded. Then you will need to finish by combining the parts (which is a separate command). This is not a perfect solution as transfer speeds cannot be accurately controlled AND this means that you will have to write your own upload code (which is not hard).
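A rough sketch of that manual approach using the low-level multipart API (tokenAboutToExpire() and buildClientWithFreshToken() are hypothetical helpers, not part of the SDK; variable names follow the question):

AmazonS3 s3 = buildClientWithFreshToken();                        // hypothetical helper
InitiateMultipartUploadResult init =
        s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(existingBucketName, file.getName()));
List<PartETag> partETags = new ArrayList<>();
long partSize = 5L * 1024 * 1024;                                 // 5 MB minimum part size
long filePosition = 0;
for (int partNumber = 1; filePosition < file.length(); partNumber++) {
    if (tokenAboutToExpire()) {                                   // hypothetical helper
        s3 = buildClientWithFreshToken();                         // later parts are signed with the new token
    }
    long size = Math.min(partSize, file.length() - filePosition);
    UploadPartResult partResult = s3.uploadPart(new UploadPartRequest()
            .withBucketName(existingBucketName)
            .withKey(file.getName())
            .withUploadId(init.getUploadId())
            .withPartNumber(partNumber)
            .withFile(file)
            .withFileOffset(filePosition)
            .withPartSize(size));
    partETags.add(partResult.getPartETag());
    filePosition += size;
}
// The separate "combine the parts" command mentioned above
s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        existingBucketName, file.getName(), init.getUploadId(), partETags));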

Question:

I want to upload files using multipart uploads, but it isn't suitable for me to use ETags (because parts can be uploaded through different servers). Is it possible to create a concatenated file from parts that have already been uploaded, using the names of those files (or something else) instead of ETags?


Answer:

No, that isn't possible.

It is OK for different servers to upload their parts, but you need to figure out a way for them to notify a central authority of ETags that they received from S3, so that the central authority can complete the request at the end.

Question:

I am using Spring Boot and I have a MultipartFile from a request that I need to upload to S3, but the S3 putObject call only accepts a File (not a MultipartFile). How do I upload a MultipartFile to S3?


Answer:

You can do as follows :

public class S3Utility {

  public static String upload(String bucket, String fileName, InputStream inputStream, String contentType, AmazonS3 s3Client, boolean isPublic) {
    if (inputStream != null) {
      try {
        ObjectMetadata meta = new ObjectMetadata();
        // Note: available() happens to work for in-memory/file-backed streams,
        // but MultipartFile#getSize() is the more reliable content length.
        meta.setContentLength(inputStream.available());
        meta.setContentType(contentType);

        PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, fileName, inputStream, meta);

        if (isPublic) {
          putObjectRequest = putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
        }

        PutObjectResult result = s3Client.putObject(putObjectRequest);
        if (result != null) {
          return fileName;
        }

      } catch (Exception e) {
        log.error("Error uploading file on S3: ", e);
      }
    } else {
      log.warn("Content InputStream is null. Not uploading on S3");
    }
    return null;
  }
}


@RestController
public class TestController {

    @Autowired
    private AmazonS3 amazonS3Client;

    @PostMapping(path = "upload/file", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public ResponseEntity<String> uploadStampPageImage(@RequestPart(value = "file") MultipartFile imageFile) throws IOException {

        if (Objects.nonNull(imageFile) && !imageFile.isEmpty()) {
            String uploadedFilePath = S3Utility.upload("myBucket", "myFile.jpg", imageFile.getInputStream(),
                    imageFile.getContentType(), amazonS3Client, true);
            return ResponseEntity.ok(uploadedFilePath);
        }
        return ResponseEntity.badRequest().body("Not uploaded");
    }

}

Question:

I am trying to upload files into my S3 bucket using AWS Lambda in Java and i'm having some issues.

I am using APIGatewayProxyRequestEvent in my AWS Lambda function to get my file upload from Postman.

request.getBody() method of this event gives me a String representation of the image file whereas the S3.putObject takes as input an InputStream of the file to be uploaded.

How can I feed in request.getBody() to the S3.putObject() method in my Lambda code to make the File Upload work?


Answer:

  1. Create a File and write request.getBody() into it (use an OutputStream rather than a FileWriter if the body is binary data).
  2. Build a PutObjectRequest with the file created in step 1.
  3. Call s3Client.putObject(putObjectRequest) to put the object to S3.
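A minimal sketch of those steps (bucket and key names are made up; note that when API Gateway delivers binary payloads the body is usually base64-encoded, so decode it first instead of writing it as text):

byte[] body = Boolean.TRUE.equals(request.getIsBase64Encoded())
        ? Base64.getDecoder().decode(request.getBody())
        : request.getBody().getBytes(StandardCharsets.UTF_8);

// Step 1: write the body to a file (Lambda's /tmp is writable)
File tmp = File.createTempFile("upload-", ".bin");
Files.write(tmp.toPath(), body);

// Steps 2 and 3: build the request and put the object
s3Client.putObject(new PutObjectRequest("my-bucket", "uploads/image.bin", tmp));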

Question:

I am trying to do a multipart upload of a file using Amazon S3 server-side encryption (KMS). I am a little confused about whether I need the KMS key anywhere in my code, and if so, how do I add it to the Java code?

--Update

private static void saveMultipartData(String clientRegion, String bucketName, String awsFilePath, File file) {
    AmazonS3 s3client = AmazonS3Client.builder()
            .withRegion(clientRegion)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();

    ObjectMetadata objectMetadata = new ObjectMetadata();
    PutObjectRequest putRequest = null;
    try {
        try {
            putRequest = new PutObjectRequest(bucketName,
                    awsFilePath,
                    new FileInputStream(file),
                    objectMetadata);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        // Upload the object and check its encryption status.
        putRequest.putCustomRequestHeader("x-amz-server-side-encryption","aws:kms");
        putRequest.putCustomRequestHeader("x-amz-server-side-encryption-aws-kms-key-id","<<keyID>>");

        TransferManager tm = TransferManagerBuilder.standard().withMinimumUploadPartSize(100L).withMultipartUploadThreshold(100L)
                .withS3Client(s3client)
                .build();
        Upload upload = tm.upload(putRequest);

        upload.waitForCompletion();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Answer:

While you don't need to hard-code the KMS key ID in your code, your code does need access to it. What I mean is that you can, for example, pass this value in via an environment variable so the key ID stays out of the source. Once you have the key ID, a multipart upload can be initiated like this:

InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
initRequest.putCustomRequestHeader("x-amz-server-side-encryption", "aws:kms");
initRequest.putCustomRequestHeader("x-amz-server-side-encryption-aws-kms-key-id", kmsKey);

Question:

I receive a presigned URL to upload to S3. When I upload using the code below, I get a 403 status response. I tried setting the bucket policy to public in the web console but that has not solved the issue. Any other insights on how to fix it? I have also tried setting the ACL to public read/write.

HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("PUT");

    OutputStream out = connection.getOutputStream();

   // OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
    //out.write("This text uploaded as an object via presigned URL.");


    byte[] boundaryBytes = Files.readAllBytes(Paths.get(edmFile));
    out.write(boundaryBytes);
    out.close();

    // Check the HTTP response code. To complete the upload and make the object available,
    // you must interact with the connection object in some way.
    int responseCode = connection.getResponseCode();
    System.out.println("HTTP response code: " + responseCode);

Presigned url:

  private URL getUrl(String bucketName, String objectKey) {

        String clientRegion = "us-east-1";
        java.util.Date expiration = new java.util.Date();
        long expTimeMillis = expiration.getTime();
        expTimeMillis += 1000 * 60 * 10;
        expiration.setTime(expTimeMillis);

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(clientRegion)
                .build();
        GeneratePresignedUrlRequest generatePresignedUrlRequest =
                new GeneratePresignedUrlRequest(bucketName, objectKey)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);
        URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

        System.out.println("Pre-Signed URL: " + url.toString());
        return url;
    }

Answer:

As discussed, the pre-signed URL must be generated for the exact operation you intend to perform next.

Your pre-signed URL is created for the GET operation, which is why the PUT upload fails with an access-denied error.

Try updating withMethod to HttpMethod.PUT:

GeneratePresignedUrlRequest generatePresignedUrlRequest =
                new GeneratePresignedUrlRequest(bucketName, objectKey)
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);

Question:

I want to use S3 for hosting files which I upload via a Kotlin Spring Boot application. I followed the instructions, used various other documentation, and tried a few solutions for similar issues found on Stack Overflow. I always receive a 403 error. How do I set up S3 and IAM so I can upload the file? And how do I find out what's wrong? Any help would be appreciated.

I have activated access logging, which takes ages and hasn't helped me much yet, especially because it takes like 45 minutes to generate the logs. Ignoring the responses with status 200, the following messages appear in the logs (bucket represents the name of my bucket):

  • GET /bucket?encryption= HTTP/1.1" 404 ServerSideEncryptionConfigurationNotFoundError
  • GET /bucket?cors= HTTP/1.1" 404 NoSuchCORSConfiguration
  • GET /bucket?policy= HTTP/1.1" 404 NoSuchBucketPolicy
  • PUT /bucket?policy= HTTP/1.1" 400 MalformedPolicy
  • GET /bucket/?policyStatus HTTP/1.1" 404 NoSuchBucketPolicy
  • PUT /bucket?policy= HTTP/1.1" 403 AccessDenied

I build an AmazonS3 instance by

AmazonS3ClientBuilder.defaultClient()

I've checked the implementation and it retrieves the credentials from the environment variables I've set up.

To submit the file, I use the following method in my S3Service implementation:

private fun uploadFileToBucket(fileName: String, file: File) {
    s3client.putObject(
            PutObjectRequest(bucketName, fileName, file)
                    .withCannedAcl(CannedAccessControlList.PublicRead)
    )
}

This is my policy for the IAM user (the user inherits the policy from a group):

{
"Version": "2012-10-17",
"Statement": [
    {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": [
            "s3:PutAccountPublicAccessBlock",
            "s3:GetAccountPublicAccessBlock",
            "s3:ListAllMyBuckets",
            "s3:HeadBucket"
        ],
        "Resource": "*"
    },
    {
        "Sid": "VisualEditor1",
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::bucket",
            "arn:aws:s3:::bucket/*"
        ]
    }
]

}

And this is the bucket policy:

{
    "Version": "2012-10-17",
    "Id": "PolicyId",
    "Statement": [
        {
            "Sid": "StmtId",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account:user/username"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket"
        }
    ]
}

In the end, I want to be able to put files onto the bucket and want to provide public access to those. For example I want to upload images from an Angular app, uploading them via my Spring Boot application and display them on the Angular app. Right now I can't even upload them via Postman without a 403 error.


Answer:

The IAM policy could be shortened to this:

{
 "Version": "2012-10-17",
 "Statement": [
    {
        "Sid": "VisualEditor1",
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::bucket",
            "arn:aws:s3:::bucket/*"
        ]
    }
 ]
}

In other words, that second statement gives full S3 access, so the first statement in your IAM policy is pointless.

Your bucket policy probably has something wrong with it. It's hard to tell because you've replaced several values with placeholders I think. However, you don't need a bucket policy at all in this instance. I would just delete it.

Question:

How can I access a custom header from a server response when using TransferManager?

We have a custom header added to the response from our server; on the client side we use multipart upload with the default TransferManager.

Any suggestion on how I could hook into it?

Basically, I want to get hold of the response from return response.getAwsResponse(); found in the AmazonS3Client class, in the method

private <X, Y extends AmazonWebServiceRequest> X invoke(Request<Y> request,
                HttpResponseHandler<AmazonWebServiceResponse<X>> responseHandler,
                String bucket, String key, boolean isAdditionalHeadRequestToFindRegion) { 

That response holds the HTTP response from the server containing the custom header I'm after; basically it is a unique id sent back when the file is 100% complete, so that I can work with it.

I need to pass this custom header from the response back up to where I use the TransferManager and upload.waitForCompletion(). Also, I don't want to edit Amazon's code,

so does anyone know if there is an interface or some other object which gives me access to it?


Answer:

After some debugging inside the framework, I strongly believe that there is no way to access the HTTP response when using the TransferManager.

For what we are trying to do, we need to send a unique id from the server to the client when the file upload is completed and assembled.

Therefore, if you don't mind giving up the convenience of the TransferManager, you could write "your own TransferManager" and have full control. But again, on the client side we don't really want to add custom code; we want a standard and simple approach (that is just my scenario). If you decide to do it manually, it can be done - I have already tried it and it works!

So as an alternative we thought of sending the value from the server via the ETag, which is not great but will do the job and keeps the client side simple and clean.

Any suggestion on how to send this value back in a better way?

Upload up = tm.upload(bucketName, file.getName(), file);

UploadResult result = (UploadResult) ((UploadImpl) up).getMonitor().getFuture().get();
String uniqueIdFromServer = result.getETag();
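For what it's worth, a slightly cleaner sketch of that workaround: Upload.waitForUploadResult() already returns the UploadResult, so the cast through UploadImpl should not be needed.

Upload up = tm.upload(bucketName, file.getName(), file);
UploadResult result = up.waitForUploadResult();   // blocks until the upload completes
String uniqueIdFromServer = result.getETag();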

Question:

I need to upload multiple files simultaneously to AWS S3. I am uploading the files on different threads using an ExecutorService. When I start around 100 threads it starts throwing this exception:

com.amazonaws.SdkClientException: Unable to execute HTTP request: null
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325) ~[aws-java-sdk-s3-1.11.248.jar!/:na]
|   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272) ~[aws-java-sdk-s3-1.11.248.jar!/:na]
|   at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1749) ~[aws-java-sdk-s3-1.11.248.jar!/:na]
|   ... 9 common frames omitted
| Caused by: java.nio.channels.ClosedChannelException: null
|   at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) ~[na:1.8.0_171]
|   at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:147) ~[na:1.8.0_171]
|   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65) ~[na:1.8.0_171]
|   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) ~[na:1.8.0_171]
|   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) ~[na:1.8.0_171]
|   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream.read(MD5DigestCalculatingInputStream.java:128) ~[aws-java-sdk-s3-1.11.248.jar!/:na]
|   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[na:1.8.0_171]
|   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[na:1.8.0_171]
|   at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[na:1.8.0_171]
|   at com.amazonaws.internal.SdkBufferedInputStream.read(SdkBufferedInputStream.java:76) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:140) ~[httpcore-4.4.9.jar!/:4.4.9]
|   at com.amazonaws.http.RepeatableInputStreamRequestEntity.writeTo(RepeatableInputStreamRequestEntity.java:160) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156) ~[httpcore-4.4.9.jar!/:4.4.9]
|   at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:160) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238) ~[httpcore-4.4.9.jar!/:4.4.9]
|   at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:63) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123) ~[httpcore-4.4.9.jar!/:4.4.9]
|   at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.4.jar!/:4.5.4]
|   at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1236) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ~[aws-java-sdk-core-1.11.248.jar!/:na]
|   ... 20 common frames omitted

The Java AWS SDK version I am using is 1.11.248.

The ExecutorService is used as follows:

ExecutorService ioThreadPool = Executors.newFixedThreadPool(100);
    ConcurrentLinkedDeque<CompletableFuture<Void>> uploadTasks = new ConcurrentLinkedDeque<>();
    CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
        try {
            putObjectToS3();
        } catch (IOException e) {
            throw new InternalErrorException("IOException during file upload.", e);
        }
    }, ioThreadPool);
    uploadTasks.push(future);
    CompletableFuture.allOf(uploadTasks.toArray(new CompletableFuture<?>[0])).join();

I also set the maximum number of connections on the AWS client to 150.

Thanks


Answer:

It seems that you need to implement a retry mechanism using ClientConfiguration, as follows:

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(REGION)
        .withClientConfiguration(getClientConfiguration())
        .build();

private ClientConfiguration getClientConfiguration() {
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setRetryPolicy(new RetryPolicy(null, null, MAX_TRIES, false));
        return clientConfiguration;
}

You can add or update the above configuration in the S3 client builder to suit your application; I have used a basic configuration to show its use.

Now, try running your program with this configuration.

Question:

I have a bucket in S3 with the following structure and contents:

javaFolderA/
└── javaFolderB/
    └── javaFile.tmp
consoleFolderA/
└── consoleFolderB/
    └── consoleFile.tmp

The java* folders and file were uploaded via the Java SDK:

final File file = new File("C:\\javaFolderA\\javaFolderB\\javaFile.tmp");
client.putObject("testbucket", "javaFolderA/javaFolderB/javaFile.tmp", file);

The console* folders and file were created/uploaded from the web console (Clicking the "+ Create folder" button for each folder, then uploading the file with public read permissions).

In the web console, after clicking to create a new bucket, the following message is shown:

When you create a folder, S3 console creates an object with the above name appended by suffix "/" and that object is displayed as a folder in the S3 console.

So, as expected, with the folders and files above, we get 3 objects created in the bucket with the following keys:

  • consoleFolderA/
  • consoleFolderA/consoleFolderB/
  • consoleFolderA/consoleFolderB/consoleFile.tmp

The result of the SDK upload is a single object with the key javaFolderA/javaFolderB/javaFile.tmp. This makes sense, as we are only putting a single object, not three. However, this results in inconsistencies when listing the contents of a bucket. Even though there is only one actual file in each directory, listing the contents returns multiple objects for the console scenario.

My question is why is this the case, and how can I achieve consistent behavior? There doesn't seem to be a way to "upload a directory" via the SDK (In quotes because I know there aren't actually folders/directories).

From the CLI I can verify the number of objects and their keys:

C:\Users\avojak>aws s3api list-objects --bucket testbucket
{
    "Contents": [
        {
            "LastModified": "2018-01-02T22:43:55.000Z",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "StorageClass": "STANDARD",
            "Key": "consoleFolderA/",
            "Owner": {
                "DisplayName": "foo.bar",
                "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
            },
            "Size": 0
        },
        {
            "LastModified": "2018-01-02T22:44:02.000Z",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "StorageClass": "STANDARD",
            "Key": "consoleFolderA/consoleFolderB/",
            "Owner": {
                "DisplayName": "foo.bar",
                "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
            },
            "Size": 0
        },
        {
            "LastModified": "2018-01-02T22:44:16.000Z",
            "ETag": "\"968fe74fc49094990b0b5c42fc94de19\"",
            "StorageClass": "STANDARD",
            "Key": "consoleFolderA/consoleFolderB/consoleFile.tmp",
            "Owner": {
                "DisplayName": "foo.bar",
                "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
            },
            "Size": 69014
        },
        {
            "LastModified": "2018-01-02T22:53:13.000Z",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "StorageClass": "STANDARD",
            "Key": "javaFolderA/javaFolderB/javaFile.tmp",
            "Owner": {
                "DisplayName": "foo.bar",
                "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
            },
            "Size": 0
        }
    ]
}

Answer:

If you prefer the console implementation then you need to emulate it. That means that your SDK client needs to create the intermediate 'folders', when necessary. You can do this by creating zero-sized objects whose key ends in forward-slash (if that's your 'folder' separator).
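A small sketch of that emulation, reusing the bucket and key names from the question (putting an empty stream is one common way to create the zero-byte 'folder' markers):

ObjectMetadata folderMetadata = new ObjectMetadata();
folderMetadata.setContentLength(0);

// Zero-byte objects whose keys end in "/" show up as folders in the console
client.putObject("testbucket", "javaFolderA/",
        new ByteArrayInputStream(new byte[0]), folderMetadata);
client.putObject("testbucket", "javaFolderA/javaFolderB/",
        new ByteArrayInputStream(new byte[0]), folderMetadata);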

The AWS console behaves this way, allowing you to create 'folders', because many AWS console users are more comfortable with the notion of folders and files than they are with objects (and keys).

It's rare, in my opinion, to need to do this, however. Your SDK clients should be implemented to handle both the presence and absence of these 'folders'. More info here.

Question:

I am using Amazon S3 for uploading a media file and I get the following error:

E/UploadTask: Failed to upload: 15 due to Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null) E/Exeception: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null), S3 Extended Request ID: null E/percentage: 100 15 E/statechange: FAILED

I have used the following code for it; please check it.

 CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
            context, NetworkTask.BASE_AWS_KEY, Regions.US_EAST_1);// Region

   AmazonS3Client s3 = new AmazonS3Client(credentialsProvider);

    s3.setRegion(Region.getRegion(Regions.US_EAST_1));

    transferUtility = new TransferUtility(s3, context);
        TransferObserver transferObserver = transferUtility.upload(
                "MY-BUCKET-NAME"     /* The bucket to upload to */
                , fileUploadName, /* The key for the uploaded object */
                fileToUpload       /* The file where the data to upload exists */
        );

        transferObserver.setTransferListener(new TransferListener() {

            @Override
            public void onStateChanged(int id, TransferState state) {
                Log.e("statechange", state + "");

                if (String.valueOf(state).equalsIgnoreCase("COMPLETED")) {
                    fileUploadInterface.getUploadFileUrl(String.valueOf(s3.getUrl("zargow.vcard.image", fileUploadName)), service_id);
                }
            }

            @Override
            public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
                int percentage = (int) ((bytesCurrent * 100) / bytesTotal);
                Log.e("percentage", percentage + "" + "  " + id);
            }

            @Override
            public void onError(int id, Exception ex) {
                Log.e("Exeception", ex.toString());
            }

        });

4 out of 5 times I get the above error, and one time I get a success response.

I have used the following Gradle dependency for it; please check it:

 compile('com.amazonaws:aws-android-sdk-s3:2.2.13') {
        exclude module: 'gson'
    }

I have visited the following sites before posting the question but did not get the expected result. Please check the links: 1. First link 2. Second link 3. Third link 4. Fourth link 5. Fifth link

Please check it, and let me know what I did wrong in the code. Please help me sort out this problem.


Answer:

Ok well this took me a ton of time to get right, but I'm going to share it with you ;). Below is a CognitoManager class I wrote to manage the credentials needed for authentication as well as the S3 information. I don't know your full app or what you are using, so I'm just giving you the full thing.

import android.content.Context;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoDevice;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoUserAttributes;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoUserCodeDeliveryDetails;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoUserDetails;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoUserPool;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.CognitoUserSession;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.continuations.AuthenticationContinuation;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.continuations.AuthenticationDetails;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.continuations.ChallengeContinuation;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.continuations.ForgotPasswordContinuation;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.continuations.MultiFactorAuthenticationContinuation;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.AuthenticationHandler;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.ForgotPasswordHandler;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.GenericHandler;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.GetDetailsHandler;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.UpdateAttributesHandler;
import com.amazonaws.mobileconnectors.cognitoidentityprovider.handlers.VerificationHandler;
import com.amazonaws.regions.Regions;

import java.util.List;
import java.util.Locale;

/**
 * Created by App Studio 35 on 7/27/17.
 */
public class CognitoManager {

    /*///////////////////////////////////////////////////////////////
    // CONSTANTS
    *////////////////////////////////////////////////////////////////
    public static class S3BucketInfo {
        public static final String DEV_BUCKET_NAME = "<YOUR-PHOTOS-STAGING-BUCKET>";

        public static final String PRD_BUCKET_NAME = "<YOUR-PHOTOS-PROD-BUCKET>";

    }
    public static class CognitoProviderInfo {
        public static final Regions DEV_REGION = Regions.US_EAST_1;

        public static final Regions PRD_REGION = Regions.US_EAST_1;

    }
    public static class S3ClientInfo {
        public static final String PRD_CLIENT_ACCESS_KEY = "<YOUR-CLIENT-ACCESS-KEY>";
        public static final String PRD_CLIENT_SECRET_KEY = "<YOUR-CLIENT-SECRET-KEY>";

    }
    public static class CognitoUserPoolInfo {
        public static final String DEV_USER_POOL_ID = "us-east-1_<YOUR-LETTERS>"; //DON'T USE EAST IF YOU ARE NOT EAST
        public static final String DEV_APP_PROVIDER_CLIENT_ID = "<YOUR-APP-STAGE-PROVIDER-CLIENT-ID-FOR-ANDROID>";
        public static final String DEV_APP_PROVIDER_CLIENT_SECRET = "<YOUR-APP-STAGE-PROVIDER-CLIENT-SECRET-FOR-ANDROID-PROVIDER>";

        public static final String PRD_USER_POOL_ID = "us-east-1_<YOUR LETTERS>"; //DON'T USE EAST IF YOU ARE NOT EAST
        public static final String PRD_APP_PROVIDER_CLIENT_ID = "<YOUR-APP-PROD-PROVIDER-CLIENT-ID-FOR-ANDROID>";
        public static final String PRD_APP_PROVIDER_CLIENT_SECRET = "<YOUR-APP-PROD-PROVIDER-CLIENT-ID-FOR-ANDROID>";

    }


    /*///////////////////////////////////////////////////////////////
    // MEMBERS
    *////////////////////////////////////////////////////////////////
    private static final String TAG = Globals.SEARCH_STRING + CognitoManager.class.getSimpleName();
    private static CognitoManager mInstance;
    private static CognitoUserPool mUserPool;
    private static String mUser;
    private static boolean mIsEmailVerified;
    private static boolean mIsPhoneVerified;
    private static CognitoUserSession mCurrentUserSession;


    /*///////////////////////////////////////////////////////////////
    // PROPERTIES
    *////////////////////////////////////////////////////////////////
    public static String getUserPoolID(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
                return CognitoUserPoolInfo.DEV_USER_POOL_ID;

            case PRD:
            default:
                return CognitoUserPoolInfo.PRD_USER_POOL_ID;

        }

    }
    public static String getClientID(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
                return CognitoUserPoolInfo.DEV_APP_PROVIDER_CLIENT_ID;

            case PRD:
            default:
                return CognitoUserPoolInfo.PRD_APP_PROVIDER_CLIENT_ID;

        }

    }
    public static String getClientSecret(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
                return CognitoUserPoolInfo.DEV_APP_PROVIDER_CLIENT_SECRET;

            case PRD:
            default:
                return CognitoUserPoolInfo.PRD_APP_PROVIDER_CLIENT_SECRET;

        }

    }
    public static String getS3ClientID(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
            case PRD:
            default:
                return S3ClientInfo.PRD_CLIENT_ACCESS_KEY;

        }

    }
    public static String getS3ClientSecret(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
            case PRD:
            default:
                return S3ClientInfo.PRD_CLIENT_SECRET_KEY;

        }

    }
    public static String getS3BucketName(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
                return S3BucketInfo.DEV_BUCKET_NAME;

            case PRD:
            default:
                return S3BucketInfo.PRD_BUCKET_NAME;

        }
    }
    public static Regions getCognitoRegion(){
        switch (AMEnvironment.getCurrentEnvironment()){
            case DEV:
            case QA:
            case STG:
                return CognitoProviderInfo.DEV_REGION;

            case PRD:
            default:
                return CognitoProviderInfo.PRD_REGION;

        }

    }
    public static void setUser(String user){
        mUser = user;
    }
    public static String getUser(){
        return mUser;
    }
    public static CognitoUserPool getUserPool(){
        return mUserPool;

    }
    public static CognitoUserSession getCurrentUserSession(){
        return mCurrentUserSession;

    }
    public static void setCurrentUserSession(CognitoUserSession session){
        mCurrentUserSession = session;

    }


    /*///////////////////////////////////////////////////////////////
    // INIT
    *////////////////////////////////////////////////////////////////
    public static void init(Context context) {
        if (mInstance != null && mUserPool != null) {
            return;

        }

        if (mInstance == null) {
            mInstance = new CognitoManager();

        }

        if (mUserPool == null) {
            // Create a user pool with default ClientConfiguration
            mUserPool = new CognitoUserPool(context, getUserPoolID(), getClientID(), getClientSecret(), getCognitoRegion());

        }

    }


    /*///////////////////////////////////////////////////////////////
    // EXTERNAL METHODS
    *////////////////////////////////////////////////////////////////
    public static void signInUser(final String user, final String password, final AuthenticationHandler authenticationHandler){
        setUser(user);
        getUserPool().getUser(user).getSessionInBackground(new AuthenticationHandler() {
            @Override
            public void onSuccess(final CognitoUserSession userSession, final CognitoDevice newDevice) {
                setCurrentUserSession(userSession);
                rememberTrustedDevice(newDevice);
                getUserDetails(new GetDetailsHandler() {
                    @Override
                    public void onSuccess(CognitoUserDetails cognitoUserDetails) {
                        try{
                            mIsEmailVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_EMAIL_VALIDATED_ATTRIBUTE));//"email_verified" is the string
                            //mIsPhoneVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_EMAIL_VALIDATED_ATTRIBUTE));

                        }catch (Exception ex){


                        }

                        authenticationHandler.onSuccess(userSession, newDevice);

                    }
                    @Override
                    public void onFailure(Exception exception) {
                        authenticationHandler.onSuccess(userSession, newDevice);

                    }
                });

            }
            @Override
            public void getAuthenticationDetails(AuthenticationContinuation authenticationContinuation, String UserId) {
                Locale.setDefault(Locale.US);
                AuthenticationDetails authenticationDetails = new AuthenticationDetails(user, password, null);
                authenticationContinuation.setAuthenticationDetails(authenticationDetails);
                authenticationContinuation.continueTask();
                authenticationHandler.getAuthenticationDetails(authenticationContinuation, UserId);

            }
            @Override
            public void getMFACode(MultiFactorAuthenticationContinuation continuation) {
                authenticationHandler.getMFACode(continuation);

            }
            @Override
            public void authenticationChallenge(ChallengeContinuation continuation) {
                authenticationHandler.authenticationChallenge(continuation);
                //TODO implement "new_password_required" or "phone_needs_verified" or "email_needs_verified" instead of passing back lazily use correct callbacks of phone or password etc.. for cleanliness

            }
            @Override
            public void onFailure(Exception exception) {
                authenticationHandler.onFailure(exception);

            }

        });

    }
    public static void signOutCurrentUser(){
        if(getUserPool().getCurrentUser() != null) {
            getUserPool().getCurrentUser().signOut();

        }

    }
    public static void rememberTrustedDevice(CognitoDevice newDevice){
        if(newDevice != null) {
            newDevice.rememberThisDeviceInBackground(new GenericHandler() {
                @Override
                public void onSuccess() {
                    //not really sure if we need to do anything with this info or not just yet

                }

                @Override
                public void onFailure(Exception exception) {
                    //Failed to save device

                }

            });

        }

    }
    public static void refreshToken(final GenericHandler genericHandler){ //called from background thread to keep session alive
        if(getUserPool() == null || getUserPool().getCurrentUser() == null || getUserPool().getCurrentUser().getUserId() == null){
            genericHandler.onFailure(new Exception("Invalid User Token"));

        }else{
            getUserPool().getCurrentUser().getSessionInBackground(new AuthenticationHandler() {
                @Override
                public void onSuccess(CognitoUserSession userSession, CognitoDevice newDevice) {
                    setCurrentUserSession(userSession);
                    rememberTrustedDevice(newDevice);
                    getUserDetails(new GetDetailsHandler() {
                        @Override
                        public void onSuccess(CognitoUserDetails cognitoUserDetails) {
                            try{
                                mIsEmailVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_EMAIL_VALIDATED_ATTRIBUTE));
                                //mIsPhoneVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_PHONE_VALIDATED_ATTRIBUTE)); //not used in my current app

                            }catch (Exception ex){


                            }

                            genericHandler.onSuccess();

                        }
                        @Override
                        public void onFailure(Exception exception) {
                            genericHandler.onSuccess();
                        }
                    });

                }
                @Override
                public void getAuthenticationDetails(AuthenticationContinuation authenticationContinuation, String UserId) {
                    genericHandler.onFailure(new Exception("Invalid User Token"));

                }
                @Override
                public void getMFACode(MultiFactorAuthenticationContinuation continuation) {
                    genericHandler.onFailure(new Exception("Invalid User Token"));

                }
                @Override
                public void authenticationChallenge(ChallengeContinuation continuation) {
                    genericHandler.onFailure(new Exception("Invalid User Token"));

                }
                @Override
                public void onFailure(Exception exception) {
                    genericHandler.onFailure(new Exception("Invalid User Token"));

                }

            });

        }

    }
    /**
     * Used to update cached booleans for isEmailVerified or isPhoneVerified
     */
    public static void phoneOrEmailChanged(){
        if(getUserPool().getCurrentUser() == null){
            return;

        }

        getUserDetails(new GetDetailsHandler() {
            @Override
            public void onSuccess(CognitoUserDetails cognitoUserDetails) {
                try{
                    mIsEmailVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_EMAIL_VALIDATED_ATTRIBUTE));
                    //mIsPhoneVerified = Boolean.parseBoolean(cognitoUserDetails.getAttributes().getAttributes().get(Globals.CUSTOM_USER_ATTRIBUTES.IS_PHONE_VALIDATED_ATTRIBUTE)); //"phone_number" is string, but not used in my current app

                }catch (Exception ex){


                }

            }
            @Override
            public void onFailure(Exception exception) {

            }

        });

    }
    public static boolean isPhoneVerified(){
        return true; //for now we are not verifying phone
        //return mIsPhoneVerified;
    }
    public static boolean isEmailVerified(){
        return mIsEmailVerified;
    }
    public static void getUserDetails(GetDetailsHandler handler){
        getUserPool().getCurrentUser().getDetailsInBackground(handler);

    }
    public static void updatePhoneNumber(String phone, final GenericHandler handler){
        CognitoUserAttributes userAttributes = new CognitoUserAttributes();
        userAttributes.addAttribute(Globals.CUSTOM_USER_ATTRIBUTES.PHONE_ATTRIBUTE, PhoneNumberHelper.getStrippedNumberWithCountryCode(phone));

        CognitoManager.getUserPool().getUser(CognitoManager.getUserPool().getCurrentUser().getUserId()).updateAttributesInBackground(userAttributes, new UpdateAttributesHandler() {
            @Override
            public void onSuccess(List<CognitoUserCodeDeliveryDetails> attributesVerificationList) {
                handler.onSuccess();

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);

            }

        });
    }
    public static void updateEmail(String email, final GenericHandler handler){
        CognitoUserAttributes userAttributes = new CognitoUserAttributes();
        userAttributes.addAttribute(Globals.CUSTOM_USER_ATTRIBUTES.EMAIL_ATTRIBUTE, email);
        CognitoManager.getUserPool().getUser(CognitoManager.getUserPool().getCurrentUser().getUserId()).updateAttributesInBackground(userAttributes, new UpdateAttributesHandler() {
            @Override
            public void onSuccess(List<CognitoUserCodeDeliveryDetails> attributesVerificationList) {
                handler.onSuccess();

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);
            }

        });

    }
    public static void updatePassword(String oldPassword, String newPassword, final GenericHandler handler){
        getUserPool().getUser().changePasswordInBackground(oldPassword, newPassword, new GenericHandler() {
            @Override
            public void onSuccess() {
                handler.onSuccess();

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);
            }

        });
    }
    public static void forgotPassword(String email, final ForgotPasswordHandler handler){
        getUserPool().getUser(email).forgotPasswordInBackground(new ForgotPasswordHandler() {
            @Override
            public void onSuccess() {
                handler.onSuccess();
            }
            @Override
            public void getResetCode(ForgotPasswordContinuation continuation) {
                handler.getResetCode(continuation);
            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);

            }

        });
    }
    public static void sendVerificationEmail(final VerificationHandler handler){
        getUserPool().getCurrentUser().getAttributeVerificationCodeInBackground(Globals.CUSTOM_USER_ATTRIBUTES.EMAIL_ATTRIBUTE, new VerificationHandler() {
            @Override
            public void onSuccess(CognitoUserCodeDeliveryDetails verificationCodeDeliveryMedium) {
                handler.onSuccess(verificationCodeDeliveryMedium);

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);

            }

        });

    }
    public static void sendVerificationText(final VerificationHandler handler){
        getUserPool().getCurrentUser().getAttributeVerificationCodeInBackground(Globals.CUSTOM_USER_ATTRIBUTES.PHONE_ATTRIBUTE, new VerificationHandler() {
            @Override
            public void onSuccess(CognitoUserCodeDeliveryDetails verificationCodeDeliveryMedium) {
                handler.onSuccess(verificationCodeDeliveryMedium);

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);

            }

        });

    }
    public static void verifyAttributesInBackground(String attribute, String code, final GenericHandler handler){
        CognitoManager.getUserPool().getCurrentUser().verifyAttributeInBackground(attribute, code, new GenericHandler() {
            @Override
            public void onSuccess() {
                handler.onSuccess();

            }
            @Override
            public void onFailure(Exception exception) {
                handler.onFailure(exception);

            }

        });

    }

}

Next up, how to use the S3 piece of it:

private void uploadImageToS3(String filePath){
        final File newImageFile = new File(filePath);
        showProgressDialog(TAG, getString(R.string.loading_please_wait));

        //For auth route
        BasicAWSCredentials credentials = new BasicAWSCredentials(CognitoManager.getS3ClientID(), CognitoManager.getS3ClientSecret());

        AmazonS3Client s3 = new AmazonS3Client(credentials);
        TransferUtility transferUtility = new TransferUtility(s3, this);
        TransferObserver observer = transferUtility.upload(CognitoManager.getS3BucketName(), newImageFile.getName(), newImageFile);
        observer.setTransferListener(new TransferListener() {
            @Override
            public void onStateChanged(int id, TransferState state) {
            if(state.compareTo(TransferState.COMPLETED) == 0){
                String imgURLOfUploadComplete = "https://s3.amazonaws.com/" + CognitoManager.getS3BucketName() + "/" + newImageFile.getName();
                hideProgressDialog(TAG);
                Intent intent = new Intent();
                intent.putExtra(Globals.INTENT_KEYS.KEY_IMAGE_URL, imgURLOfUploadComplete);
                setResult(Activity.RESULT_OK, intent);
                if(newImageFile.exists()){
                    newImageFile.delete();

                }
                finish();

            }

        }
        @Override
        public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
            if(bytesTotal != 0) {
                //For viewing progress
                int percentage = (int) (bytesCurrent * 100 / bytesTotal);
            }
        }
        @Override
        public void onError(int id, Exception ex) {
            A35Log.e(TAG, getString(R.string.error_uploading_s3_part1) + id + getString(R.string.error_uploading_s3_part2) + ex.getMessage());
            hideProgressDialog(TAG);
            showDialogMessage(getString(R.string.error_failed_create_image_alert_id), ex.getMessage());

        }

    });

}

And that's it. You now have a fully functioning example of Cognito and S3. Put in your own keys, and make sure you set up the Android provider for your app in S3 if you are using that piece. If you are only using the S3 piece with an id and secret, you probably don't need the CognitoHelper code at all: just use your secret, id, and bucket names for your environment and be done. I used the same security group and id/secret for prod and stage, separated only by buckets, but you can organize that however you want.

Question:

I have just started using AWS for my project. I want to write a project that uploads critical files to an S3 bucket. I do not want to expose any secret keys, otherwise any other developer or user could access the uploaded documents. Please provide some pointers on how to begin.

My Current Implementation:

    return new AmazonS3Client(new AWSCredentials() {
        @Override
        public String getAWSAccessKeyId() {
            return accessKey;
        }

        @Override
        public String getAWSSecretKey() {
            return accessKeySecret;
        }
    }, clientConfiguration);

Then I use amazonS3Client.putObject(putReq); to upload the file. So here I am exposing my keys, which enables other colleagues to download/view the files. Anyone can use them to download/upload files with s3cmd, browser plugins, etc.

On reading the AWS docs, I learned that I can use an EC2 instance and set up an IAM instance profile, but I am not sure how to do this in Java code. Please provide a link and an example.


Answer:

Look at the InstanceProfileCredentialsProvider class. It gets IAM credentials (access/secret key) from the instance's metadata. Launch your instance under an IAM role that has a policy that permits access to S3.

    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withCredentials(new InstanceProfileCredentialsProvider())
            .build();

Source reference
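As a minimal usage sketch (the bucket name, key, and local file are placeholders), the client built from the instance profile is used like any other S3 client, so no keys appear anywhere in the code:

    // Assumes the EC2 instance was launched with an IAM role that allows
    // s3:PutObject on the target bucket.
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withCredentials(new InstanceProfileCredentialsProvider())
            .build();

    s3Client.putObject(new PutObjectRequest(
            "my-critical-files-bucket",               // hypothetical bucket
            "reports/monthly-report.pdf",             // hypothetical key
            new File("/tmp/monthly-report.pdf")));    // local file to upload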

Question:

For example, I generate an upload link using GeneratePresignedUrlRequest. Is there any possibility to set the lifetime of the object that will later be uploaded to S3 through this link? I mean, it should be automatically deleted, for example, 7 days after upload. So how can I do this at the step of generating the upload link?


Answer:

You can't specify the object's lifetime via the presigned URL. What you can do is upload all the objects to a bucket that has a lifecycle rule defined that will delete them 7 days after they've been uploaded.
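For reference, a minimal sketch of defining such a lifecycle rule with the Java SDK; the bucket name and rule id are assumptions:

    // Expire every object in the bucket 7 days after it was created.
    BucketLifecycleConfiguration.Rule expireRule = new BucketLifecycleConfiguration.Rule()
            .withId("expire-uploads-after-7-days")    // hypothetical rule id
            .withPrefix("")                           // apply to the whole bucket
            .withExpirationInDays(7)
            .withStatus(BucketLifecycleConfiguration.ENABLED);

    s3Client.setBucketLifecycleConfiguration("upload-bucket",    // hypothetical bucket
            new BucketLifecycleConfiguration().withRules(expireRule));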

Question:

I am trying to generate an AWS presigned URL with an MD5 content hash. The URL is generated, but when I use it to upload the content, the upload fails with an HTTP 403 error.

Java code to generate the presigned URL is as below:

    GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, key, httpMethod);
    byte[] resultByte = DigestUtils.md5(/*byte array*/);
    String streamMD5 = new String(java.util.Base64.getEncoder().encode(resultByte));
    generatePresignedUrlRequest.setContentMd5(streamMD5);
    s3Client.generatePresignedUrl(generatePresignedUrlRequest);

Java code to upload data using pre signed url:

   HttpURLConnection connection;
    try {
        connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        OutputStream output = connection.getOutputStream();
        output.write(getImage());
        output.flush();
        assertEquals(OK, connection.getResponseCode());
    } catch (IOException e) {
        LOGGER.info("Exception: {}", e);
    }

I am not sure what needs to be added to the upload code to make it work.


Answer:

I was able to resolve it by adding following piece of code:

byte[] resultByte = DigestUtils.md5(/*byte array*/);    
String streamMD5 = new String(java.util.Base64.getEncoder().encode(resultByte));
connection.setRequestProperty("content-md5", streamMD5);

Make sure that the MD5 value you add while building the presigned URL is the same as the one used to set the "content-md5" header.

Question:

I would like to upload a new file to my test-bucket on Amazon S3.

Here is the java code:

    AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
    java.util.Date expiration = new java.util.Date();
    long msec = expiration.getTime();
    msec += 1000 * 60 * 60; // Add 1 hour.
    expiration.setTime(msec);
    GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest("test-bucket", filename);
    generatePresignedUrlRequest.setMethod(HttpMethod.GET);
    generatePresignedUrlRequest.setExpiration(expiration);
    URL s = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

However I keep getting:

"The specified key does not exist." for the filename var.

How do I make this code work with a new file?


Answer:

From the looks of it, GeneratePresignedUrlRequest is for existing objects in S3.

public GeneratePresignedUrlRequest(String bucketName, String key)

Creates a new request for generating a pre-signed URL that can be used as part of an HTTP GET request to access the Amazon S3 object stored under the specified key in the specified bucket. Parameters: bucketName - The name of the bucket containing the desired Amazon S3 object. key - The key under which the desired Amazon S3 object is stored.

You can use one of the putObject methods in AmazonS3Client class.

PutObjectResult putObject(PutObjectRequest putObjectRequest) Uploads a new object to the specified Amazon S3 bucket.

PutObjectResult putObject(String bucketName, String key, File file) Uploads the specified file to Amazon S3 under the specified bucket and key name.

PutObjectResult putObject(String bucketName, String key, InputStream input, ObjectMetadata metadata) Uploads the specified input stream and object metadata to Amazon S3 under the specified bucket and key name.

Once you put the object into S3, you can then use the key to instantiate a GeneratePresignedUrlRequest object and get the URL.
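A minimal sketch of that flow, with the bucket, key, and local file as placeholders: upload the object first, then generate a presigned GET URL for the key you just wrote:

    AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

    // 1. Put the object so the key actually exists in the bucket.
    s3Client.putObject("test-bucket", "reports/new-file.txt",       // hypothetical key
            new File("/tmp/new-file.txt"));

    // 2. A presigned GET URL for that key will now resolve.
    Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000); // 1 hour
    GeneratePresignedUrlRequest urlRequest =
            new GeneratePresignedUrlRequest("test-bucket", "reports/new-file.txt")
                    .withMethod(HttpMethod.GET)
                    .withExpiration(expiration);
    URL url = s3Client.generatePresignedUrl(urlRequest);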

Question:

I checked the documentation and the code examples. However, I only found a way to upload a single file. If I set the file path to a folder, the program throws an exception:

Exception in thread "main" com.amazonaws.SdkClientException: Unable to calculate MD5 hash: /path/to/folder (Is a directory)

I noticed that the C# code example has a way to upload a folder, but the Java one doesn't. Does this mean Java cannot upload a folder to AWS S3?


Answer:

The Amazon S3 API only supports uploading one object per API call. There is no API call to upload a folder.

Your code would need to loop through each file in the folder and upload them individually.
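A minimal sketch of that loop, with the bucket name and local folder path as placeholders; the SDK's high-level TransferManager class also offers an uploadDirectory convenience method that does this iteration for you:

    AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
    File folder = new File("/path/to/folder");            // hypothetical local folder

    // Upload every regular file in the folder as its own S3 object.
    File[] files = folder.listFiles();
    if (files != null) {
        for (File file : files) {
            if (file.isFile()) {
                s3Client.putObject("my-bucket",            // hypothetical bucket
                        "uploads/" + file.getName(),       // key derived from the file name
                        file);
            }
        }
    }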

Question:

I'm currently uploading a 250 MB file with 1 million records into an AWS S3 bucket (B1). This triggers a Lambda (L1 - 1.5 GB, 3 mins) which reads the file, groups the records by some criteria, and writes about 25K files back to S3 into a different bucket (B2).

Now, a notification event configured on bucket B2 generates 25K events (requests) to a different Lambda (L2 - 512 MB, 2 mins, concurrency 2). This Lambda calls a Java-based microservice which makes an entry into the DB after processing, taking about 1-2 seconds per call.

The problem here is that once the 2nd Lambda (L2) is triggered, there's no way to stop it. It runs for hours and does not receive any other event until it has processed all the queued events completely, and I have no control over the S3 events that were already triggered.

Can someone please explain how events triggered on S3 by a file upload are processed (the architecture), and how to get fine-grained control over triggered S3 events?

Is there anything I can do on the AWS Lambda side to stop S3 events that were already triggered?


Answer:

I don't think setting a notification event on B2 is the best option when you are writing 25K objects at a time. I think the process can be simplified.

  • Lambda L1, which writes the 25K objects to B2, can build an array of the object keys it has written and put that list into B2 as a single object. Make sure it is written to a separate folder, and that the notification event is set on that folder rather than on the location where the 25K files are written.

  • L2 will then be triggered only when you write the file containing the keys of the 25K objects, which it can pass to your microservice.

Another option, using SNS:

  • Lambda L1, which writes the 25K objects to B2, can create an array of the object keys it has written and publish it to an SNS topic, as sketched below. The SNS message size limit is 256 KB, which is enough for your use case.

  • Your microservice can subscribe to the SNS topic to receive the object keys and make the entries in the DB.
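A minimal sketch of the SNS publish from L1, assuming an existing topic and the v1 Java SDK; the topic ARN and the key list are placeholders:

    // Classes from com.amazonaws.services.sns (aws-java-sdk-sns).
    // Publish the list of written object keys as one SNS message instead of
    // relying on 25K individual S3 notification events.
    AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
    String topicArn = "arn:aws:sns:us-east-1:123456789012:written-keys"; // hypothetical topic
    String keysAsJson = "[\"out/part-0001.csv\",\"out/part-0002.csv\"]"; // hypothetical keys
    sns.publish(new PublishRequest(topicArn, keysAsJson));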

Question:

My method receives a BufferedReader and transforms each line of my file. However, I need to upload the output of this transformation to an S3 bucket. The files are quite large, so I would like to be able to stream my upload into an S3 object.

To do so, I think I need to use a multipart upload; however, I'm not sure I'm using it correctly, as nothing seems to get uploaded.

Here is my method:

public void transform(BufferedReader reader)
{
        Scanner scanner = new Scanner(reader);
        String row;
        List<PartETag> partETags = new ArrayList<>();

        InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest("output-bucket", "test.log");
        InitiateMultipartUploadResult result = amazonS3.initiateMultipartUpload(request);

        while (scanner.hasNext()) {
            row = scanner.nextLine();

            InputStream inputStream = new ByteArrayInputStream(row.getBytes(Charset.forName("UTF-8")));

            log.info(result.getUploadId());

            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName("output-bucket")
                    .withKey("test.log")
                    .withUploadId(result.getUploadId())
                    .withInputStream(inputStream)
                    .withPartNumber(1)
                    .withPartSize(5 * 1024 * 1024);

            partETags.add(amazonS3.uploadPart(uploadRequest).getPartETag());
        }

        log.info(result.getUploadId());

        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                "output-bucket",
                "test.log",
                result.getUploadId(),
                partETags);

        amazonS3.completeMultipartUpload(compRequest);
}

Answer:

Oh, I see. The UploadPartRequest needs to read from an input stream. This is a valid constraint, since in general you can only write to output streams.

You have probably heard that you can copy data from an InputStream to a ByteArrayOutputStream, take the resulting byte array, create a ByteArrayInputStream from it, and feed that to your request object. BUT: all the data would then sit in one byte array at some point in time. Since your use case is about large files, that is not acceptable.

What you need is a custom input stream class which transforms the original input stream into another input stream. It requires you to work at the byte level of abstraction, but it would offer the best performance. I suggest asking a new question if you would like to know more about that.

Is your transformation code already finished and you don't want to touch it again? Then there is another approach: you can "connect" an output stream to an input stream by using pipes (see the sketch below): https://howtodoinjava.com/java/io/convert-outputstream-to-inputstream-example/. The catch: you are dealing with multi-threading here.
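A minimal sketch of the pipe approach, assuming the BufferedReader from your method and a hypothetical transform() helper; the bucket and key are placeholders. Note that putObject with an unknown content length makes the SDK buffer the stream, so a production version would combine this with multipart uploads or TransferManager:

    PipedOutputStream pipedOut = new PipedOutputStream();
    PipedInputStream pipedIn = new PipedInputStream(pipedOut);

    // Writer thread: transform each line and push it into the pipe.
    Thread writerThread = new Thread(() -> {
        try (Scanner scanner = new Scanner(reader);
             OutputStreamWriter out = new OutputStreamWriter(pipedOut, StandardCharsets.UTF_8)) {
            while (scanner.hasNextLine()) {
                out.write(transform(scanner.nextLine()));   // transform() is hypothetical
                out.write('\n');
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    });
    writerThread.start();

    // Main thread: stream whatever arrives in the pipe straight to S3.
    amazonS3.putObject("output-bucket", "test.log", pipedIn, new ObjectMetadata());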

Question:

I am trying to update the code of mylambda from dummylambda, but I think there is some problem with my implementation.

Whenever a new myobject.jar gets uploaded to mybucket, dummylambda will be triggered, which will deploy myobject.jar to mylambda.

When I write this code I am not able to import the proper packages. Below are the specifications:

IDE: Eclipse Java EE IDE for Web Developers(Version: 2018-09 (4.9.0))

https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambda.html#updateFunctionCode-com.amazonaws.services.lambda.model.UpdateFunctionCodeRequest-

Java code:

package com.amazonaws.lambda.dummy;

import com.amazonaws.AmazonWebServiceResult;
import com.amazonaws.ResponseMetadata;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaAsyncClient;
import com.amazonaws.services.lambda.AWSLambdaClient;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.model.UpdateFunctionCodeResult;

public class LambdaFunctionHandler  {

     UpdateFunctionCodeResult updateFunctionCode(UpdateFunctionCodeRequest updateFunctionCodeRequest)
       {
           AWSLambda client = AWSLambdaClientBuilder.standard().build();
           UpdateFunctionCodeRequest request = new UpdateFunctionCodeRequest().withFunctionName("mylambda-arn")
                    .withS3Bucket("mybucket-name").withS3Key("myobject-key")
                    .withPublish(true);
            UpdateFunctionCodeResult response = client.updateFunctionCode(request);

        return response;

       }   
}

POM:

 <project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.amazonaws.lambda</groupId>
<artifactId>dummy</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>3.1.0</version>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>assemble-all</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>



<dependencies>


    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-bom</artifactId>
        <version>1.11.475</version>
        <type>pom</type>
        <scope>import</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk</artifactId>
        <version>1.11.475</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-lambda -->
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-lambda</artifactId>
        <version>1.9.16</version>
    </dependency>


    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>1.3.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.1.0</version>
    </dependency>
</dependencies>
</project>


Answer:

You just have to add the aws-java-sdk-bundle Maven dependency and update the versions of the other dependencies. The updated POM is:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.amazonaws.lambda</groupId>
    <artifactId>dummy</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>assemble-all</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.0.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>



    <dependencies>


        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bom</artifactId>
            <version>1.11.475</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.11.475</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-lambda -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-lambda</artifactId>
            <version>1.11.475</version>
        </dependency>


        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
            <version>1.3.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.1.0</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bundle</artifactId>
            <version>1.11.475</version>
        </dependency>

    </dependencies>
</project>

Question:

I need to call the withCannedAcl method with the public-read enum value, so that when the book images are stored in the bucket they get public read visibility and all users can view them. How can I insert this call into my FileSaver class?

Here is the withCannedAcl call that I found:

.withCannedAcl(CannedAccessControlList.PublicRead));

This is the FileSaver.java where I have to insert that call (could it go inside s3.putObject()?). I tried, but it didn't work.

@RequestScoped
public class FileSaver {

private static final String CONTENT_DISPOSITION = "content-disposition";

private static final String FILENAME_KEY = "filename=";

@Inject
private AmazonS3Client s3;    

public String write(String baseFolder, Part multipartFile) {
    AmazonS3Client s3 = client();
    String fileName = extractFilename(multipartFile.getHeader(CONTENT_DISPOSITION));
    try {
        s3.putObject("superkovalev", fileName,
                multipartFile.getInputStream(),                    
                new ObjectMetadata());            
                return "https://s3.amazonaws.com/xxxxxxxxxxxxxx/"                            
                +fileName;
    } catch (AmazonClientException | IOException e) {
        throw new RuntimeException(e);
    }
}

 private AmazonS3Client client() {
    AWSCredentials credentials = new BasicAWSCredentials("xxxxxxxx",
            "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    AmazonS3Client newClient = new AmazonS3Client(credentials,
            new ClientConfiguration());
    newClient.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
    return newClient;
 }

  private String extractFilename(String contentDisposition) {

    if (contentDisposition == null) {
        return null;
    }
    int startIndex = contentDisposition.indexOf(FILENAME_KEY);
    if (startIndex == -1) {
        return null;
    }
    String filename = contentDisposition.substring(startIndex
            + FILENAME_KEY.length());
    if (filename.startsWith("\"")) {
        int endIndex = filename.indexOf("\"", 1);
        if (endIndex != -1) {
            return filename.substring(1, endIndex);
        }
    } else {
        int endIndex = filename.indexOf(";");
        if (endIndex != -1) {
            return filename.substring(0, endIndex);
        }
    }
    return filename;
}

public AmazonS3Client getS3() {
    return s3;
}

public void setS3(AmazonS3Client s3) {
    this.s3 = s3;
}

 }

Answer:

This is the answer to the question:

Policy bucket_policy = new Policy().withStatements(
new Statement(Statement.Effect.Allow)
    .withPrincipals(Principal.AllUsers)
    .withActions(S3Actions.GetObject)
    .withResources(new Resource(
        "arn:aws:s3:::" + bucket_name + "/*")));

https://docs.aws.amazon.com/pt_br/sdk-for-java/v1/developer-guide/examples-s3-bucket-policies.html
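For completeness, a minimal sketch of the withCannedAcl approach asked about in the question: only the putObject call in the write method changes, using the PutObjectRequest overload that accepts a stream and metadata (the bucket name and stream come from the question's code):

    s3.putObject(new PutObjectRequest("superkovalev", fileName,
            multipartFile.getInputStream(), new ObjectMetadata())
            .withCannedAcl(CannedAccessControlList.PublicRead));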

Question:

I am using Java to upload images to an S3 bucket. Via:

PutObjectResult result = s3client.putObject(new PutObjectRequest(
                            BUCKET_NAME, KEY, file));

The result, however, does not have a URL property from which I can access the image again. Only two properties are set, contentMd5 and eTag. How can I get the URL for the uploaded file, so I can download it again?


Answer:

When creating the file, you already know the key name and the bucket, which allows you to construct the S3 URL of the uploaded file yourself.

Support for a URL attribute on the result is not currently available in the S3 API for creating objects.
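A minimal sketch of both options, assuming the same bucket and key used in the putObject call; the getUrl helper is available on the AmazonS3Client class:

    // Option 1: construct the URL from the bucket and key yourself.
    String constructedUrl = "https://s3.amazonaws.com/" + BUCKET_NAME + "/" + KEY;

    // Option 2: let the client resolve the URL for the object it just wrote.
    URL resolvedUrl = s3client.getUrl(BUCKET_NAME, KEY);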

Question:

I am trying to upload an image I took from the camera to the s3 bucket.

CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
getApplicationContext(),
"us-west-2:xxxxxxxxxxxxx",
Regions.US_WEST_2
);

 AmazonS3Client s3Client = new AmazonS3Client(new AWSCredentials() {
        @Override
        public String getAWSAccessKeyId() {
            return "XXXXXXX";
        }

        @Override
        public String getAWSSecretKey() {
            return "XXXXXXX";
        }
    });


    TransferUtility transferUtility = new TransferUtility(s3Client,getApplicationContext());

transferUtility.upload(MY_BUCKET,file.getName(),file);

My pool is in us-west-2, but my bucket is in Asia Pacific (Sydney).

Running the above code crashes with the error below.

Java.lang.RuntimeException: Error receiving broadcast Intent { act=android.net.conn.CONNECTIVITY_CHANGE flg=0x4000010 (has extras) } in com.amazonaws.mobileconnectors.s3.transferutility.TransferService$NetworkInfoReceiver@744215
                                                                                 at android.app.LoadedApk$ReceiverDispatcher$Args.lambda$-android_app_LoadedApk$ReceiverDispatcher$Args_50043(LoadedApk.java:1282)
                                                                                 at android.app.-$Lambda$FilBqgnXJrN9Mgyks1XHeAxzSTk.$m$0(Unknown Source:4)
                                                                                 at android.app.-$Lambda$FilBqgnXJrN9Mgyks1XHeAxzSTk.run(Unknown Source:0)
                                                                                 at android.os.Handler.handleCallback(Handler.java:769)
                                                                                 at android.os.Handler.dispatchMessage(Handler.java:98)
                                                                                 at android.os.Looper.loop(Looper.java:164)
                                                                                 at android.app.ActivityThread.main(ActivityThread.java:6540)
                                                                                 at java.lang.reflect.Method.invoke(Native Method)
                                                                                 at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:240)
                                                                                 at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:767)
                                                                              Caused by: java.lang.SecurityException: ConnectivityService: Neither user 10084 nor current process has android.permission.ACCESS_NETWORK_STATE.
                                                                                 at android.os.Parcel.readException(Parcel.java:1948)

I have added permissions and services to the manifest file. How can I fix this and send an image to the S3 bucket in Asia Pacific (Sydney)?

I have these permissions in the manifest file:

   <uses-permission android:name="android.permission.INTERNET"></uses-permission>
<uses-permission android:name="ANDROID.PERMISSION.ACCESS_NETWORK_STATE" />


Answer:

As the error clearly states, you need the android.permission.ACCESS_NETWORK_STATE permission in your manifest. Android permission names are case-sensitive strings, so the all-uppercase ANDROID.PERMISSION.ACCESS_NETWORK_STATE entry in your manifest does not actually grant that permission.
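The corrected declaration should read exactly:

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />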

Question:

Please check the below code

import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class UploadObjectSingleOperation {
    private static String bucketName     = "*******";
    private static String keyName        = "************";
    private static String uploadFileName = "C:/Users/Yohan/Desktop/asdasd.html";

    public static void main(String[] args) throws IOException {
        BasicAWSCredentials creds = new BasicAWSCredentials(keyName, "**********"); 
        AmazonS3 s3client = AmazonS3ClientBuilder.standard().withCredentials(new AWSStaticCredentialsProvider(creds)).withRegion(Regions.AP_SOUTH_1).build();
//            AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
        try {
            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);

            s3client.putObject(new PutObjectRequest(
                                     bucketName, keyName, file));

         } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                    "means your request made it " +
                    "to Amazon S3, but was rejected with an error response" +
                    " for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                    "means the client encountered " +
                    "an internal error while trying to " +
                    "communicate with S3, " +
                    "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}

OK, so the above code is what I use to upload files to an Amazon S3 bucket. My S3 bucket is in the region nearest to my client, Asia Pacific (Mumbai).

The above code works fine, however I noticed the following.

  1. What is getting uploaded is always the key; the real file is not getting uploaded.

Why is this happening? When I upload file using the web interface of S3 it works totally fine.


Answer:

Found the error. The code in my question is from Amazon tutorials - http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html

However I am sure it is incorrect or deprecated.

Pay attention to the line below:

s3client.putObject(new PutObjectRequest(bucketName, keyName, file));

For this to work, instead of keyName, you have to pass the file name with its extension as the key, for example websitepage.html.

Question:

I want to upload multiple files to an Amazon S3 bucket using a Java file chooser. For that purpose I am using the following code. This code can upload a file to S3, but the next time I upload another file the previous one is replaced. I know it is caused by the String key = "squre.jpg"; in the code. My question is how to upload multiple files without replacing the previous ones. Thanks in advance.

imageUpload.setOnMouseClicked(new EventHandler<MouseEvent>() {
        @Override
        public void handle(MouseEvent event) {
            FileChooser fileChooser=new FileChooser();
            fileChooser.setInitialDirectory(new File("c:\\"));
            fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("JPG Images","*.jpg"),
                    new FileChooser.ExtensionFilter("JPEG Images","*.jpeg"),
                    new FileChooser.ExtensionFilter("PNG Images","*.png"));
            File file=fileChooser.showOpenDialog(null);

            if (file!=null){
                try {

AWSCredentials Credentials = new BasicAWSCredentials(
            "AWSAccessKeyId", 
            "AWSSecretKey");

    AmazonS3Client amazonS3Client = new AmazonS3Client(Credentials);
    String bucketName = "awsimagetrading";
    String key = "squre.jpg";
                    System.out.println("Uploading a new object to S3 from a file\n");
                    AmazonS3 s3client = new AmazonS3Client(Credentials);
                    s3client.putObject(new PutObjectRequest(bucketName,key,file));
                    URL url = amazonS3Client.generatePresignedUrl(bucketName,key,Date.from(Instant.now().plus(5,ChronoUnit.MINUTES)));
                    System.out.println(url);
                    //label.setText(url.toString());

                } catch (AmazonClientException e) {
                    e.printStackTrace();
                }
            }
        }
    });

Answer:

From your code it looks like you need to use the FileChooser's showOpenMultipleDialog and then set the key to the file's name (file.getName()). Here is the modified code:

imageUpload.setOnMouseClicked(new EventHandler<MouseEvent>() {
        @Override
        public void handle(MouseEvent event) {
            FileChooser  fileChooser=new FileChooser();
            fileChooser.setInitialDirectory(new File("c:\\"));
            fileChooser.getExtensionFilters().addAll(new FileChooser.ExtensionFilter("JPG Images","*.jpg"),
                    new FileChooser.ExtensionFilter("JPEG Images","*.jpeg"),
                    new FileChooser.ExtensionFilter("PNG Images","*.png"));

            List<File> selectedFiles = fileChooser.showOpenMultipleDialog(null);
            if (selectedFiles != null) {
                for (File file : selectedFiles) {
                    try {

                        AWSCredentials Credentials = new BasicAWSCredentials(
                                "AWSAccessKeyId",
                                "AWSSecretKey");

                        AmazonS3Client amazonS3Client = new AmazonS3Client(Credentials);
                        String bucketName = "awsimagetrading";
                        String key = file.getName();
                        System.out.println("Uploading a new object to S3 from a file\n");
                        AmazonS3 s3client = new AmazonS3Client(Credentials);
                        s3client.putObject(new PutObjectRequest(bucketName,key,file));
                        URL url = amazonS3Client.generatePresignedUrl(bucketName,key,Date.from(Instant.now().plus(5,ChronoUnit.MINUTES)));
                        System.out.println(url);
                        //label.setText(url.toString());

                    } catch (AmazonClientException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    });

Question:

I've got a frontend client built using EmberJS, specifically ember-uploader, to handle uploading files directly to S3. Where I'm stuck is that I can't seem to correctly sign the request using my backend server (a Java Dropwizard microservice) before it goes off to Amazon.

I know I can create a GeneratePresignedUrlRequest but the frontend library I'm using specifically wants a json object back from the server, so I'm attempting to split that GeneratePresignedUrlRequest into an object.

At the moment all of that seems fine, but I'm missing the policy, as I can't work out how to create it correctly.

private SignRequestObject createSignRequestObject(List<NameValuePair> valuePairs) {
    SignRequestObject request = new SignRequestObject();

    request.setKey("test.txt");
    request.setBucket("test-bucket");
    request.setPolicy("?");

    for (NameValuePair pairs : valuePairs) {
        if (pairs.getName().equals("X-Amz-Credential")) {
            request.setCredentials(pairs.getValue());
        }

        if (pairs.getName().equals("X-Amz-Signature")) {
            request.setSignature(pairs.getValue());
        }

        if (pairs.getName().equals("X-Amz-Algorithm")) {
            request.setAlgorithm(pairs.getValue());
        }

        if (pairs.getName().equals("X-Amz-Date")) {
            request.setDate(pairs.getValue());
        }
    }

    return request;
}

The valuePairs are coming from the GeneratePresignedUrlRequest

private String createSignedUrl() {
    GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest("test-bucket", "test.txt");
    generatePresignedUrlRequest.setMethod(HttpMethod.PUT);

    return amazonS3.generatePresignedUrl(generatePresignedUrlRequest).toString();
}

According to the ember-uploader wiki, I want the policy object to look something like this:

// Ruby example, but shouldn't matter
        {
          expiration: @expires,
          conditions: [
            { bucket: 'sandbox' },
            { acl: 'public-read' },
            { expires: @expires },
            { success_action_status: '201' },
            [ 'starts-with', '$key', '' ],
            [ 'starts-with', '$Content-Type', '' ],
            [ 'starts-with', '$Cache-Control', '' ],
            [ 'content-length-range', 0, 524288000 ]
          ]
        }

Should I be trying to build this myself or does the aws-sdk have methods for this? I keep seeing AWS Signature Version 4 around but can't find out how to use it either.

When trying to upload via the browser I'm getting a 403 back from Amazon.


Answer:

I solved this and wrote a small Guice module for it. It is called from the repository class on a GET request to the backend.

// Resource

public class SignResource {

    private final SignRepository repository;

    @Inject
    public SignResource(SignRepository repository) {
        this.repository = repository;
    }

    @GET
    public Response signPOST(@QueryParam("type") String type) {
        String signRequest = repository.signRequest(type);
        return Response.status(Response.Status.OK).entity(signRequest).build();
    }
}

// Repository

public class SignRepository {

    @Inject
    private SignService signService;

    public SignRepository() {
    }

    public String signRequest(String contentType) {
        return signService.signRequest(contentType);
    }
}

// Implementation

public class SignServiceImpl implements SignService {

    private String awsBucket;
    private String awsAccessKey;
    private String awsSecretKey;

    SignServiceImpl(AmazonConfiguration amazon) {
        awsSecretKey = amazon.getSecret();
        awsAccessKey = amazon.getAccess();
        awsBucket = amazon.getBucket();
    }

    @Override
    public String signRequest(String contentType) {
        final String randomFileName = createRandomName();

        String policy = createPolicy(randomFileName, contentType);

        SignRequest signRequest = new SignRequest();
        signRequest.setAwsAccessKeyId(awsAccessKey);
        signRequest.setPolicy(policy);
        signRequest.setSignature(ServiceUtils.signWithHmacSha1(awsSecretKey, policy));
        signRequest.setBucket(awsBucket);
        signRequest.setKey(randomFileName);
        signRequest.setAcl("public-read");
        signRequest.setContentType(contentType);
        signRequest.setExpires(createExpireTime().toString());
        signRequest.setSuccessActionStatus("201");

        return createJsonString(signRequest);
    }

    private String createPolicy(String randomFileName, String contentType) {
        try {
            String[] conditions = {
                S3Service.generatePostPolicyCondition_Equality("bucket", awsBucket),
                S3Service.generatePostPolicyCondition_Equality("key", randomFileName),
                S3Service.generatePostPolicyCondition_Equality("acl", "public-read"),
                S3Service.generatePostPolicyCondition_Equality("expires", createExpireTime().toString()),
                S3Service.generatePostPolicyCondition_Equality("content-Type", contentType),
                S3Service.generatePostPolicyCondition_Equality("success_action_status", "201"),
                S3Service.generatePostPolicyCondition_AllowAnyValue("cache-control")
            };

            String policyDocument = "{\"expiration\": \"" + ServiceUtils.formatIso8601Date(createExpireTime()) + "\", \"conditions\": [" + ServiceUtils.join(conditions, ",") + "]}";

            return ServiceUtils.toBase64(policyDocument.getBytes(Constants.DEFAULT_ENCODING));
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }

        return null;
    }

    private String createRandomName() {
        return UUID.randomUUID().toString();
    }

    private Date createExpireTime() {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.HOUR, 24);

        return cal.getTime();
    }

    private String createJsonString(SignRequest request) {
        ObjectMapper mapper = new ObjectMapper();
        String json = null;

        try {
            json = mapper.writeValueAsString(request);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }

        return json;
    }
}

// The service

public interface SignService {
    String signRequest(String contentType);
}

// Module

public class SignServiceModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(SignService.class).toProvider(SignServiceProvider.class).asEagerSingleton();
    }
}

// Provider

public class SignServiceProvider implements Provider<SignService> {

    @Inject
    private SwordfishConfiguration configuration;

    @Override
    public SignService get() {
        return new SignServiceImpl(configuration.getAmazon());
    }
}

Question:

I'm having an issue with uploading a .zip file to a remote server in that some bytes are missing from the file after the upload. Upon redownloading the file, the .zip archive is unopenable, which leads me to believe that I need those bytes to perform the upload successfully.

I am using a multipart/form-data POST request to upload the file. The utility helper class I use to do this is given in the below code:

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

public class MultipartFormDataUtil {
    private final String boundary;
    private static final String lineReturn = "\r\n";
    private HttpURLConnection conn;
    private DataOutputStream dos;
    int bytesRead, bytesAvail, bufferSize;
    byte[] buffer;
    int maxBufferSize = 1*1024*1024;
    List<String> response;

    public MultipartFormDataUtil(String postUrl, LinkedHashMap<String, String> params, File file) throws IOException {
        boundary = "=-=" + System.currentTimeMillis() + "=-=";

        URL url = new URL(postUrl);
        conn = (HttpURLConnection) url.openConnection();
        conn.setDoInput(true);
        conn.setDoOutput(true);
        conn.setUseCaches(false);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Connection", "Keep-Alive");
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
        dos = new DataOutputStream(conn.getOutputStream());

        for (String key : params.keySet()) {
            addFormPart(key, params.get(key));
        }

        addFilePart(file);

        finish();
    }

    private void addFormPart(String name, String value) throws IOException {
        dos.writeBytes("--" + boundary + lineReturn);
        dos.writeBytes("Content-Disposition: form-data; name=\"" + name + "\"" + lineReturn);
        dos.writeBytes("Content-Type: text/plain" + lineReturn + lineReturn);
        dos.writeBytes(value + lineReturn);
        dos.flush();
    }

    private void addFilePart(File file) throws IOException {
        FileInputStream fileInputStream = new FileInputStream(file);

        dos.writeBytes("--" + boundary + lineReturn);
        dos.writeBytes("Content-Disposition: form-data; name=\"file\"; filename=\"" + file.getName() + "\"" + lineReturn);
        dos.writeBytes("Content-Type: " + URLConnection.guessContentTypeFromName(file.getName()) + lineReturn);
        dos.writeBytes("Content-Transfer-Encoding: binary" + lineReturn + lineReturn);

        bytesAvail = fileInputStream.available();
        bufferSize = Math.min(bytesAvail, maxBufferSize);
        buffer = new byte[bufferSize];

        bytesRead = fileInputStream.read(buffer, 0, bufferSize);

        while (bytesRead > 0) {
            dos.write(buffer, 0, bufferSize);
            bytesAvail = fileInputStream.available();
            bufferSize = Math.min(bytesAvail, maxBufferSize);
            bytesRead = fileInputStream.read(buffer, 0, bufferSize);
        }
        dos.flush();

        dos.writeBytes(lineReturn);
        dos.flush();
        fileInputStream.close();
    }

    private void finish() throws IOException {
        response = new ArrayList<String>();

        dos.writeBytes("--" + boundary + "--" + lineReturn);
        dos.flush();
        dos.close();

        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;

        while ((line = reader.readLine()) != null) {
            response.add(line);
        }

        reader.close();
        conn.disconnect();
    }

    public List<String> getResponse() {
        return response;
    }

To give credit where credit is due, this utility is based on examples from Peter's Notes and CodeJava.net. The util is called with the following code:

protected static void postFile(String url, LinkedHashMap<String, String> params, File file) throws Exception {
    try {
        MultipartFormDataUtil multipartRequest = new MultipartFormDataUtil(url, params, file);
        List<String> response = multipartRequest.getResponse();

        for (String line : response) {
            System.out.println(line);
        }

    } catch (IOException ioe) {
        log.warn("There was an error posting the file and form data", ioe);
    }
}

The upload URL in this case points to an Amazon S3 bucket, which passes the file on to the destination system. It is at this final destination that I can see that the process that is supposed to run on the .zip file has failed (note: the process is run by a Rails app and gives the error "Error identifying package type: can't dup NilClass"). Upon downloading the file, I see that the file size is 3,110,416 bytes instead of 3,110,466 bytes. I can no longer extract the archive to see what is in it; the Mac Archive Utility responds with "Error 2 - No such file or directory".

I lack the conceptual background in this area to get a feel for where in the process things may be going wrong. I am hoping that someone will be able to tell me that I made an error in the utility class, or let me know that something else is the problem.

Thank you for any insight you can provide, and let me know if I can post anything else that would be of help.

EDIT: Some additional information I gathered about different sizes of file uploads (in bytes):

Original          Uploaded          Difference
10,167,389        10,167,238        151
3,110,466         3,110,416         50
156,885           156,885           0
95,639,352        95,637,925        1,427

For the three files that had bytes missing after the upload, the share of data lost was around 0.0015% in each case, though not exactly equal across files.


Answer:

Upon some further research, we found that the error did not have anything to do with the multipart/form-data utility shown in this question. Instead, it had to do with our own file-downloading client (covered in a separate question): we were not setting the FileTransfer client to download the file as binary, which is necessary for a .zip file.

Feel free to use the code included in the original question for your multipart/form-data purposes in Java - it works great, assuming there are no problems with your original file.

Question:

I am trying to upload files from Android Studio to an AWS S3 bucket. I have created a new AWS account. This seems to be a validation/authorization issue. Can someone please help in figuring out the root cause and how it can be solved?

Please let me know if any more detail is required.

Thanks,

Bucket Policy:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject"

            ],
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}

Warning in Debug Log:

D/CognitoCachingCredentialsProvider﹕ Loading credentials from SharedPreferences
D/CognitoCachingCredentialsProvider﹕ No valid credentials found in SharedPreferences
I/AmazonHttpClient﹕ Unable to execute HTTP request: Read timed out
    java.net.SocketTimeoutException: Read timed out
            at com.android.org.conscrypt.NativeCrypto.SSL_read(Native Method)
            at com.android.org.conscrypt.OpenSSLSocketImpl$SSLInputStream.read(OpenSSLSocketImpl.java:674)
            at com.android.okio.Okio$2.read(Okio.java:113)
            at com.android.okio.RealBufferedSource.indexOf(RealBufferedSource.java:147)
            at com.android.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:94)
            at com.android.okhttp.internal.http.HttpConnection.readResponse(HttpConnection.java:175)
            at com.android.okhttp.internal.http.HttpTransport.readResponseHeaders(HttpTransport.java:101)
            at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:616)
            at com.android.okhttp.internal.http.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:379)
            at com.android.okhttp.internal.http.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:323)
            at com.android.okhttp.internal.http.HttpURLConnectionImpl.getResponseMessage(HttpURLConnectionImpl.java:487)
            at com.android.okhttp.internal.http.DelegatingHttpsURLConnection.getResponseMessage(DelegatingHttpsURLConnection.java:109)
            at com.android.okhttp.internal.http.HttpsURLConnectionImpl.getResponseMessage(HttpsURLConnectionImpl.java:25)
            at com.amazonaws.http.UrlHttpClient.execute(UrlHttpClient.java:62)
            at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353)
            at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
            at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4234)
            at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1644)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:134)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.call(UploadCallable.java:126)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.upload(UploadMonitor.java:182)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:140)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:54)
            at java.util.concurrent.FutureTask.run(FutureTask.java:237)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
            at java.lang.Thread.run(Thread.java:818)

Code:

// Variables Values:
private static final String AWS_ACCOUNT_ID = "078xxxxxxx91";
    private static final String COGNITO_POOL_ID = "eu-west-1:9xxxxx16-4xx2-4xxa-axx1-44cxxxxxxxf5";
    private static final String COGNITO_ROLE_UNAUTH = "arn:aws:iam::078xxxxxxx91:role/Cognito_ABCUnauth_Role";
    private static final String BUCKET_NAME = "mybucket";

   private void uploadImagesToServer() {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    AWSCredentialsProvider credProvider = null;
                    credProvider = getCredProvider(credProvider, getApplicationContext());
                    TransferManager transferManager = new TransferManager(credProvider);
                    for(int i=0; i<imagesPath.size(); i++) {
                        File file = new File(imagesPath.get(i));
                        String fileName = file.getName();
                        Upload upload = transferManager.upload(BUCKET_NAME, fileName, file);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        thread.start();
    }

public static AWSCredentialsProvider getCredProvider(AWSCredentialsProvider sCredProvider,
                                                     Context appContext) {
    if(sCredProvider == null) {
        sCredProvider = new CognitoCachingCredentialsProvider(
                appContext,
                AWS_ACCOUNT_ID, COGNITO_POOL_ID, COGNITO_ROLE_UNAUTH,
                null, Regions.EU_WEST_1);
        sCredProvider.refresh();
    }
    return sCredProvider;
}

Answer:

The log shows "Request ARN is invalid". It's because COGNITO_ROLE_UNAUTH is an empty string. Please get the role ARN from IAM, or copy the sample code from the console.

Then you see a "Not authorized to perform sts:AssumeRoleWithWebIdentity" exception. This happens when the credentials provider makes a request to STS to assume the role you specified for session credentials, but your role isn't set up to trust Cognito.

Judging by its name, the role was created by you rather than generated by Cognito in the console. I believe you forgot to set up the trust relationship. Go to the IAM console, edit the role, scroll all the way down, and click Edit Trust Relationships. Make sure you have something like the following (replace the pool id with your Cognito identity pool id).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-east-1:<pool_id>"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "unauthenticated"
        }
      }
    }
  ]
}

Question:

I am using Apache Camel to upload files to Amazon S3. Does Camel provide any way to upload folders?

My requirement is as follows: a specific folder should be polled on a daily basis. Consider the folder structure /photos/user, which contains sub-folders such as day1, day2, and so on.

from (file:\photos\user) to (aws-s3:bucket-name?access Key=<>&secret Key=<>)

So the above route should pull the files from the day1 folder first, and from the following days' folders as the days progress.


Answer:

You can use the recursive option like

file:\photos\user?recursive=true

to consume files from the folders recursively.
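
As a sketch only (the directory, bucket name, credentials, and polling delay below are placeholders, not values from the question), a Java DSL route that walks the folder tree and pushes each file to S3 could look like this:

import org.apache.camel.builder.RouteBuilder;

// Hypothetical route: the directory, bucket name, credentials and delay are placeholders.
public class PhotosToS3Route extends RouteBuilder {
    @Override
    public void configure() {
        // Poll the folder tree once a day; noop=true leaves the source files in place.
        from("file:photos/user?recursive=true&noop=true&delay=86400000")
            // Use the file name (relative to the starting directory) as the S3 object key.
            .setHeader("CamelAwsS3Key", simple("${file:name}"))
            .to("aws-s3://my-bucket?accessKey=XXX&secretKey=RAW(YYY)");
    }
}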

Question:

I am new to the AWS world and have one requirement; please help me with it. I wrote a Lambda function that reads file content from an S3 bucket and stores the details in my PostgreSQL RDS instance in AWS. Fortunately it works fine! My Lambda function is named 'MyFunction'. Now I want the following: whenever a new file lands in my S3 bucket 's3-testing', the Lambda function 'MyFunction' should run automatically.

Is there any way to do this? I am using Eclipse to create my Lambda function.

Please help me with this.


Answer:

This is a very common requirement: Invoke a Lambda function for new files in an S3 bucket.

Amazon S3 supports events. When a new object arrives in S3 (other actions are also supported), an event is created. This event can be sent to an SNS topic, an SQS queue, or a Lambda function.

Your Lambda function will receive an event data structure. This structure details information about the S3 event.

This tutorial will walk you through setting up S3 to invoke your Lambda function:

Tutorial: Using AWS Lambda with Amazon S3
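
As a minimal sketch (the handler class name reuses 'MyFunction' from the question; the logging and the comments are only placeholders for your own logic), a Java handler that receives the S3 event and reads each new object's bucket and key could look like:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

// Hypothetical handler; register it as the function's handler when you upload the code.
public class MyFunction implements RequestHandler<S3Event, String> {

    @Override
    public String handleRequest(S3Event event, Context context) {
        // Each record describes one object that triggered the event.
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            context.getLogger().log("New object: s3://" + bucket + "/" + key);
            // Read the object content here and write the details to the RDS instance,
            // exactly as the existing function already does.
        });
        return "done";
    }
}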

Question:

I have a CSV file in an AWS S3 bucket. When the CSV file arrives in the bucket, a Lambda function gets triggered. What I want to do is remove some special characters from the CSV file and then store the result in another S3 bucket.

In my Lambda function I can receive the file from the S3 bucket and read it from the S3 object content.

S3Object s3Object = this.s3Client.getObject(new GetObjectRequest(srcBucket, srcKey));
InputStream objectData = s3Object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(objectData));
String line = null;
while ((line = reader.readLine()) != null) {
    if (line.contains("\"")) {
        String newLine = line.replace("\"", "");
    }
}

After removing the characters, how can I write the file and store it in another S3 bucket?


Answer:

The easiest way to manipulate Amazon S3 files in AWS Lambda functions is:

  • Copy the file from S3 to local /tmp
  • Manipulate/edit the file
  • Upload the file to S3

The relevant SDK calls are:

  • getObject()
  • putObject()

For example, after writing the edited file to a local path such as /tmp/cleaned.csv:

s3client.putObject(new PutObjectRequest(bucketName, key,
        new File("/tmp/cleaned.csv")));

Question:

When I try to upload a file using the AWS SDK for Java on my Mac, I never get an "upload complete" message. No errors are thrown, and the program never terminates. This seems to happen only on a Mac: others in my company ran the exact same code on Windows and Ubuntu Linux and did not encounter this issue. Specs below.

OS: Mac OS X El Capitan 10.11.4

Java(TM) SE Runtime Environment (build 1.8.0_111-b14) Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Firewall is currently off.

Note: I am able to manually upload to S3 without any issues.


Answer:

I found the solution: update the Java version on the Mac to Java 8 update 160 (1.8.0_160) or above. Any older version will not work due to a policy issue.

Question:

Is there any API documentation for Java connectivity with DigitalOcean? Or is there any alternative approach? If it is not possible in Java, can I use PHP as an interface to handle this?


Answer:

"https://icdu91.wordpress.com/2017/11/29/digitalocean-spaces-java-example/"

refer the above link. It supports the java to digital ocean api connectivity.
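
Since DigitalOcean Spaces exposes an S3-compatible API, one common approach is to reuse the AWS SDK for Java and point it at a Spaces endpoint. A minimal sketch, assuming a space in the nyc3 region and placeholder credentials and names:

import java.io.File;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Sketch: the endpoint region ("nyc3"), space name, key and secret are placeholders.
public class SpacesUploadExample {
    public static void main(String[] args) {
        BasicAWSCredentials creds = new BasicAWSCredentials("SPACES_KEY", "SPACES_SECRET");

        AmazonS3 spaces = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://nyc3.digitaloceanspaces.com", "nyc3"))
                .withCredentials(new AWSStaticCredentialsProvider(creds))
                .build();

        // Upload a local file to the space, just like an S3 putObject call.
        spaces.putObject("my-space", "example.txt", new File("example.txt"));
    }
}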