Hot questions on using Azure

Question:

I am creating an application where I need to view blobs in the browser rather than download them. Currently, a blob link with a token downloads the corresponding blob.

I found a reference here for viewing blobs in the browser: https://github.com/Azure-Samples/storage-blob-java-getting-started/blob/master/src/BlobBasics.java (see from line number 141)

Here is my code to create the token:

@Test
public String testBlobSaS(CloudBlob blob, CloudBlobContainer container) throws InvalidKeyException,
        IllegalArgumentException, StorageException, URISyntaxException, InterruptedException {
    SharedAccessBlobPolicy sp = createSharedAccessBlobPolicy(
            EnumSet.of(SharedAccessBlobPermissions.READ, SharedAccessBlobPermissions.LIST), 100);
    BlobContainerPermissions perms = new BlobContainerPermissions();
    perms.getSharedAccessPolicies().put("readperm", sp);
    perms.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
    container.uploadPermissions(perms);
    String sas = blob.generateSharedAccessSignature(sp, null);
    CloudBlockBlob sasBlob = new CloudBlockBlob(
            new URI(blob.getUri().toString() + "?" + blob.generateSharedAccessSignature(null, "readperm")));
    sasBlob.download(new ByteArrayOutputStream());
    CloudBlob blobFromUri = new CloudBlockBlob(
            PathUtility.addToQuery(blob.getStorageUri(), blob.generateSharedAccessSignature(null, "readperm")));
    assertEquals(StorageCredentialsSharedAccessSignature.class.toString(),
            blobFromUri.getServiceClient().getCredentials().getClass().toString());
    StorageCredentials creds = new StorageCredentialsSharedAccessSignature(
            blob.generateSharedAccessSignature(null, "readperm"));
    CloudBlobClient bClient = new CloudBlobClient(sasBlob.getServiceClient().getStorageUri(), creds);
    CloudBlockBlob blobFromClient = bClient.getContainerReference(blob.getContainer().getName())
            .getBlockBlobReference(blob.getName());
    assertEquals(StorageCredentialsSharedAccessSignature.class.toString(),
            blobFromClient.getServiceClient().getCredentials().getClass().toString());
    assertEquals(bClient, blobFromClient.getServiceClient());
    return sas;
}

Following the reference URL provided earlier, I added this line to my code:

perms.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);

I have code which gives me a URL for the blob with a token, like:

https://accountName.blob.core.windows.net/directories/blobName?token

Still, this URL downloads the blob. What changes should I make while creating the token so that blobs are displayed in the browser instead of being downloaded?


Answer:

The first thing you should check is the content type property of the blob. In all likelihood, it is application/octet-stream (the default content type). Because of this, the browser does not know what to do with the blob and downloads it instead. Change the blob's content type to an appropriate value (e.g. image/png) and that should fix the problem.
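As a sketch of that fix (the helper below is my own, not from the question; with the legacy azure-storage Java SDK the guessed value would then be applied via blob.getProperties().setContentType(...) followed by blob.uploadProperties()):

```java
import java.net.URLConnection;

public class ContentTypeGuess {
    // Guess a browser-friendly content type from the blob name; fall back to
    // the generic default when the extension is unknown to the JDK's map.
    static String guessContentType(String blobName) {
        String guessed = URLConnection.guessContentTypeFromName(blobName);
        return guessed != null ? guessed : "application/octet-stream";
    }

    public static void main(String[] args) {
        System.out.println(guessContentType("photo.png"));     // image/png
        System.out.println(guessContentType("report.xyzzy"));  // application/octet-stream
    }
}
```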

Also, I noticed that you're setting the container's ACL to BlobContainerPublicAccessType.CONTAINER. If you do this, there's no need to create a Shared Access Signature (SAS) at all: your blobs will be accessible simply by their URL (https://accountname.blob.core.windows.net/containername/blobname). SAS only comes into play when the container's ACL is Private.
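To illustrate that URL pattern (the account, container, and blob names below are placeholders of my own):

```java
public class BlobUrl {
    // Build the public URL of a blob in a container whose ACL allows public read.
    static String publicBlobUrl(String account, String container, String blob) {
        return String.format("https://%s.blob.core.windows.net/%s/%s", account, container, blob);
    }

    public static void main(String[] args) {
        System.out.println(publicBlobUrl("accountname", "containername", "blobname"));
        // https://accountname.blob.core.windows.net/containername/blobname
    }
}
```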

Question:

I am creating an application to copy a file from one directory to another.

The JSON input is:

{
    "accountName": "name",
    "accountKey": "key",
    "source": "directory1/directory2/directory3/directory4",
    "destination": "directory1/directory2",
    "fileToCopy": "1"
}

Directory 4 is under directory 3, directory 3 is under directory 2, and directory 2 is under directory 1.

I want to copy the file named "1" from directory 4 to directory 2.

My Java code is:

@Override
public JSONObject copyFile(JSONObject jsonInput) throws IOException, InvalidKeyException, URISyntaxException {
    CloudFileClient fileClient = null;
    String storageConnectionString = "DefaultEndpointsProtocol=https;AccountName=" + jsonInput.get("accountName")
            + ";AccountKey=" + jsonInput.get("accountKey");
    System.out.println(storageConnectionString);
    CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
    JSONObject jsonOutput = new JSONObject();
    try {
        fileClient = storageAccount.createCloudFileClient();

        String source = jsonInput.get("source").toString();
        String destination = jsonInput.get("destination").toString();
        String fileToCopy = jsonInput.get("fileToCopy").toString();
        String[] sourceNameArray = source.split("\\s*/\\s*");
        System.out.println(sourceNameArray.length);
        String[] destinationNameArray = destination.split("\\s*/\\s*");
        System.out.println(destinationNameArray.length);
        CloudFileShare share = fileClient
                .getShareReference(sourceNameArray[0].toLowerCase().replaceAll("[-+.^:,!@#$%&*()_~`]", ""));
        CloudFileDirectory rootDir = share.getRootDirectoryReference();
        for (int i = 0; i < sourceNameArray.length; i++) {
            String directoryToCreate = sourceNameArray[i];
            CloudFileDirectory directory = rootDir.getDirectoryReference(directoryToCreate);
            if (i == sourceNameArray.length - 1) {
                CloudFile fileToCopyFromSource = directory.getFileReference(fileToCopy);

                for (int j = 0; j < destinationNameArray.length; j++) {
                    String directoryToCreateForDestination = destinationNameArray[j];
                    CloudFileDirectory directoryForDestination = rootDir.getDirectoryReference(directoryToCreateForDestination);
                    if (j == destinationNameArray.length - 1) {
                        CloudFile fileDestination = directoryForDestination.getFileReference(fileToCopy);
                        // is the next line required?
                        // fileToCopyFromSource.create(1);
                        fileDestination.startCopy(fileToCopyFromSource);
                        System.out.println("copied to destination");
                        jsonOutput.put("status", "successful");
                    }
                    rootDir = directoryForDestination;
                }
            }
            rootDir = directory;
        }

    } catch (Exception e) {
        System.out.println("Exception is " + e);
        jsonOutput.put("status", "unsuccessful");
        jsonOutput.put("exception", e.toString());
    }
    return jsonOutput;
}

I am getting this error:

Exception is com.microsoft.azure.storage.StorageException: The specified parent path does not exist.

But the parent path does exist in my Azure storage account.

I need suggestions on the code, and reference code if possible.


Answer:

According to the exception, the issue is caused by the parent directories of the file not existing on Azure File Storage.

So, when you get a directory reference, you first need to check for and create these parent directories one by one, from the root down, as in the code below for the destination path.

String destination = "directory1/directory2";
CloudFileDirectory rootDir = share.getRootDirectoryReference();
String[] destinationNameArray = destination.split("/");
CloudFileDirectory kidDir = rootDir;
for(String name: destinationNameArray) {
    kidDir = kidDir.getDirectoryReference(name);
    kidDir.createIfNotExists();
}

Then you can directly copy a file from source to target as below.

String source = "directory1/directory2/directory3/directory4";
String destination = "directory1/directory2";
String fileName = "1";
CloudFileDirectory sourceDir = rootDir.getDirectoryReference(source);
CloudFileDirectory destinationDir = rootDir.getDirectoryReference(destination);
CloudFile sourceFile = sourceDir.getFileReference(fileName);
CloudFile destinationFile = destinationDir.getFileReference(fileName);
destinationFile.startCopy(sourceFile);

Question:

I want to call the REST API for Azure File storage through Postman. Here is how I am making my request:

I am making a request to list all shares in the file storage account, as described here: https://docs.microsoft.com/en-us/rest/api/storageservices/list-shares

I am getting the error below:

"The Date header in the request is incorrect."

What changes should I make?

Edit 1:

When I provided the date in the correct format, I got this error:

"The MAC signature found in the HTTP request '' is not the same as any computed signature. Server used following string to sign: 'GET"

How do I resolve this?


Answer:

With your updated screenshot, it seems your problem is no longer related to x-ms-date. The reason for this 403 error is the Authorization header, which has the format

Authorization="[SharedKey|SharedKeyLite] [AccountName]:[Signature]"

You shouldn't use the Access Key from the Azure Portal directly as the Signature part of Authorization. Instead, the signature should be constructed by applying the HMAC-SHA256 algorithm to the UTF-8-encoded string-to-sign, in the format

Signature=Base64(HMAC-SHA256(UTF8(StringToSign)))

as mentioned in the official documentation.

The sample Java code below shows how to construct the Signature part of Authorization:

String stringToSign = "GET\n"
        + "\n" // content encoding
        + "\n" // content language
        + "\n" // content length
        + "\n" // content md5
        + "\n" // content type
        + "\n" // date
        + "\n" // if modified since
        + "\n" // if match
        + "\n" // if none match
        + "\n" // if unmodified since
        + "\n" // range
        + "x-ms-date:" + date + "\nx-ms-version:2015-02-21\n" // headers
        + "/" + <your account name> + "/" + "\ncomp:list"; // resources
String auth = getAuthenticationString(stringToSign);

private static String getAuthenticationString(String stringToSign) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    // The account key from the portal is Base64-encoded; decode it before use.
    mac.init(new SecretKeySpec(Base64.getDecoder().decode(key), "HmacSHA256"));
    // Sign the UTF-8 string-to-sign and Base64-encode the digest.
    String authKey = Base64.getEncoder().encodeToString(mac.doFinal(stringToSign.getBytes("UTF-8")));
    return "SharedKey " + account + ":" + authKey;
}

The auth string generated by the code is the Signature mentioned above; fill it into the Authorization header and re-send the request in Postman.


Important notice:

In the above code, you shouldn't miss "\ncomp:list" on the // resources line, otherwise it will also return a 403 error. You can find the rules under Constructing the Canonicalized Resource String.
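For a self-contained sketch of the signing step alone, using only the JDK (the key and string-to-sign below are dummies of my own, not real credentials):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class SharedKeySigner {
    // Compute Base64(HMAC-SHA256(UTF8(stringToSign))) with a Base64-encoded key,
    // the shape required for the Signature part of the Authorization header.
    static String sign(String stringToSign, String base64Key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(Base64.getDecoder().decode(base64Key), "HmacSHA256"));
        return Base64.getEncoder().encodeToString(mac.doFinal(stringToSign.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        // A dummy key, standing in for the real account key from the portal.
        String dummyKey = Base64.getEncoder().encodeToString("not-a-real-key".getBytes("UTF-8"));
        // HMAC-SHA256 output is always a 32-byte digest, and signing is deterministic.
        System.out.println(sign("GET\ncomp:list", dummyKey));
    }
}
```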

Question:

I am creating an application where I need to call the REST API for creating a data source, as described here: https://docs.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blob-storage

Here is how I am making my request:

I am getting the following error:

{
    "error": {
        "code": "",
        "message": "The request is invalid. Details: index : The property 'type' does not exist on type 'Microsoft.Azure.Search.V2016_09_01.IndexDefinition'. Make sure to only use property names that are defined by the type.\r\n"
    }
}

What should I do so that 'type' is set correctly?


Answer:

You're posting to the wrong URL. Try the following:

https://[service name].search.windows.net/datasources?api-version=2016-09-01

Question:

I am creating directories in an Azure storage account via Java services.

The JSON input is:

{
    "accountName": "name",
    "accountkey": "keyOfAzureAccount",
    "directoryStructure": "directory1/directory2/directory3/directory4/directory5"
}

What I expect is for these directories to be created one under another in the Azure account: directory5 inside directory4, directory4 inside directory3, directory3 inside directory2, and directory2 inside directory1.

My Java code is:

@Override
public JSONObject createDynamicDirectory(JSONObject jsonInput) throws InvalidKeyException, URISyntaxException {
    CloudFileClient fileClient = null;
    String storageConnectionString = "DefaultEndpointsProtocol=https;AccountName=" + jsonInput.get("accountName")
            + ";AccountKey=" + jsonInput.get("accountkey");
    System.out.println(storageConnectionString);
    CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
    JSONObject jsonOutput = new JSONObject();
    try {
        fileClient = storageAccount.createCloudFileClient();
        String directoryName = jsonInput.get("directoryStructure").toString();

        String[] directoryNameArray = directoryName.split("\\s*/\\s*");
        System.out.println(directoryNameArray.length);

        CloudFileShare share = fileClient
                .getShareReference(directoryNameArray[0].toLowerCase().replaceAll("[-+.^:,!@#$%&*()_~`]", ""));
        if (share.createIfNotExists()) {
            System.out.println("New share created named as "
                    + directoryName.toLowerCase().replaceAll("[-+.^:,!@#$%&*()_~`]", ""));
        }
        for (int i = 0; i < directoryNameArray.length; i++) {
            CloudFileDirectory rootDir = share.getRootDirectoryReference();
            CloudFileDirectory parentDirectory = rootDir.getDirectoryReference(directoryNameArray[i]);
            if (parentDirectory.createIfNotExists()) {
                System.out.println("new directory created named as " + directoryName);
                jsonOutput.put("status", "successful");
            }
        }

    } catch (Exception e) {
        System.out.println("Exception is " + e);
        jsonOutput.put("status", "unsuccessful");
        jsonOutput.put("exception", e.toString());
    }
    return jsonOutput;
}

This code creates the share from directory1 as required. But the problem is that, under the same share, it creates directories 1 through 5 all directly under the root, not nested one under another as required.

How can I change my Java code so that the directories are created as required?


Answer:

Try something like this (the code is in C#):

var parentDirectory = share.GetRootDirectoryReference();
for (var i = 1; i < directoryNameArray.Length; i++)
{
    var directoryToCreate = directoryNameArray[i];
    var directory = parentDirectory.GetDirectoryReference(directoryToCreate);
    directory.CreateIfNotExists();
    Console.WriteLine("Created directory - " + directoryToCreate);
    parentDirectory = directory;
}

Essentially you start with the share's root directory, and as you create each child directory you keep updating the parent directory reference.
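The same walk in Java, reduced to the pure path bookkeeping (a standalone sketch of my own; the SDK's getDirectoryReference/createIfNotExists calls would be made once per produced level):

```java
import java.util.ArrayList;
import java.util.List;

public class DirectoryWalk {
    // Given "share/d1/d2/...", return the nested directory paths to create,
    // one level at a time, relative to the share's root.
    static List<String> nestedPaths(String structure) {
        String[] parts = structure.split("/");
        List<String> paths = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 1; i < parts.length; i++) { // parts[0] names the share itself
            if (current.length() > 0) current.append('/');
            current.append(parts[i]);
            paths.add(current.toString()); // create this level, then descend into it
        }
        return paths;
    }

    public static void main(String[] args) {
        System.out.println(nestedPaths("directory1/directory2/directory3"));
        // [directory2, directory2/directory3]
    }
}
```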

Question:

Premise: We have Groovy scripts that execute every minute. I want one of those scripts to open an HTTP client and poll a Service Bus queue/topic for messages. I have my REST client code working and getting messages from the Service Bus queue. I can do a "Get" every 5 seconds, and Wireshark shows it's reusing the same TCP connection, which is better than I expected, but it's still not ideal.

Goal: I would like to make this HTTP client do "long polling", for efficiency and to achieve actual real-time processing. It seems more complicated than I anticipated.

Problem: When I do a "Delete" call to read a message from a Service Bus queue, it immediately returns "HTTP/1.1 204 No Content" and the connection closes. I set a timeout on the client, but I don't think that matters.

Here's the article that says Service Bus supports long polling, which I imagine is the hard part: Azure Service Bus Queues.

I feel that I don't understand something fundamental about how to implement long polling in code. My understanding is that when there is no data in the queue, the response is supposed to be delayed until data exists, or until my client times out waiting (which lets me set my own disconnect/reconnect interval). I don't even care about blocking/non-blocking, because the script execution is already spread out into a thread pool and will be terminated forcibly and all that.

Any help is greatly appreciated.


Answer:

The correct and simple answer is to append ?timeout=60 to the Service Bus REST API URL; the 60 tells Azure to wait up to 60 seconds before responding with no data. Your application can then check for data every 60 seconds, with an internal timeout of 60 seconds on each HTTP request. This holds the TCP connection open for that timeframe while waiting for an HTTP response.
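For illustration, the long-poll request URL can be assembled like this (the namespace and queue names are placeholders of my own; /messages/head is the receive-and-delete path in the Service Bus REST API):

```java
public class LongPollUrl {
    // Append the server-side wait (in seconds) to a Service Bus
    // receive-and-delete request URL, enabling long polling.
    static String longPollUrl(String namespace, String queue, int timeoutSeconds) {
        return String.format("https://%s.servicebus.windows.net/%s/messages/head?timeout=%d",
                namespace, queue, timeoutSeconds);
    }

    public static void main(String[] args) {
        System.out.println(longPollUrl("mynamespace", "myqueue", 60));
        // https://mynamespace.servicebus.windows.net/myqueue/messages/head?timeout=60
    }
}
```

Issuing an HTTP DELETE against this URL then blocks server-side for up to the given timeout before returning 204 when the queue stays empty.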