Hot questions for using Azure


I am trying to use the Azure Storage Java API to check whether a storage container exists, and I am seeing the following exception. Any idea what it means?

ERROR ~ The account being accessed does not support http.
        at jobs.azurearm.machinepool.CreateCloudEntity.runStep(
        at jobs.Utils.ActionExecutor.<init>(
        at controllers.Clouds.createMachinePoolForAzureARM(
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(
        at java.lang.reflect.Method.invoke(
        at play.mvc.ActionInvoker.invokeWithContinuation(
        at play.mvc.ActionInvoker.invoke(
        at play.mvc.ActionInvoker.invokeControllerMethod(
        at play.mvc.ActionInvoker.invokeControllerMethod(
        at play.mvc.ActionInvoker.invoke(
        at play.server.PlayHandler$NettyInvocation.execute(
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        ...


Are you using a SAS to access the storage account? If so, please ensure your SAS doesn't contain "spr=https" when being generated. If you're using storage key to access the storage account, please set "Secure transfer required" to Disabled in storage account configuration on Azure Portal:
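If you keep "Secure transfer required" enabled instead, the Java client has to be pointed at the HTTPS endpoints. A minimal sketch (the account name and key are placeholders, and the class name is made up) of building a connection string that forces HTTPS before handing it to `CloudStorageAccount.parse` from the azure-storage SDK:

```java
public class HttpsConnectionString {

    // Build a storage connection string that uses HTTPS endpoints, which is
    // what "Secure transfer required" mandates.
    public static String build(String accountName, String accountKey) {
        return String.format(
            "DefaultEndpointsProtocol=https;AccountName=%s;AccountKey=%s",
            accountName, accountKey);
    }

    public static void main(String[] args) {
        String conn = build("myaccount", "mykey==");
        System.out.println(conn);
        // You would then pass this string to CloudStorageAccount.parse(conn)
        // before calling container.exists() (azure-storage SDK calls, shown
        // here only as comments so the sketch stays self-contained).
    }
}
```

With `DefaultEndpointsProtocol=https` the `exists()` check is made over HTTPS and the "does not support http" error should not occur.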


My project uses azure-documentdb-spring-boot-starter:0.2.0 to interact with Cosmos DB:

public interface PingEasyRepo extends DocumentDbRepository<PingEasy, String> {
}

@Document(collection = "test")
public class PingEasy {
    private String id;
    private String content;

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) { = id;
    }
}

The code runs and can save and findAll. But when I go to the Azure portal and the Cosmos DB Data Explorer, the data does not appear. When I open the browser tools, I see failed requests with

"500 Internal Server Error: Object reference not set to an instance of an object." and exceptions:

Uncaught TypeError: xhr.getResponseHeader is not a function
    at Object.DocumentClientFactory._shouldForceRetry (DocumentClientFactory.js?v=
    at HttpRequest.xhr.onreadystatechange (documentdbclient-1.14.0.js?v=
    at HttpRequest._onAjaxError (Request.js?v=
    at i (jquery.min.js:2)
    at Object.fireWith [as rejectWith] (jquery.min.js:2)
    at y (jquery.min.js:4)
    at XMLHttpRequest.c (jquery.min.js:4)

The following is the ARM template I used to auto-create my Cosmos DB account:

  "$schema": "",
  "contentVersion": "",
  "parameters": {
    "databaseAccountName": {
      "type": "string",
      "metadata": {
        "description": "The MongoDB database account name. Needs to be globally unique."
  "variables": {},
  "resources": [
      "apiVersion": "2015-04-08",
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "kind": "MongoDB",
      "name": "[parameters('databaseAccountName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "databaseAccountOfferType": "Standard",
        "name": "[parameters('databaseAccountName')]"


The issue is that you are creating a MongoDB account and using a sample that writes data using the DocumentDB API, as stated in the article you linked.

Microsoft's Spring Boot Starter enables developers to use Spring Boot applications that easily integrate with Azure Cosmos DB by using DocumentDB APIs.

MongoDB accounts are meant to be used with MongoDB clients and applications, not DocumentDB API clients and applications.

The main difference is that MongoDB's required identifier field is "_id" while DocumentDB/SQL account's required identifier is "id". When you write documents to a MongoDB account through a MongoDB client (application or a code using one of the MongoDB SDKs), the driver/client makes sure your document has the required "_id" field or autogenerates one. And when you work with a DocumentDB API sdk/client and a DocumentDB/SQL account, the sdk/client will autogenerate the required "id" field.

The Portal uses a MongoDB client to read the documents in a MongoDB account, but it is failing to read the documents because they are not valid MongoDB documents (don't have the required identifier). You will run into the same error if you try to read the documents with a MongoDB application coded by yourself or if you try to read that using a MongoDB client like Robomongo or Mongo Chef.
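To make the mismatch concrete, here is a minimal sketch (plain Java maps standing in for documents; the field values are made up, but "_id" and "id" are the real identifier fields) of why a MongoDB client rejects documents written by the DocumentDB SDK:

```java
import java.util.HashMap;
import java.util.Map;

public class IdFieldCheck {

    // A MongoDB client requires "_id"; the DocumentDB (SQL) SDK generates "id".
    public static boolean isValidMongoDocument(Map<String, Object> doc) {
        return doc.containsKey("_id");
    }

    public static void main(String[] args) {
        Map<String, Object> writtenBySqlSdk = new HashMap<>();
        writtenBySqlSdk.put("id", "abc");       // what the DocumentDB SDK adds
        writtenBySqlSdk.put("content", "ping");

        // The portal's MongoDB client cannot read this document:
        System.out.println(isValidMongoDocument(writtenBySqlSdk)); // prints "false"
    }
}
```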

In your case, if you want to use the Spring Boot sample, you need to create a DocumentDB/SQL account. In the screenshots of the article you can see how to create a SQL account.

Hope this helps.


I am using Azure Cosmos DB. I have created a simple trigger in Azure Portal as follows:

  var context = getContext();
  var request = context.getRequest();

  // item to be created in the current operation
  var itemToCreate = request.getBody();
  itemToCreate["address"] = "test";

  // update the item that will be created
  request.setBody(itemToCreate);

Unfortunately this trigger is not being triggered when I insert new documents. I have also tried to set the "Trigger Type" to "Post". Am I missing anything?


Great question! I always thought that triggers would run automatically :).

I believe the triggers are not run automatically whenever a document is inserted. What you would need to do is specify the trigger that you want to run when you're creating the document.

What you need to do is register the trigger by passing the trigger name as the request option when sending create document request.

For example, see the code here: (copied below as well). Notice the use of PreTriggerInclude in RequestOptions:

dynamic newItem = new
{
    category = "Personal",
    name = "Groceries",
    description = "Pick up strawberries",
    isComplete = false
};

Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
RequestOptions requestOptions = new RequestOptions { PreTriggerInclude = new List<string> { "trgPreValidateToDoItemTimestamp" } };
await client.CreateDocumentAsync(containerUri, newItem, requestOptions);


I am launching a Windows instance in Azure using Java and trying to open a few inbound ports, such as 445 and 8077, for my work. I have also tried opening the ports in a security group, but that only opens the inbound ports at the security-group level, not at the system level. Is there a way to open them either before launch or after launch? I have done the same thing in AWS, as asked in this question: Open some custom inbound ports e.g. 8077 by using 80 or 3389


If my understanding is right, you are using a Windows VM in Azure (an IaaS service). As with an AWS instance, you can use PowerShell to open port 8077 on the Windows Firewall.

netsh advfirewall firewall add rule name="Open Port 8077" dir=in action=allow protocol=TCP localport=8077

On an Azure VM, you also need to open the port on the Azure NSG.


If you want to use the Azure Java SDK to do this, you could use this example.

I modified the example to add a Custom Script Extension, like below:

            .withPublicSetting("commandToExecute", "netsh advfirewall firewall add rule name=\"Open Port 8077\" dir=in action=allow protocol=TCP localport=8077")

You could check my code on Github.
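For context, a fuller sketch of attaching that extension with the fluent azure-mgmt SDK might look like this (assuming an existing `VirtualMachine vm` object; the extension name "OpenPort8077" is arbitrary, and the publisher/type/version values follow the Windows Custom Script Extension):

```java
// Attach a Custom Script Extension that runs the netsh command inside the VM,
// opening port 8077 on the Windows Firewall (sketch, not verbatim from the repo).
vm.update()
    .defineNewExtension("OpenPort8077")
        .withPublisher("Microsoft.Compute")
        .withType("CustomScriptExtension")
        .withVersion("1.9")
        .withPublicSetting("commandToExecute",
            "netsh advfirewall firewall add rule name=\"Open Port 8077\""
            + " dir=in action=allow protocol=TCP localport=8077")
        .attach()
    .apply();
```

The NSG rule still has to be opened separately; the extension only changes the guest OS firewall.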


I want to create a Java application on Microsoft Azure in a Web App. The Web App service provides some Tomcat and Jetty versions with default configuration. I want to host an application that doesn't use these default versions and configurations. Is this doable, or should I opt for a VM instead?


It is doable. But in this case you will need to create a web app and then manually copy and edit configuration files.

This method is good for:

  • Java applications that require a version of Tomcat or Jetty that isn't directly supported by App Service or provided in the gallery (your case).
  • Java applications that take HTTP requests and do not deploy as a WAR into a pre-existing web container.
  • Applications that need the web container configured from scratch.
  • Applications that use a version of Java that isn't supported in App Service, which you want to upload yourself.

For cases like these, you can create an app using the portal, and then provide the appropriate runtime files manually.

Here is the tutorial for implementing this:


I have created Red Hat VM in Microsoft Azure.

I started a Java server in the VM on port 8081 and it started successfully, but I am not able to view it in a browser; the page doesn't load.

I am using Red Hat Linux OS. I believe Firewall is blocking the Port to be visible.


For Red Hat, you should open port 8081 on the Azure NSG (inbound rules), and you also need to add rules to the Red Hat firewall. You can use the following commands:

sudo firewall-cmd --zone=public --add-port=8081/tcp --permanent
sudo firewall-cmd --reload

For more information about the Red Hat firewall, please refer to this article.


I am building a Java webapp (Spring webapp using Maven build) on Azure and using Application Insights for monitoring. I used the reference link

Since I use multiple environments, I planned to pass the App Insights instrumentation key as a system property from the Azure portal app settings (a JAVA_OPTS value of -Dappinsight.instrumentation.key=xxxxxxx).

I have added required Maven dependencies and my src\main\resources\ApplicationInsights.xml has the App Insight instrumentation key reference as:

<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="" schemaVersion="2014-05-30">

  <!-- The key from the portal: -->
  <InstrumentationKey></InstrumentationKey>

  <!-- HTTP request component (not required for bare API) -->
  <TelemetryModules>
    <Add type=""/>
    <Add type=""/>
    <Add type=""/>
  </TelemetryModules>

  <!-- Events correlation (not required for bare API) -->
  <!-- These initializers add context data to each event -->
  <TelemetryInitializers>
    <Add type=""/>
    <Add type=""/>
    <Add type=""/>
    <Add type=""/>
    <Add type=""/>
  </TelemetryInitializers>

</ApplicationInsights>
But it doesn't work. When I hardcode the key directly, it works.

Is there any specific way of referencing the system properties for Application insights in Spring?


The instrumentation key provided in the configuration file is taken as is, and therefore specifying a system property will not help.

Although it is not documented, AI Java SDK tries to resolve the instrumentation key in the following order:

  1. System property: -DAPPLICATION_INSIGHTS_IKEY=your_ikey
  2. Environment variable: APPLICATION_INSIGHTS_IKEY
  3. Configuration file: ApplicationInsights.xml.

So I guess one of the first two options will satisfy you.

The SDK is open-source, you can read the code here: TelemetryConfigurationFactory.setInstrumentationKey
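The lookup order can be sketched in plain Java (this mirrors the behavior described above; it is not the SDK's actual code, and the method/class names are made up):

```java
public class IKeyResolver {

    // Resolution order described above:
    // 1. system property, 2. environment variable, 3. configuration file.
    static String resolve(String sysProp, String envVar, String configValue) {
        if (sysProp != null) return sysProp;
        if (envVar != null) return envVar;
        return configValue;
    }

    public static void main(String[] args) {
        // With no -DAPPLICATION_INSIGHTS_IKEY set, the env var wins over the file:
        System.out.println(resolve(null, "env-key", "file-key")); // prints "env-key"
    }
}
```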


I have created Red Hat VM in Microsoft Azure and able to connect via ssh.

I started a Java server in the VM on port 8081 and it started successfully, but I am not able to view it in a browser; the page doesn't load.

I have tried the following, but the page still doesn't load:


I have added an inbound security rule in the Network Security Group and associated the subnet with it. Still, I am not able to view my server in a browser.

I have followed this document for inbound security rule

Here is my inbound rule

netstat -tuplen

(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name 
tcp 0 0* LISTEN 0 20864 - 
tcp 0 0* LISTEN 0 13894 - 
tcp 0 0* LISTEN 0 18132 - 
tcp 0 0* LISTEN 994 19499 - 
tcp6 0 0 :::111 :::* LISTEN 0 13893 - 
tcp6 0 0 :::8081 :::* LISTEN 1000 28721 3212/java 
tcp6 0 0 :::22 :::* LISTEN 0 18143 - 
tcp6 0 0 :::9080 :::* LISTEN 1000 28547 3212/java 
udp 0 0* 0 16585 - 
udp 0 0* 0 23825 - 
udp 0 0* 995 15601 - 
udp 0 0* 0 16574 - 
udp 0 0* 0 23826 - 
udp6 0 0 :::57126 :::* 0 16575 - 
udp6 0 0 :::111 :::* 0 23827 - 
udp6 0 0 ::1:323 :::* 995 15602 - 
udp6 0 0 :::893 :::* 0 23828 - 

ss -tln

State Recv-Q Send-Q Local Address:Port Peer Address:Port 
LISTEN 0 10 *:* 
LISTEN 0 128 *:111 *:* 
LISTEN 0 128 *:22 *:* 
LISTEN 0 128 *:* 
LISTEN 0 128 :::111 :::* 
LISTEN 0 50 :::8081 :::* 
LISTEN 0 128 :::22 :::* 
LISTEN 0 50 :::9080 :::* 


According to your screenshot, I notice that your service is listening on tcp6 only. To my knowledge, if a port only provides an IPv6 service, you cannot reach it via an IPv4 address.

I notice that you use the Jetty container; Jetty uses IPv6 by default. You can force Jetty to use IPv4. There is a good answer about this.
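One common way to do that (assuming the server is launched with a plain java command) is the JVM's `` flag, which can also be set programmatically before any networking classes load:

```java
public class PreferIpv4 {
    public static void main(String[] args) {
        // Equivalent to passing on the command
        // line; must be set before any networking classes initialize.
        System.setProperty("", "true");
        System.out.println(System.getProperty("")); // prints "true"
    }
}
```

With this set, the server binds to an IPv4 socket (e.g. instead of tcp6-only, so the VM's public IPv4 address can reach it.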


I am using Azure's Java SDK and trying to fetch a resource's metrics, to then fetch information about its costs.


I am trying to fetch a resource's metrics:


But I get:

Status code 400, {"code":"BadRequest","message":"ApiVersion: 2018-01-01 does not support query at non Arm resource Id level"}

I am not sure how to fix this.


listByResource expects the resourceId of a resource (i.e. a storage account or VM), but you are passing the resourceId of a resource group.
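For illustration, the two ID shapes differ like this (the subscription, resource group, and VM names are placeholders):

    # Resource group ID - rejected by listByResource:
    /subscriptions/<sub-id>/resourceGroups/<rg-name>

    # Resource ID (e.g. a VM) - what listByResource expects:
    /subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>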


I'm generating a JSON file in JAVA. The file contains a list of JSONs. I want to import this file to Azure Cosmos DB as soon as it is created.

Is there some way to achieve it from Java code?

Thanks in advance!


According to my research, if you want to implement bulk operations in Java, you can use the bulk executor Java library. For more details, please refer to the document and article. For how to use the bulk executor Java library, please refer to the document.

For example

  1. My .json file

        [{
            "id": "1",
            "name": "test1",
            "age": "20"
        }, {
            "id": "2",
            "name": "test2",
            "age": "21"
        }, {
            "id": "3",
            "name": "test3",
            "age": "22"
        }, {
            "id": "4",
            "name": "test4",
            "age": "23"
        }, {
            "id": "5",
            "name": "test5",
            "age": "24"
        }, {
            "id": "6",
            "name": "test6",
            "age": "25"
        }, {
            "id": "7",
            "name": "test7",
            "age": "26"
        }, {
            "id": "8",
            "name": "test8",
            "age": "27"
        }]

  2. My pom.xml

  3. Code
        String endpoint = "<your cosmos db endpoint>";
        String key = "<your key>";
        ConnectionPolicy connectionPolicy = new ConnectionPolicy();
        DocumentClient client = new DocumentClient(endpoint, key, connectionPolicy,
                ConsistencyLevel.Session);
        String databaseId = "testbulk";
        String collectionId = "items";
        String databaseLink = String.format("/dbs/%s", databaseId);
        String collectionLink = String.format("/dbs/%s/colls/%s", databaseId, collectionId);

        ResourceResponse<Database> databaseResponse = null;
        Database readDatabase = null;
        try {
            databaseResponse = client.readDatabase(databaseLink, null);
            readDatabase = databaseResponse.getResource();

            System.out.println("Database already exists...");
        } catch (DocumentClientException dce) {
            if (dce.getStatusCode() == 404) {
                System.out.println("Attempting to create database since non-existent...");

                Database databaseDefinition = new Database();
                databaseDefinition.setId(databaseId);
                client.createDatabase(databaseDefinition, null);

                databaseResponse = client.readDatabase(databaseLink, null);
                readDatabase = databaseResponse.getResource();
            } else {
                throw dce;
            }
        }

        ResourceResponse<DocumentCollection> collectionResponse = null;
        DocumentCollection readCollection = null;

        try {
            collectionResponse = client.readCollection(collectionLink, null);
            readCollection = collectionResponse.getResource();

            System.out.println("Collection already exists...");
        } catch (DocumentClientException dce) {
            if (dce.getStatusCode() == 404) {
                System.out.println("Attempting to create collection since non-existent...");

                DocumentCollection collectionDefinition = new DocumentCollection();
                collectionDefinition.setId(collectionId);

                PartitionKeyDefinition partitionKeyDefinition = new PartitionKeyDefinition();
                Collection<String> paths = new ArrayList<String>();
                paths.add("/id"); // partition key path; adjust to your schema
                partitionKeyDefinition.setPaths(paths);
                collectionDefinition.setPartitionKey(partitionKeyDefinition);

                RequestOptions options = new RequestOptions();

                // create a collection
                client.createCollection(databaseLink, collectionDefinition, options);

                collectionResponse = client.readCollection(collectionLink, null);
                readCollection = collectionResponse.getResource();
            } else {
                throw dce;
            }
        }

        ArrayList<String> list = new ArrayList<String>();
        JSONParser jsonParser = new JSONParser();
        try (FileReader reader = new FileReader("e:\\test.json")) {

            // Read JSON file
            Object obj = jsonParser.parse(reader);

            JSONArray jsonArray = (JSONArray) obj;
            // collect the JSON array entries as strings
            if (jsonArray != null) {
                int len = jsonArray.size();
                for (int i = 0; i < len; i++) {
                    list.add(jsonArray.get(i).toString());
                }
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ParseException e) {
            e.printStackTrace();
        }

        // Set client's retry options high for initialization
        client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(30);
        client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(9);

        // Builder pattern
        DocumentBulkExecutor.Builder bulkExecutorBuilder = DocumentBulkExecutor.builder().from(
                client, databaseId, collectionId, readCollection.getPartitionKey(),
                20000); // throughput you want to allocate for bulk import out of the container's total throughput

        // Instantiate DocumentBulkExecutor
        try {
            DocumentBulkExecutor bulkExecutor =;

            // Set retries to 0 to pass complete control to bulk executor
            client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(0);
            client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(0);

            BulkImportResponse bulkImportResponse = bulkExecutor.importAll(list, false, false, null);
        } catch (Exception e) {
            e.printStackTrace();
        }

I've been using Azure's Cosmos DB for a while. Recently I did a bulk import using a stored procedure in a collection of my database, and that worked fine. Now I have to do the same in another collection which uses partitioning; I searched the Azure code samples and modified my previous bulk-insert function like this:

public void createMany(JSONArray aDocumentList, PartitionKey aPartitionKey)  throws DocumentClientException {
        List<String> aList = new ArrayList<String>();
        for (int aIndex = 0; aIndex < aDocumentList.length(); aIndex++) {
                JSONObject aJsonObj = aDocumentList.getJSONObject(aIndex);
                aList.add(aJsonObj.toString());
        }

        String aSproc = getCollectionLink() + BULK_INSERTION_PROCEDURE;
        RequestOptions requestOptions = new RequestOptions();
        requestOptions.setPartitionKey(aPartitionKey);

        String result = documentClient.executeStoredProcedure(aSproc, requestOptions,
                        new Object[] { aList }).getResponseAsString();
}


but this code gives me error: Message: {"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted.\"]}\r\nStack trace: Error: {\"Errors\":[\"Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted.\"]}\n   at callback (bulkInsertionStoredProcedure.js:1:1749)\n   at Anonymous function (bulkInsertionStoredProcedure.js:689:29)"]}

I'm not quite certain what that error actually means. Since the partition key is just a JSON key in the document, why would it need to match across documents? Do I need to append the partition key to each of my documents as well? Could anyone tell me what I'm missing here? I've searched the internet and haven't found anything useful that could make it work.


I've already answered this question here. The gist of it is that the documents you're inserting with your SPROC must have a partition key value that matches the one you pass in the request options:

// ALL documents inserted must have a partitionKey value that matches
// the "aPartitionKey" value

So if aPartitionKey == 123456, then all the documents you are inserting with the SPROC are required to belong to that partition. If you have documents spanning multiple partitions that you want to bulk insert, you will have to group them by partition key and run the SPROC separately for each group.
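That grouping step can be sketched with plain Java collections (documents represented as maps; "city" is a hypothetical partition key field):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionGrouper {

    // Group documents by their partition key value so each group can be sent
    // to the stored procedure in a separate call.
    static Map<String, List<Map<String, Object>>> groupByPartitionKey(
            List<Map<String, Object>> docs, String pkField) {
        Map<String, List<Map<String, Object>>> groups = new HashMap<>();
        for (Map<String, Object> doc : docs) {
            groups.computeIfAbsent(String.valueOf(doc.get(pkField)),
                                   k -> new ArrayList<>()).add(doc);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> docs = new ArrayList<>();
        docs.add(new HashMap<>(Map.of("id", "1", "city", "NYC")));
        docs.add(new HashMap<>(Map.of("id", "2", "city", "LA")));
        docs.add(new HashMap<>(Map.of("id", "3", "city", "NYC")));

        // Two documents share the "NYC" partition, so they go in one SPROC call:
        System.out.println(groupByPartitionKey(docs, "city").get("NYC").size()); // prints "2"
    }
}
```

Each resulting group is then executed with a `RequestOptions` whose partition key matches that group's key value.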


We have to set up MS Azure Active Directory authentication for one of our legacy applications, which is on Struts 1 and will run on JBoss EAP 7.

The basic setup is like this. We have a welcome file called index.html as below.

 <FRAMESET border=0 name=fs_rep ROWS="18%,*">
  <FRAME SRC="heading.html" NAME="HEADING">
  <FRAME SRC="logon.jsp" NAME="DISPLAY">
  <NOFRAMES>
   This browser does not support frames. The application cannot be displayed.
  </NOFRAMES>
 </FRAMESET>

When the application starts user sees the login page, gives the credentials and the request goes to LoginAction class which does the LDAP verification.

We are following this link for setting up MS AD Authetication.

We have created a basic filter in web.xml as


This filter has code for authentication and redirects user to Azure login page. We have given the "Response URL" in azure as: http://localhost:8001/MyApp/index.html

This setup works fine with WebLogic server, but when I deploy the same on JBoss EAP 7, it takes us to the MS Azure sign-in page, we give credentials, the basic filter runs, and finally the browser shows "HTTP method POST is not supported by this URL".

Are we on the wrong track? How can POST be supported for the URL (this happens only on JBoss)?


It seems that the HTTP method POST for .html files is not supported by default on JBoss, which is different from other servlet engines.

In my experience, there are a couple of ways to solve the issue.

  1. It seems like a security constraint on JBoss, which may be changed by setting the configuration below in the web.xml file of your project:

         <security-constraint>
             <display-name>Example Security Constraint</display-name>
             <web-resource-collection>
                 <web-resource-name>Protected Area</web-resource-name>
                 <!-- url-pattern is assumed here; scope it to the pages that must accept POST -->
                 <url-pattern>/*</url-pattern>
             </web-resource-collection>
         </security-constraint>

  2. As a workaround, you can rename your index.html to index.jsp. This compiles your HTML as a JSP run by the JBoss servlet container; a JSP is always served through the service() method, which should avoid the issue on JBoss.


My code using the Azure Java SDK is as follows. I am able to authenticate and get my API Management generic resource, but I do not understand how to proceed to access all my registered APIs from the generic resource.

ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(
                clientID, domainID, secret, AzureEnvironment.AZURE);
        Azure.Authenticated authenticated = Azure.authenticate(credentials);

        Azure azure = authenticated.withSubscription(subscriptionID);
        GenericResource genericResource = azure.genericResources().get(resourceGroupName,
                "Microsoft.ApiManagement", "service", resourceName);

Please help.


If you want to list the APIs in Azure API Management with Java, you can use the SDK azure-mgmt-apimanagement. For more details, please refer to

For example

  1. Install the SDK

  2. Code

        ApplicationTokenCredentials credentials = new ApplicationTokenCredentials(
                clientID, domainID, secret, AzureEnvironment.AZURE);
        ApiManagementManager apiManager = ApiManagementManager.configure()
                .authenticate(credentials, subscriptionId);
        Observable<ApiContract> result = apiManager.apis().listByServiceAsync("testapi06", "testapi06");
        ArrayList<ApiContract> apis = new ArrayList<ApiContract>();
        result.doOnNext(r -> apis.add(r))
                .doOnCompleted(() -> System.out.println("completed"))
                .toBlocking()
                .subscribe();

        for (ApiContract api : apis) {
            System.out.println(;
        }



I'm using Java DocumentDb bulk-executor to bulk import an array of json to Azure Cosmos DB.

Sample JSON :

    "SId": "101",
    "SName": "ABC"
    "SId": "102",
    "SName": "XYZ"

Sample Code :


DocumentCollection collection = Utilities.createEmptyCollectionIfNotExists(client, DATABASE, CONTAINER, PARTITION_KEY, THROUGHPUT);
            ArrayList<String> list = new ArrayList<String>();
            JSONParser jsonParser = new JSONParser();
            try (FileReader reader = new FileReader("C:\\samplejson.json")) {

                Object obj = jsonParser.parse(reader);

                JSONArray jsonArray = (JSONArray) obj;

                if (jsonArray != null) {
                    int len = jsonArray.size();
                    for (int i = 0; i < len; i++) {
                        list.add(jsonArray.get(i).toString());
                    }
                }
            }

            DocumentBulkExecutor.Builder bulkExecutorBuilder = DocumentBulkExecutor.builder().from(client, DATABASE, CONTAINER,
                    collection.getPartitionKey(), 20000);

            DocumentBulkExecutor bulkExecutor =;

            BulkImportResponse bulkImportResponse = bulkExecutor.importAll(list, false, false, null);

Now suppose I have this other JSON:

    "SId": "101,         // Item with this SID has already been inserted
    "SName": "ABCDEF"
    "SId": "103",
    "SName": "PQR"

I want to insert this JSON into the same container, but it is just stored as a new entry with a different "id", automatically created by Cosmos DB.

How can I bulk import and overwrite an item on the basis of "SId", if it already exists, at the same time?

Please help!


You need to change the isUpsert flag in your call to importAll to true. This enables the upsert operation, which means it will either add a new document if the id doesn't already exist, or update the existing document if the id is already there. Note that upsert matches on the document's "id" field, so for this to overwrite by "SId" you should set each document's "id" from its "SId" before importing, rather than letting Cosmos DB auto-generate it.

Change line:

BulkImportResponse bulkImportResponse = bulkExecutor.importAll(list, false, false, null);

to:

BulkImportResponse bulkImportResponse = bulkExecutor.importAll(list, true, false, null);


In Azure portal I have registered an App of type 'Native'. In Java I was able to get the access token using this API call


Request Params

  1. client_id: the appId on the Azure portal
  2. grant_type: "password" (this is hardcoded)
  3. resource: ""
  4. username: email
  5. password: the password for that email

This gives me an accessToken and a refreshToken. I can use this accessToken to call any of the Power BI APIs, like get all reports, clone reports, create datasets, etc.

Now I want to embed a report in my web page, and I use this API via jQuery:

function embedPBIReport(txtAccessToken, embedUrl, embedReportId, mode) {

        // Read embed URL from textbox
        var txtEmbedUrl = embedUrl;

        // Read report Id from textbox
        var txtEmbedReportId = embedReportId;

        // Get models. models contains enums that can be used.
        var models = window['powerbi-client'].models;

        // We give All permissions to demonstrate switching between View and Edit mode and saving report.
        var permissions = mode == 1 ? models.Permissions.Read : models.Permissions.ReadWrite;
        var viewMode = mode == 1 ? models.ViewMode.View : models.ViewMode.Edit;

        // Embed configuration used to describe the what and how to embed.
        // This object is used when calling powerbi.embed.
        // This also includes settings and options such as filters.
        // You can find more information at
        var config = {
            type: 'report',
            tokenType: models.TokenType.Embed,
            accessToken: txtAccessToken,
            embedUrl: txtEmbedUrl,
            id: txtEmbedReportId,
            permissions: permissions,
            viewMode: viewMode,
            settings: {
                filterPaneEnabled: false,
                navContentPaneEnabled: true
            }
        };

        // Get a reference to the embedded report HTML element
        var embedContainer = $('#reportContainer');

        // Embed the report and display it within the div container.
        var report = embedContainer.powerbi(config);
}

When I initiate the embed on the web page, it creates an iframe, shows the Power BI icon as a loader, and then throws this error:

{"message":"LoadReportFailed","detailedMessage":"Get report failed","errorCode":"403","level":6,"technicalDetails":{"requestId":"f62b4819-7cd0-1c6d-1af0-a89050881a8a"}}

I have googled this issue, and people say a 403 is caused when the authentication process is not correct. What am I doing wrong here?


It looks like you are trying to embed the report specifying the wrong token type. In your code the token type is set to Embed:

tokenType: models.TokenType.Embed

while you never mention that such a token is generated (using GenerateTokenInGroup, for example). So you are probably using the token acquired during the initial authentication. If you want to use it, you should change the token type to Aad:

tokenType: models.TokenType.Aad

The difference is that an Azure AD token gives access to the user's data, reports, dashboards and tiles, while an embed token is specific to the embedded item. Also, the embed token has a shorter lifetime (~5 minutes) than an AAD token (~1 hour).


I have used to authenticate at azure portal to embed power bi reports in my application.

There is a class named Office365Authenticator which I used to authenticate using my credentials. I have provided

  1. client id="3b54c59c-2602-4100-b4e5-xxxxxxxxxxxx"(which i presume is application id on azure portal)
  2. tenant id="b3e3ea8a-1379-4a80-acdd-xxxxxxxxxxxx" (Directory Id)
  3. username (azure portal login email)
  4. password (azure portal login password)

    Office365Authenticator ads = new Office365Authenticator(CLIENT_ID, TENANT, USERNAME, PASSWORD);

But it throws an error

{"error":"invalid_request","error_description":"AADSTS90019: No tenant-identifying information found in either the request or implied by any provided credentials.\r\nTrace ID: 948699d9-0f5d-4dd8-af3d-xxxxxxxxxxxx\r\nCorrelation ID: 27a9bdc9-90c1-4b40-9fe8-xxxxxxxxxxxx\r\nTimestamp: 2019-03-07 14:27:04Z"}

I have searched but have no clue why this is happening in my scenario, when I have verified that the tenant id is correct and the user is associated with this tenant id, as can be seen in the attached image.

Any help will be appreciated.


To use ROPC (the username-and-password flow), you should have the following parameters:

1. client_id: your application id in the azure portal
2. client_secret: you could create this key in the application
3. grant_type:password
4. username: the user account that you want in the azure portal
5. password: the password for your account
6. scope: email openid(here use the microsoft graph api as an example, and the related permissions: User.Read, email, openid)

For the details, you could refer to here.
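Putting those parameters together, the token request can be sketched as a single POST to the v2.0 token endpoint (the tenant id, application id, secret, and account values are placeholders you must substitute):

```shell
# ROPC token request sketch - all bracketed values are placeholders
curl -X POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  -d "client_id={application-id}" \
  -d "client_secret={key-created-in-the-app}" \
  -d "grant_type=password" \
  -d "username={user@contoso.com}" \
  -d "password={password}" \
  -d "scope=openid email User.Read"
```

The JSON response contains the access_token to pass as a Bearer header on subsequent API calls.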


I want to run a PowerShell command with Java on a remote Windows machine; the command opens an inbound firewall port. script.ps1 contains the command below:

PowerShell cmd:- netsh advfirewall firewall add rule name="Open Port (8077)" dir=in action=allow protocol=TCP localport=(8077)

The code below works fine locally. But I want to do the same on a remote machine from my local machine only, and I can't do anything manually (not even creating a .ps1 file over there). I have admin rights on the remote computer.


public class TestMain2 {

    public static void main(String[] args) throws IOException {
        String command = "powershell.exe \"C:\\Users\\Administrator\\Desktop\\agent_port\\script.ps1\"";

        // Executing the command
        Process powerShellProcess = Runtime.getRuntime().exec(command);

        // Getting the results
        String line;
        System.out.println("Standard Output:");
        BufferedReader stdout = new BufferedReader(new InputStreamReader(powerShellProcess.getInputStream()));
        while ((line = stdout.readLine()) != null) {
            System.out.println(line);
        }

        System.out.println("Standard Error:");
        BufferedReader stderr = new BufferedReader(new InputStreamReader(powerShellProcess.getErrorStream()));
        while ((line = stderr.readLine()) != null) {
            System.out.println(line);
        }
    }
}


I also tried this link: Running Powershell script remotely through Java


The example in your link requires the WinRM service to be enabled on the remote VM. By default, an Azure Windows VM does not have WinRM enabled, so you cannot use that example as-is.

For an Azure VM, you can use the Azure Custom Script Extension to do this.

You can use this example and add the following code:

        //Add Azure Custom Script Extension
            .withPublicSetting("commandToExecute", "netsh advfirewall firewall add rule name=\"Open Port 8077\" dir=in action=allow protocol=TCP localport=8077")
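Because the rule name contains spaces, the quoting inside the commandToExecute string is easy to get wrong. Here is a small illustrative helper (mine, not part of the Azure SDK) that builds the netsh command string for an arbitrary port:

```java
public class FirewallCommand {

    // Builds the netsh command passed to the Custom Script Extension's
    // "commandToExecute" setting for a given inbound TCP port.
    // Note the embedded quotes around the rule name.
    static String buildFirewallRuleCommand(int port) {
        return "netsh advfirewall firewall add rule"
                + " name=\"Open Port " + port + "\""
                + " dir=in action=allow protocol=TCP localport=" + port;
    }

    public static void main(String[] args) {
        System.out.println(buildFirewallRuleCommand(8077));
    }
}
```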


I need to import user information from Azure AD and allow those users to sign in to my application using their Azure AD credentials.

Currently I am using the Azure Graph API. I add an application in the Azure portal manually and obtain the client ID, tenant ID and secret key from the portal. In my application I expect the user to provide these three fields, and with them I call the Graph API to get the user details.

My question is: is it reasonable to expect the customer to add the application in their Azure portal manually?

If not, how can I import the data using Java?


According to the Authentication with Azure AD part of the Features section in the link, you need to use either the OAuth 2.0 client credentials flow or the authorization code grant flow to acquire a token to call the Graph API. Both flows require a client_id; please refer to the link.

You can also see the Configuring multi-tenant applications section of the link to learn how to make your application work across organizations.

Then use the Azure Graph API in Java to create users for the different tenants.


I'm trying to access an Azure-hosted SQL database from my Java application. I checked port 1433 using nmap and it shows as closed:

Starting Nmap 7.12 ( ) at 2016-09-02 09:44 PHT
Nmap scan report for localhost (
Host is up (0.00014s latency).
Other addresses for localhost (not scanned): ::1
1433/tcp closed ms-sql-s

I have edited /etc/pf.conf and restarted my Mac, but the port is still closed. Here is my pf.conf:

scrub-anchor "*"
nat-anchor "*"
rdr-anchor "*"
dummynet-anchor "*"
anchor "*"
load anchor "" from "/etc/pf.anchors/"
pass in proto tcp from any to any port 1433

Also, my firewall is set to off.

My Java app is throwing this error:

Error starting database: The TCP/IP connection to the host, port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".

Java code:

String connectionString =
        "jdbc:sqlserver://<your-server>.database.windows.net:1433;"
        + "database=<your-db>;user=<user>@<your-server>;password=<password>;"
        + "encrypt=true;loginTimeout=30;"; // placeholder values

Connection connection = null;

try {
    connection = DriverManager.getConnection(connectionString);
    System.out.println("connected");
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (connection != null) {
        try {
            connection.close();
        } catch (Exception e) {
            // ignore failure on close
        }
    }
}


You're overlooking the Azure SQL server-level firewall:

You need to whitelist your Mac's public IP address there to be able to connect from your local machine. By default, only Azure services can reach 1433/TCP on your Azure SQL instance (permitted by the Allow access to Azure services setting in the Azure SQL firewall).

If you get a new public IP address every time you reboot your DSL/cable/fiber modem, you'll need to define a whole range of addresses, not just the one (i.e. a range rather than a single address). Hopefully you'll always get an IP address in the same range.
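For example, if your ISP always hands out addresses from the same /24 block, you could whitelist that whole block as the firewall rule's start and end address. A small illustrative helper (the 203.0.113.x addresses are documentation examples, not real ones):

```java
public class FirewallRange {

    // Returns the first and last address of the /24 block containing ip,
    // e.g. "203.0.113.42" -> { "203.0.113.0", "203.0.113.255" }.
    static String[] slash24Range(String ip) {
        int lastDot = ip.lastIndexOf('.');
        String prefix = ip.substring(0, lastDot);
        return new String[] { prefix + ".0", prefix + ".255" };
    }

    public static void main(String[] args) {
        String[] range = slash24Range("203.0.113.42");
        System.out.println(range[0] + " - " + range[1]);
    }
}
```

The two returned addresses map directly to the "start IP" and "end IP" fields of an Azure SQL firewall rule.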


Does the current azure-java-sdk support Virtual Machine Scale Set creation? I've checked the Java docs and could not find anything there.


Azure SDK for Java 1.0.0-Beta1, which is currently available on Maven Central, does have support for VM Scale Sets.


I'm integrating Azure AD login authentication into my web app. I have created an account in the Azure development portal and registered my app as a web app. In the app registration settings, I have provided the redirect URL like below:

redirect URL:

In my Java web app, I have implemented the logic to acquire Azure's token at the above-mentioned endpoint (azureLogin.html). I used the ADAL Java library to implement the logic below:

private AuthenticationResult acquireTokenByAuthorizationCode(String authCode) {
    // Placeholder property keys and values; substitute your own configuration.
    String authority = System.getProperty("<authority-property>", "<authority-url>");
    String clientId = System.getProperty("<client-id-property>", "xxxxxxxxxxxxxxxxxxxxxxxxx");
    String clientSecret = System.getProperty("<client-secret-property>", "xxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    String redirectUrl = System.getProperty("<redirect-url-property>", "<redirect-url>");
    AuthenticationResult result = null;
    ExecutorService service = null;
    try {
      service = Executors.newFixedThreadPool(1);
      AuthenticationContext context = new AuthenticationContext(authority, false, service);
      ClientCredential credential = new ClientCredential(clientId, clientSecret);
      Future<AuthenticationResult> future = context.acquireTokenByAuthorizationCode(authCode, URI.create(redirectUrl), credential, null);
      result = future.get();
    } catch (Exception e) {
      LOGGER.error("Error occurred while acquiring token from Azure {}", e.getMessage());
      throw new RuntimeException(String.format("Error occurred while acquiring token from Azure. %s", e.getMessage()), e);
    } finally {
      if (service != null) {
        service.shutdown();
      }
    }
    return result;
}

Note: I have not provided a value for "home page URL"; I believe this is not mandatory.

Now, while doing the following steps, I'm facing the error:

Log in to the Office 365 portal

Sign in with my account credentials

After landing on the Office 365 home page, I can see my web app's icon listed

On clicking my web app's icon/button, I get redirected and finally the error below is thrown. There are no log updates in my web app's server log; I'm sure the request has not reached my web app.

"You cannot access this application because it has been misconfigured. Contact your IT department and include the following information:
Undefined Sign-On URL for application"

If I provide my web app's login URL in the home page URL field, like below,

home page URL:

then when trying to open my app from Office 365, it opens my web app's login page (where it prompts for the application's DB username & password). This is not what I'm looking for.

What I want to achieve is: log in to Office 365 -> click my web app's button -> the redirect URL mentioned in the Azure portal during my app registration should load -> which eventually calls the logic written in my web app to acquire the Azure token and log in to my app with the returned token stored in the session.

Please let me know what I am missing here. Why am I getting this "Undefined Sign-On URL for application" error? On clicking my app's icon in the Office 365 portal, why is it not redirecting to the configured redirect URL?


Issue: "You cannot access this application because it has been misconfigured. Contact your IT department and include the following information: Undefined Sign-On URL for application"

Regarding the error, you need to configure the home page URL; that will fix it. For more details, please refer to the linked documentation.

Issue: on clicking my app's icon in the Office 365 portal, why is it not redirecting to the configured redirect URL?

Regarding this issue, I think you are mixing up the Sign-On URL and the redirect URL; they are different. Typically the Sign-On URL is a URL that triggers login against Azure AD. The redirect URL is the location the authorization server sends the user to once the app has been successfully authorized.


What is the best way to get all the information sent back by a website after sending a GET request? My main problem is that I am not able to log in to a Microsoft account using code. I've written the following code to get all the parameters:


public class Requests {

    public static void main(String[] args) throws IOException {
        URL url = new URL("Microsoft Portal URL");
        HttpURLConnection httpCon = (HttpURLConnection) url.openConnection();
        // Note: for a plain GET, do not open the output stream;
        // writing a request body requires httpCon.setDoOutput(true) first.
        System.out.println("Response Code " + httpCon.getResponseCode());
        System.out.println("Response Status " + httpCon.getResponseMessage());
        System.out.println("Header Fields " + httpCon.getHeaderFields());
        System.out.println("Sent URL " + httpCon.getURL());
    }
}

I am getting the result as follows :

Response Code 200 
Response Status OK 
Header Fields {null=[HTTP/1.1 200 OK], client-request-id=[9031e090-ea92-4581-b8d1-5b1c66076b50],
Content-Length=[7796], Expires=[-1],
Set-Cookie=[stsservicecookie=ests; path=/; secure; HttpOnly,
x-ms-gateway-slice=productiona; path=/; secure; HttpOnly,
flight-uxoptin=true; path=/; secure; HttpOnly],
X-Powered-By=[ASP.NET], Server=[Microsoft-IIS/8.5],
Cache-Control=[no-cache, no-store], Pragma=[no-cache],
Strict-Transport-Security=[max-age=31536000; includeSubDomains],
Date=[Wed, 16 Mar 2016 08:41:08 GMT], P3P=[CP="DSP CUR OTPi IND OTRi ONL FIN"],
Content-Type=[text/html; charset=utf-8]

I need the redirect URI, which is available only if I am logged in to a Microsoft account. So I need to log in to the website using code.

What I want to do is: after sending a GET request in this format: GET "{client_id}&redirect_uri={redirect_uri}"

it gives me back a parameter named code (when I use a REST client application). To get this code, a user must be logged in to the Azure portal.

My problem is that when I do all this with Java code, I do not get the code, because I am not able to log in using Java code. Please help me with this issue.


On Azure, there are three groups of REST APIs that need to be authenticated: Resource Management APIs, Service Management APIs, and other more specific APIs such as those for Azure Storage or Service Bus.

  1. To use the Resource Management APIs, you need to authenticate Azure Resource Manager requests; please see Authenticating Azure Resource Manager requests. There are also some Java samples you can refer to on GitHub.

  2. To use the Service Management APIs, you need to authenticate Service Management requests; please see Authenticating Service Management Requests. There is an official blog showing how to get started with Java.

  3. To use some of the specific APIs, you need to construct a Shared Access Signature (SAS) token for delegating access. For example, for delegating access with a Shared Access Signature to the Azure Storage Service REST API, you can see the Service SAS Examples to learn how to construct one.
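For the delegated user sign-in the question describes, the `code` parameter comes from the interactive authorization endpoint, which a plain unauthenticated GET from Java cannot produce: the user has to sign in through a browser, and the authorization server then delivers `code` to the redirect URI. Below is a sketch of building that browser URL, assuming the v2.0 authorize endpoint and placeholder scopes (the exact endpoint and parameters were elided in the question):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class AuthorizeUrl {

    // Builds the interactive authorization-code request URL. Opening this
    // URL in a browser and signing in is what yields the "code" parameter
    // on the redirect URI; it cannot be obtained with a headless GET.
    static String build(String tenant, String clientId, String redirectUri)
            throws UnsupportedEncodingException {
        return "https://login.microsoftonline.com/" + tenant
                + "/oauth2/v2.0/authorize"
                + "?client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&response_type=code"
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
                + "&scope=" + URLEncoder.encode("openid User.Read", "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Placeholder client id and redirect URI.
        System.out.println(build("common", "<client-id>", "https://localhost/callback"));
    }
}
```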


I am trying to use the diamond operator in my code:

HashMap<String, Integer> unsyncMap = new HashMap<>();

However, I am receiving the following error in the Azure DevOps pipeline:

[INFO] -------------------------------------------------------------
##[debug]full match =  /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/[12,58] diamond 
##[debug]file path =  /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/
##[debug]line number = 12
##[debug]column number = 58
##[debug]message =  diamond 
##[error] /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/,58): error :  /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/[12,58] diamond 
##[debug]Processed: ##vso[task.issue type=error;sourcepath= /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/;linenumber=12;columnnumber=58;] /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/[12,58] diamond 
[ERROR] /home/vsts/work/1/s/src/main/java/org/train/modules/hashmaps/[12,58] diamond operator is not supported in -source 6
  (use -source 7 or higher to enable diamond operator)

The YAML is already updated to use JDK 1.11:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'compile'

Also note that the debug output of the Maven build shows that it is using JDK 11:

##[debug]Using the specified JDK version to find and set JAVA_HOME
##[debug]Locate JAVA_HOME for Java 1.11 x64
##[debug]Processed: ##vso[telemetry.publish area=TaskHub;feature=Maven]{"jdkVersion":"1.11"}
##[debug]set JAVA_HOME=/usr/lib/jvm/zulu-11-azure-amd64
##[debug]Processed: ##vso[task.setvariable variable=JAVA_HOME;issecret=false;]/usr/lib/jvm/zulu-11-azure-amd64
##[debug]Enabled code coverage successfully


You can configure the Java version in the Maven compiler plugin. In your pom.xml it should look like this:


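The pom.xml fragment the answer above refers to appears to be missing. A minimal sketch, assuming the standard maven-compiler-plugin on a JDK 11 build agent:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <!-- compile against the Java 11 platform; requires JDK 9+ -->
        <release>11</release>
      </configuration>
    </plugin>
  </plugins>
</build>
```

Alternatively, setting the maven.compiler.source and maven.compiler.target properties to 11 achieves the same on older plugin versions. Without this, Maven defaults to an old -source level, which is why the pipeline's JDK 11 setting alone did not help.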

I'm building a project that consists of one web site and one Java application. The web site runs as a Web App in Azure, and my application runs in a virtual machine in Azure too. The Java application is a web server that a user can consume from the internet.

What I'm trying to accomplish: when a user types the site's URL, it opens the web site, and when he types the application's URL, he should be redirected to my application in the virtual machine, so he can access my web server without using the IP directly.


In my experience, there are two ways to satisfy your needs, and it's not necessary to dedicate a separate port for redirecting requests to the VM.

  1. As @DavidMakogon said, you can create a request filter in your Java application and use a method like response.sendRedirect() to redirect requests matching a URL pattern; this is based on HTTP 30X.

  2. You can create an Azure Application Gateway in front of your website & VM to direct requests using URL-based routing; please see the official article Create an application gateway using URL based routing to get started.
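Option 1 can be sketched without a servlet container by using the JDK's built-in com.sun.net.httpserver; in a real web app a filter would call response.sendRedirect() with the same Location value. The VM hostname and the /app path below are placeholders:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class RedirectServer {

    // Starts a tiny HTTP server that answers every request under /app
    // with a 302 redirect to the same path on `target`.
    static HttpServer start(int port, String target) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/app", exchange -> {
            exchange.getResponseHeaders().add("Location",
                    target + exchange.getRequestURI().getPath());
            exchange.sendResponseHeaders(302, -1); // -1 = no response body
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        // "<your-vm-hostname>" is a placeholder for the VM's DNS name.
        start(8080, "http://<your-vm-hostname>:8077");
        System.out.println("Redirecting /app/* on port 8080");
    }
}
```

The browser follows the 302 to the VM, so the user never has to type the VM's IP directly (though the VM's address does become visible in the address bar, which is where option 2's Application Gateway differs).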


I am familiar with the storage library and the management library, but they don't use the Azure portal credentials to log in. Is there another way?


According to my research, you can authenticate using a username and password. For example:

UserTokenCredentials creds = new UserTokenCredentials(
    "<your client id>",   // placeholder values
    "<your ad domain>",
    "<your username>",
    "<your password>",
    AzureEnvironment.AZURE);
Azure.Authenticated azureAuth = Azure.authenticate(creds);

But please note that Microsoft does not recommend that customers use this approach. For more details, please refer to the linked documentation.