Hot questions for Using Enterprise JavaBeans in cluster computing

Question:

I'm trying to create a simple clustered Singleton on Wildfly 8.2. I've configured 2 Wildfly instances, running in a standalone clustered configuration. My app is deployed to both, and I'm able to access it with no problem.

My clustered EJB looks like this:

@Named
@Clustered
@Singleton
public class PeekPokeEJB implements PeekPoke {

    /**
     * Logger for this class
     */
    private static final Logger logger = Logger
            .getLogger(PeekPokeEJB.class);

    private static final long serialVersionUID = 2332663907180293111L;

    private int value = -1;

    @Override
    public void poke() {
        if (logger.isDebugEnabled()) {
            logger.debug("poke() - start"); //$NON-NLS-1$
        }

        Random rand = new SecureRandom();
        int newValue = rand.nextInt();
        if (logger.isDebugEnabled()) {
            logger.debug("poke() - int newValue=" + newValue); //$NON-NLS-1$
        }

        this.value = newValue;

        if (logger.isDebugEnabled()) {
            logger.debug("poke() - end"); //$NON-NLS-1$
        }
    }

    @Override
    public void peek() {
        if (logger.isDebugEnabled()) {
            logger.debug("peek() - start"); //$NON-NLS-1$
        }

        if (logger.isDebugEnabled()) {
            logger.debug("peek() - value=" + value); //$NON-NLS-1$
        }

        if (logger.isDebugEnabled()) {
            logger.debug("peek() - end"); //$NON-NLS-1$
        }
    }
}

...and I've written a very simple RESTful service to let me call these methods through the browser...

@Path("/test")
@Named
public class TestRS extends AbstractRestService {
    /**
     * Logger for this class
     */
    private static final Logger logger = Logger.getLogger(TestRS.class);

    @Inject
    private PeekPoke ejb = null;

    @GET
    @Path("/poke")
    public void poke() {
        if (logger.isDebugEnabled()) {
            logger.debug("poke() - start"); //$NON-NLS-1$
        }

        this.ejb.poke();

        if (logger.isDebugEnabled()) {
            logger.debug("poke() - end"); //$NON-NLS-1$
        }
    }

    @GET
    @Path("/peek")
    public void peek() {
        if (logger.isDebugEnabled()) {
            logger.debug("peek() - start"); //$NON-NLS-1$
        }

        this.ejb.peek();

        if (logger.isDebugEnabled()) {
            logger.debug("peek() - end"); //$NON-NLS-1$
        }
    }
}

I'm able to call both the peek and poke methods from a single Wildfly instance and get the expected value. However, if I attempt to call poke from one instance, and peek from another I see that the values are not being replicated across the EJBs.

I was under the impression that a clustered singleton would replicate the value of 'value' across both application servers, providing the same value regardless of which host I made the peek call from. Is this not correct? Is there something I'm missing that still needs to be added to this code?

I'd appreciate any help you can give me! Thanks!


Answer:

Singleton session beans provide a formal programming construct that guarantees a session bean will be instantiated once per application in a particular Java Virtual Machine (JVM).

JSR 318: Enterprise JavaBeans™, Version 3.1 says:

A Singleton session bean is a session bean component that is instantiated once per application. In cases where the container is distributed over many virtual machines, each application will have one bean instance of the Singleton for each JVM.

Hence, in a clustered application, each cluster member has its own instance of a singleton session bean, and data is not shared across JVM instances (in the WildFly implementation).
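A plain-Java toy model makes this concrete (the Node and PeekPokeSingleton classes below are illustrative, not container code): each "node" constructs its own singleton at startup, so a poke handled by one node is invisible to a peek handled by the other.

```java
// Toy model of two cluster nodes, each with its own JVM-local singleton.
// All names are illustrative; this is not container-managed code.
class PeekPokeSingleton {
    private int value = -1; // same default as in the question

    void poke(int newValue) { this.value = newValue; }
    int peek()              { return this.value; }
}

class Node {
    // One instance per "JVM", mirroring @Singleton semantics per cluster member
    final PeekPokeSingleton singleton = new PeekPokeSingleton();
}

public class ClusteredSingletonDemo {
    public static void main(String[] args) {
        Node node1 = new Node();
        Node node2 = new Node();

        node1.singleton.poke(42);                   // poke handled by node 1
        System.out.println(node2.singleton.peek()); // peek handled by node 2: prints -1
    }
}
```

This is exactly the behavior observed in the question: the write landed in one JVM's instance, and the read hit a different JVM's instance.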

In WildFly, if you need only one instance of a singleton at cluster scope, you can use the SingletonService implementation. With a SingletonService, the target service is installed on every node in the cluster but is started on only one node at any given time.


UPDATE:

WildFly 10 adds the ability to deploy a given application as a "singleton deployment". This is a new implementation of a feature that existed in AS 6.0 and earlier. When deployed to a group of clustered servers, a singleton deployment will only deploy on a single node at any given time. If the node on which the deployment is active stops or fails, the deployment will automatically start on another node.
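For example (assuming the WildFly 10 descriptor mechanism; the file conventionally lives at META-INF/singleton-deployment.xml inside the deployment), marking an application as a singleton deployment is a one-line descriptor:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- META-INF/singleton-deployment.xml: marks this deployment as a
     singleton deployment, so it is active on only one cluster node
     at a time and fails over if that node stops. -->
<singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0"/>
```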

See: WildFly 10 Final is now available!

Question:

How is an application deployed when the server is clustered? I am not talking about Docker; I mean Domain mode in WildFly/JBoss, or a cluster in GlassFish or WebSphere.

Does it behave like one standalone application on each node/instance of the cluster? For example, if I have a @Singleton instance, will there be one for the whole cluster, or one per node and so one per running copy of the app?

Or is it treated as one application spread across the cluster under the hood? For example, the @Singleton lives on node X, and if I call it from node Y, the call goes over TCP or something (if not using an EJB client)?


Answer:

Does it behave like one standalone application on each node/instance of the cluster? If I have a @Singleton instance, will there be one for the whole cluster, or one per node and so one per running copy of the app?

Assuming that the cluster has been set up with two nodes/hosts (i.e., two JVMs), there will be two copies of the @Singleton objects (one in each JVM).

In general: singleton = single object (instance) per web application per JVM.

In other words, when you deploy multiple web applications inside the same server (JVM), you can have multiple objects/instances of the same class within that JVM (one per web application).

Or is it treated as one application spread across the cluster under the hood?

In general, a cluster is set up across two or more separate hosts (for resilience, load balancing, and similar reasons), i.e., the application is deployed into separate JVMs, so there will be a separate copy of the @Singleton objects in each JVM.

If I have nodes n1, n2, and n3, and on node n3 I create some bean with the singleton injected, like @Inject private SingleBean singleBean;, will it be the singleton from node n3 only, or is there some way that the singleton from n1 might be injected?

It will be the singleton from node n3 (its own JVM) only.

If you really want to share (for whatever reason) the singleton's state across JVMs, you can refer here for more details.
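One common approach is to keep the shared state in a replicated cache rather than in an instance field, so every node reads and writes the same replicated data. The sketch below is not runnable outside a container: the Infinispan cache name, container name, and JNDI lookup string are assumptions that must match a replicated cache configured in the server's infinispan subsystem.

```java
import javax.annotation.Resource;
import javax.ejb.Singleton;

// Sketch only: assumes a replicated Infinispan cache named "shared" in a
// cache container named "mycontainer" exists in the server configuration,
// and that the deployment declares a dependency on it. The JNDI name below
// is an assumption, not a standard location.
@Singleton
public class SharedValueBean {

    @Resource(lookup = "java:jboss/infinispan/cache/mycontainer/shared") // assumed name
    private org.infinispan.Cache<String, Integer> cache;

    public void poke(int newValue) {
        cache.put("value", newValue); // the write is replicated to all members
    }

    public int peek() {
        Integer v = cache.get("value");
        return (v != null) ? v : -1;
    }
}
```

With state externalized like this, each node still has its own bean instance, but peek() on any node observes the value poked on any other node.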

Question:

I developed a typical enterprise application that is responsible for provisioning customers to a 3rd-party system. This system has a limitation that only one thread can work on a certain customer. So we added a simple locking mechanism consisting of a @Singleton that contains a Set of customerIds currently in progress. Whenever a new provisioning request comes in, it first checks this Set. If the customerId is present, it waits; otherwise it adds it to the Set and goes into processing.
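The single-JVM mechanism described above can be sketched roughly like this (class and method names are illustrative; the wait/retry part is elided):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of the in-memory lock described above (illustrative names).
// It works only within one JVM, which is exactly why it breaks in a cluster.
public class CustomerLockSet {
    private final Set<String> inProgress = ConcurrentHashMap.newKeySet();

    /** Returns true if the caller acquired the lock, false if the id is busy. */
    public boolean tryAcquire(String customerId) {
        return inProgress.add(customerId); // atomic: add() returns false if already present
    }

    public void release(String customerId) {
        inProgress.remove(customerId);
    }
}
```

Because the Set lives in one JVM's heap, a second cluster node never sees the ids held by the first, which motivates the move to database locking below.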

Recently it was decided that this application will be deployed in a cluster, which means this locking approach is no longer valid. We came up with a solution to use the DB for locking. We created a table with a single column that contains customerIds (it also has a unique constraint). When a new provisioning request comes in, we start a transaction and try to lock the row with that customerId using SELECT FOR UPDATE (if the customerId does not exist yet, we insert it). After that we start provisioning the customer, and when finished we commit the transaction.

The concept works, but I have problems with transactions. Currently we have a class CustomerLock with add() and remove() methods that take care of adding and removing customerIds from the Set. I wanted to convert this class to a stateless EJB with bean-managed transactions. The add() method would start a transaction and lock the row, while the remove() method would commit the transaction and thus unlock the row. But it seems that the start and end of a transaction have to happen in the same method. Is there a way to use the approach I described, or do I have to modify the logic so the transaction starts and ends in the same method?

CustomerLock class:

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class CustomerLock {

    @Resource
    private UserTransaction tx;

    public void add(String customerId) throws Exception {
        try {
            tx.begin();
            dsApi.lock();
        } catch (Exception e) {
            throw e;
        }
    }

    public void remove(String customerId) throws Exception {
        try {
            tx.commit();
        } catch (Exception e) {
                throw e;
        }
    }
}

CustomerProvisioner class excerpt:

public abstract class CustomerProvisioner {

    ...

    public void execute(String customerId) {
        try {
            customerLock.add(customerId);

            // ... processing ...

            customerLock.remove(customerId);
        } catch (Exception e) {
            logger.error("Error", e);
        }
    }

    ...

}

StandardCustomerProvisioner class:

@Stateless
public class StandardCustomerProvisioner extends CustomerProvisioner {

    ...

    public void provision(String customerId) {
        // do some business logic
        super.execute(customerId);
    }
}

Answer:

As @Gimby noted, you should not mix container-managed and bean-managed transactions. Since your StandardCustomerProvisioner has no @TransactionManagement(TransactionManagementType.BEAN) annotation, it uses container-managed transactions, with the REQUIRED attribute by default.

You have two options to make it work:

1) Remove @TransactionManagement(TransactionManagementType.BEAN) along with the UserTransaction calls and rely on container-managed transactions.

2) Add @TransactionManagement(TransactionManagementType.BEAN) to StandardCustomerProvisioner and perform the transaction demarcation calls from provision(), so that all the invoked methods share the same transactional context. The demarcation calls in CustomerLock should be removed in either case.
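Option 2 could look roughly like the sketch below (container code, not runnable standalone). It assumes CustomerLock's own begin()/commit() calls have been removed, so the row lock taken inside execute() is held by the transaction begun here and released only at commit() or rollback().

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

// Sketch of option 2: the provisioner owns the transaction boundary, so the
// SELECT FOR UPDATE row lock acquired inside execute() spans the whole
// provisioning run and is released when the transaction ends.
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class StandardCustomerProvisioner extends CustomerProvisioner {

    @Resource
    private UserTransaction tx;

    public void provision(String customerId) throws Exception {
        // do some business logic
        tx.begin();
        try {
            super.execute(customerId); // lock row, provision, unlock: one transaction
            tx.commit();               // commit releases the DB row lock
        } catch (Exception e) {
            tx.rollback();             // rollback also releases the row lock
            throw e;
        }
    }
}
```

Note that a stateless BMT bean must complete its transaction before the method returns, which this sketch satisfies: both the commit and the rollback paths end the transaction inside provision().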