Hot questions for Using Cassandra in docker


I have created a Cassandra client written with Achilles object mapping in Java (using IntelliJ + Gradle). My client works fine locally in IntelliJ, but throws the following exception when deployed in a Docker container. I am currently stuck on it.

java.lang.NoClassDefFoundError: Could not initialize class io.netty.buffer.PooledByteBufAllocator
    at com.datastax.driver.core.NettyOptions.afterBootstrapInitialized(…)
    at com.datastax.driver.core.Connection$Factory.newBootstrap(…)
    at com.datastax.driver.core.Connection$Factory.access$100(…)
    at com.datastax.driver.core.Connection.initAsync(…)
    at com.datastax.driver.core.Connection$…
    at com.datastax.driver.core.ControlConnection.tryConnect(…)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(…)
    at com.datastax.driver.core.ControlConnection.connect(…)
    at com.datastax.driver.core.Cluster$Manager.negotiateProtocolVersionAndConnect(…)
    at com.datastax.driver.core.Cluster$Manager.init(…)
    at com.datastax.driver.core.Cluster.init(…)
    at com.datastax.driver.core.Cluster.connectAsync(…)
    at com.datastax.driver.core.Cluster.connectAsync(…)
    at com.datastax.driver.core.Cluster.connect(…)
    at java.util.Optional.orElseGet(…)
    at info.archinnov.achilles.configuration.ArgumentExtractor.initSession(…)
    at info.archinnov.achilles.configuration.ArgumentExtractor.initConfigContext(…)
    at info.archinnov.achilles.bootstrap.AbstractManagerFactoryBuilder.buildConfigContext(…)
    at …
    at com.ds.db.cassandra.AchillesClient.(…)
    at com.ds.message.RabbitMQMsgClient$…
    at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(…)

But the class io.netty.buffer.PooledByteBufAllocator, which is part of netty-buffer-4.0.56.Final.jar, is already on the classpath.

When I test locally from my IntelliJ IDE, everything works fine. Only after deployment do I face this issue in my Docker container.

The service is started in my docker container like this:

java -server -XX:HeapDumpPath=/opt/ds/srv/diagnostics/msgreader-1589749851-2s89z.heapdump -Xmx614m -Xms614m -XX:MaxMetaspaceSize=126M -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:CICompilerCount=4 -XX:MaxGCPauseMillis=1000 -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:OnOutOfMemoryError=kill -9 %p -Djava.library.path=/usr/local/lib -Dapp.dir=/opt/ds/sw/apps/msgreader -Dlog.dir=/opt/ds/var/log/msgreader -cp /opt/ds/sw/apps/javacontainer/resources:/opt/ds/sw/apps/msgreader/lib/*:/opt/ds/sw/apps/msgreader/resources:/opt/ds/sw/apps/javacontainer/lib/* com.ds.msg.Server start

In the above command you can see the -cp argument specifying the classpath, and that path contains netty-buffer-4.0.56.Final.jar.

I later found that netty-all-4.0.51.Final.jar is also on the classpath, and this jar contains the same class file. I even tried removing the jars in every possible combination, but I still face the same issue.

Even with multiple versions of a jar on the classpath, I would expect a NoSuchMethodError. Can anyone please help me understand the problem?
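As a side note, "Could not initialize class" is not the same as a missing class: it typically means the class's static initializer already failed once (for example because an incompatible dependency version won on the classpath), and every later reference then surfaces as NoClassDefFoundError. A minimal sketch of that JVM behaviour (class names here are made up for illustration):

```java
// Demonstrates why "Could not initialize class X" appears instead of
// ClassNotFoundException: once static initialization fails, the JVM marks
// the class erroneous and all later uses throw NoClassDefFoundError.
public class InitFailureDemo {

    static class Flaky {
        static {
            // Simulates an init-time failure, e.g. a missing method
            // because the wrong jar version was picked up.
            if (true) throw new RuntimeException("static init failed");
        }
    }

    // Returns the name of the error raised when Flaky is used.
    static String triggerInit() {
        try {
            new Flaky();
            return "no error";
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(triggerInit()); // first use: ExceptionInInitializerError
        System.out.println(triggerInit()); // later uses: NoClassDefFoundError
    }
}
```

The second error's message is the familiar "Could not initialize class …", even though the class file itself is perfectly present on the classpath.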


I have finally found the answer; the issue is what I guessed in my question: multiple versions of the same jar caused the failure. To track it down, I added the following to my Gradle file:

apply plugin: 'project-report'

And ran,

gradle htmlDependencyReport

This produces a useful HTML report of the dependency tree. You can also use the command below, but its output is hard to follow in a multi-module Gradle project:

gradle dependencies

In the HTML report, I found that the achilles-core module depends on netty-buffer-4.0.56.Final.jar while another module depends on netty-all-4.0.51.Final.jar. So when I added the following exclusion for Achilles in build.gradle, things started working:

compile(group: 'info.archinnov', name: 'achilles-core', version: '6.0.0') {
    exclude module: 'netty-buffer'
}

As netty-all-4.0.51.Final.jar already contained the classes required for Achilles object mapping, my project started working after deployment.

One more pitfall: even after removing the duplicate jar files from the running Docker container, the failure persisted, because (hard-)restarting the pod created a new pod, which pulled the same unmodified Docker image from the registry.

IntelliJ somehow resolves the classpath conflict when running locally :/
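To confirm which copy of a duplicated class actually won at runtime, you can ask the class for its code source. A small diagnostic sketch (the netty class name is taken from the stack trace above; any class works):

```java
import java.security.CodeSource;

// Prints the jar (or directory) a class was loaded from, which makes
// duplicate-jar conflicts like netty-all vs netty-buffer visible at runtime.
public class WhichJar {

    static String locate(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            // Classes loaded by the bootstrap loader report no code source.
            return (src != null) ? src.getLocation().toString()
                                 : "(bootstrap class loader)";
        } catch (ClassNotFoundException e) {
            return "not on classpath: " + className;
        }
    }

    public static void main(String[] args) {
        // In the failing container this would show whether the class came
        // from netty-buffer-4.0.56.Final.jar or netty-all-4.0.51.Final.jar.
        System.out.println(locate("io.netty.buffer.PooledByteBufAllocator"));
    }
}
```

Running this inside the container (with the same -cp as the service) pinpoints the winning jar without any trial-and-error jar removal.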


I have installed Docker Toolbox for Mac OS X and am running several containers inside it. The first two I created were Cassandra containers, and they were running fine. After that I created 2 Debian containers and connected to bash through the Docker terminal in order to install Oracle JDK 8.

At the point when I was about to extract Java from the tarball, I got a ton of "Cannot write: No space left on device" error messages during the execution of the tar command.

I've checked the space:

$ docker ps -s

CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                                         NAMES    SIZE
9d8029e21918   debian:latest   "/bin/bash"              54 minutes ago   Up 54 minutes                                                 deb-2    620.5 MB (virtual 744 MB)
49c7a0e37475   debian:latest   "/bin/bash"              55 minutes ago   Up 55 minutes                                                 deb-1    620 MB (virtual 743.5 MB)
66a17af83ca3   cassandra       "/docker-entrypoint.s"   4 hours ago      Up 4 hours      7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp   node-2   40.16 MB (virtual 412.6 MB)

After seeing that output I noticed that one of my Cassandra nodes was missing. I went to check in Kitematic and found out that it is in the DOWN state and I can't start it: "Cannot write node . No space left on device" is the error message shown for this attempt.

Are there any limits that Docker imposes on running containers?

When I remove all my Cassandra containers and leave just a couple of Debian ones, Java can be extracted from the tar. So the issue is definitely in some Docker setting related to sizing.

What is the correct way to resolve the issue with space limits here?


$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        VIRTUAL SIZE
cassandra    latest   13ea610e5c2b   11 hours ago   374.8 MB
debian       jessie   23cb15b0fcec   2 weeks ago    125.1 MB
debian       latest   23cb15b0fcec   2 weeks ago    125.1 MB

The output of df -hi

$ df -hi
Filesystem   Inodes   IUsed   IFree   IUse%   Mounted on
none         251K     38K     214K    15%     /
tmpfs        251K     18      251K    1%      /dev
tmpfs        251K     12      251K    1%      /sys/fs/cgroup
tmpfs        251K     38K     214K    15%     /etc/hosts
shm          251K     1       251K    1%      /dev/shm

$ df -h
Filesystem   Size    Used   Avail   Use%   Mounted on
none         1.8G    1.8G   0       100%   /
tmpfs        1002M   0      1002M   0%     /dev
tmpfs        1002M   0      1002M   0%     /sys/fs/cgroup
tmpfs        1.8G    1.8G   0       100%   /etc/hosts
shm          64M     0      64M     0%     /dev/shm

Appreciate help.


I have resolved this issue in docker somehow.

By default, the memory for the Docker machine is set to 2048 MB.

First step I performed is stopping my docker machine:

$ docker-machine stop default

Then I went to the $HOME/.docker/machine/machines/default/config.json file and changed the "Memory" setting to a higher value, e.g. 4096:

{
    "ConfigVersion": 3,
    "Driver": {
        "VBoxManager": {},
        "IPAddress": "",
        "MachineName": "default",
        "SSHUser": "docker",
        "SSHPort": 59177,
        "SSHKeyPath": "/Users/lenok/.docker/machine/machines/default/id_rsa",
        "StorePath": "/Users/lenok/.docker/machine",
        "SwarmMaster": false,
        "SwarmHost": "tcp://",
        "SwarmDiscovery": "",
        "CPU": 1,
        "Memory": 4096,
        "DiskSize": 204800,
        "Boot2DockerURL": "",
        "Boot2DockerImportVM": "",
        "HostDNSResolver": false,
        "HostOnlyCIDR": "",
        "HostOnlyNicType": "82540EM",
        "HostOnlyPromiscMode": "deny",
        "NoShare": false,
        "DNSProxy": false
    },
    "DriverName": "virtualbox",
    "HostOptions": {
        "Driver": "",
        "Memory": 0,
        "Disk": 0,
        "EngineOptions": {
            "ArbitraryFlags": [],
            "Dns": null,
            "GraphDir": "",
            "Env": [],
            "Ipv6": false,
            "InsecureRegistry": [],
            "Labels": [],
            "LogLevel": "",
            "StorageDriver": "",
            "SelinuxEnabled": false,
            "TlsVerify": true,
            "RegistryMirror": [],
            "InstallURL": ""
        },
        "SwarmOptions": {
            "IsSwarm": false,
            "Address": "",
            "Discovery": "",
            "Master": false,
            "Host": "tcp://",
            "Image": "swarm:latest",
            "Strategy": "spread",
            "Heartbeat": 0,
            "Overcommit": 0,
            "ArbitraryFlags": [],
            "Env": null
        },
        "AuthOptions": {
            "CertDir": "/Users/lenok/.docker/machine/certs",
            "CaCertPath": "/Users/lenok/.docker/machine/certs/ca.pem",
            "CaPrivateKeyPath": "/Users/lenok/.docker/machine/certs/ca-key.pem",
            "CaCertRemotePath": "",
            "ServerCertPath": "/Users/lenok/.docker/machine/machines/default/server.pem",
            "ServerKeyPath": "/Users/lenok/.docker/machine/machines/default/server-key.pem",
            "ClientKeyPath": "/Users/lenok/.docker/machine/certs/key.pem",
            "ServerCertRemotePath": "",
            "ServerKeyRemotePath": "",
            "ClientCertPath": "/Users/lenok/.docker/machine/certs/cert.pem",
            "ServerCertSANs": [],
            "StorePath": "/Users/lenok/.docker/machine/machines/default"
        }
    },
    "Name": "default"
}

Finally, started my docker machine again:

$ docker-machine start default


I am experimenting with Cassandra and Opscenter. In the Opscenterd's log file, I found this line

2015-07-29 16:10:16+0000 [] ERROR: Problem while calling CreateClusterConfController (SingleNodeProvisioningError): Due to a limitation with one-node clusters, OpsCenter will not be able to communicate with the DataStax Agent unless listen_address/broadcast_address in cassandra.yaml are set to … Please ensure these match before continuing.

Because I deployed Cassandra and OpsCenter in different Docker containers, I must set listen_address to the container's internal IP (Cassandra inside a container knows nothing about its host) and broadcast_address to the corresponding host's bridge IP. This is the normal setup when you deploy Cassandra on machines behind separate gateways (like AWS EC2, where each instance has a private and a public IP).

Question 1: What exactly is the limitation with one-node cluster?

Question 2: How should I workaround the problem in this case?



Question 1: What exactly is the limitation with one-node cluster?

OpsCenter (via the underlying Python driver) reads cluster information from Cassandra's system tables (namely, system.peers and system.local), with most of the information, including the broadcast interfaces of each node, coming from system.peers.

However, that table contains information only about a node's peers, not about the node itself. When there are no peers, there is no way to get the broadcast address from Cassandra itself, and that is what OpsCenter uses to tie actual Cassandra instances to its internal representation. In this case OpsCenter uses whatever address you specified as a seed ( here), and when the agents report with a different IP (they get Cassandra's broadcast address via JMX), OpsCenter discards those messages.
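That asymmetry is easy to see from cqlsh: system.peers lists every node except the one being queried, so on a one-node cluster the lookup comes back empty. A sketch (peer and rpc_address are standard columns of that table):

```
cqlsh> SELECT peer, rpc_address FROM system.peers;

 peer | rpc_address
------+-------------

(0 rows)
```

With zero rows returned, OpsCenter has no broadcast information to match the agent's report against.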

Question 2: How should I workaround the problem in this case?

Try setting local_address in address.yaml to, this should do the trick.


I'm using docker-compose to run 3 containers:

1. My web application
2. Postgres
3. Cassandra

Once I use: docker-compose up

My webapp throws this exception: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra/

Once all containers are running, I'm able to enter my webapp's container and ping the Cassandra container before it (the webapp container) dies, and all packets are successfully returned, so I guess there actually IS connectivity between them.

The weirdest thing is that I once got this exception instead: InvalidQueryException: Keyspace 'myKeyspace' does not exist

Which means a connection had been established. But that was before I added persistence and created the mentioned schema, and I changed nothing in my compose.yml to get this new result.

Here is my docker-compose.yml:

version: '3.1'

services:
  cassandra:
    container_name: "cassandra"
    image: cassandra
    ports:
      - 9042:9042
    volumes:
      - /home/cassandra:/var/lib/cassandra

  postgresql:
    container_name: "postgresql"
    image: postgres:11.1-alpine
    restart: always
    environment:
      POSTGRES_DB: mywebapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      #- ./startup.sql:/docker-entrypoint-initdb.d/startup.sql
      - postgresdata:/var/lib/postgresql/data
    ports:
      - 5432:5432

  mywebapp:
    container_name: "mywebapp"
    image: openjdk:10-jre-slim
    hostname: mywebapp
    volumes:
      - ./lib:/home/lib
      - ./mywebapp-1.0.1-SNAPSHOT-exec.jar:/home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    command:
      - java
      - -jar
      - -Djava.library.path=/home/lib
      - /home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    environment:
      - LD_LIBRARY_PATH=/home/lib
      - spring.datasource.url=jdbc:postgresql://postgresql:5432/mywebapp
      - spring.cassandra.contactpoints=cassandra
      - spring.cassandra.port=9042
      - spring.cassandra.keyspace=mywebapp
      #- spring.datasource.username=postgres
      #- spring.datasource.password=postgres
      #- spring.jpa.hibernate.ddlAuto=update+
    ports:
      - 8443:8443
      - 8080:8080
    links:
      - cassandra

volumes:
  postgresdata:

Thank you all in advance


I am assuming your web app requires the cassandra service to be running when it starts. You should add a depends_on entry to your web app service so Docker starts it only after cassandra has started.

And the links entry is not necessary, as Docker automatically makes service names resolvable as hostnames on the network created for this docker-compose project. The same goes for network_type: bridge: that is the default network type, so you can omit it in your case.
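A minimal sketch of that suggestion against the compose file above (the service name cassandra is assumed from its container_name):

```yaml
services:
  mywebapp:
    # ... image, volumes, environment as before ...
    depends_on:
      - cassandra
```

Note that depends_on only controls start order; it does not wait for Cassandra to actually accept connections, so the app may still need retry logic on its first connection attempt.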


I want to connect to Cassandra running in a container on a server. Can anyone please give me simple code for that?


Assuming you're trying to do this in Java, here is just one small example (there are lots of variants of this). FYI, this uses DSE. You'll need the Java driver as well to make this work.

  public void connect(String nodes, String username, String password, String keyspace) {

    // nodes may be a single host or a comma-separated list of hosts
    cluster = Cluster.builder()
        .addContactPoints(nodes.split(","))
        .withCredentials(username, password)
        .build();

    session = cluster.connect(keyspace);

    Metadata metadata = cluster.getMetadata();
    System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());

    for (Host host : metadata.getAllHosts()) {
      System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
          host.getDatacenter(), host.getAddress(), host.getRack());
    }
  }
The nodes parameter contains one or more hosts to connect to. It doesn't matter where they are located (physical servers, VMs, containers, etc.); those are the initial contact points (typically the seed nodes). Once connected, the entire cluster becomes known to the client application, which will open connections to all nodes.

Hope this helps you get started.