Hot questions for Using Ubuntu in docker

Question:

I just started to try out Docker to see if it helps set up our development environment, which consists of:

  1. Jdk 1.6
  2. Eclipse
  3. RabbitVCS
  4. Tomcat
  5. MySQL server

Our development desktops mostly run Ubuntu 16.04. I would have Eclipse and RabbitVCS installed on the host and the rest in a container.

If everything goes well, developers should be able to download a Docker image. Running that image should give them the JDK, Tomcat and the MySQL server. Developers can then use RabbitVCS to check out projects.

Is this feasible with Docker?


Answer:

tl;dr

It is not feasible to run Eclipse on the host OS while the JDK/JRE runs in a container, because Eclipse depends on a local JRE. Similarly, you cannot have Tomcat in one container and the JRE/JDK in another, because Tomcat needs a JRE in the same environment to run.

Explanation

I would have eclipse, RabbitVCS installed on host and the rest in container ... Running that image should give them JDK, Tomcat and MySQL server

Are you trying to use a JDK running in a Docker container (with the Eclipse IDE running on the host OS) for active development? If so, that is not feasible (arguably, you could do remote debugging; but remember, debugging is very different from active development), and it is not the intent of Docker. For development, your developers will need the JDK installed on their host machines. In your case, the only sensible thing to run from a container is MySQL, as there is no active development happening there.

Edit: To build a portable development environment

In Docker-land, one possible solution is to have Eclipse + JDK + Tomcat in the same Docker image and mount the X11 socket into the container, so that the Eclipse GUI running in the container is exposed to your host OS.

More reading here: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/

Or just ditch Docker and move to a full-blown VM like VirtualBox. You might be able to find pre-built images with Eclipse in them (not sure), or you can build one yourself after installing all the required packages on a base image and share it among your developers.

Question:

I am trying to solve this problem, but unfortunately I can't figure it out.

What am I trying to do?

I am trying to redirect a location to a Jersey 2 (JAX-RS 2) web app, which should handle requests as usual, with the difference that there are additional parts in the URL. The Java application with Jersey 2 runs on a Tomcat Docker image.


Let me try to explain it better like this:

This is how all my requests have worked:

[IP-address]:8888/myJersey2App/something/function

Recently I also added SSL, so my server now uses HTTPS.

Let's say the url for my website is:

http://my.example.com

So the request would be now like this:

https://my.example.com/myJersey2App/something/function

Nginx configuration:

events{
}
http{
    server{
        listen 80;
        listen [::]:80;
        server_name my.example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name my.example.com;

        ###
        # ssl configuration ...
        ###

        location / {
            proxy_pass http://localhost:8888;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }       
    }
}

This works, since Jersey 2 still sees /myJersey2App/something/function, the same path as in the code.

Web.xml configuration:

<servlet-mapping>
    <servlet-name>Jersey Web Application</servlet-name>
    <url-pattern>/myJersey2App/*</url-pattern>
</servlet-mapping>

This is an example of how Jersey 2 handles requests:

@Path("something")
public class JavaClass {
    @GET
    @Path("/function")
    @Produces(MediaType.TEXT_PLAIN)
    public Response myFunction() {
        return Response.ok().entity("MyFunction!").build();
    }
}
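The effective request path is the servlet url-pattern (minus the trailing /*) followed by the class- and method-level @Path values. As a plain-Java illustration of that composition (the effectivePath helper is hypothetical, not part of Jersey):

```java
public class PathConcat {
    // Illustrative only: how the effective Jersey path is composed from
    // the servlet url-pattern (/myJersey2App/*), the class-level @Path
    // ("something") and the method-level @Path ("/function").
    static String effectivePath(String servletBase, String classPath, String methodPath) {
        String p = servletBase.replaceAll("/\\*$", "");
        for (String part : new String[] { classPath, methodPath }) {
            part = part.replaceAll("^/+", "");  // normalize leading slashes
            p = p + "/" + part;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(effectivePath("/myJersey2App/*", "something", "/function"));
    }
}
```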

What do I want?

Right now, I have to start multiple Docker containers with the same application, each on a different port and URL.

This is how it should be requested then:

Runs on port 7777:

https://my.example.com/docker1/myJersey2App/something/function

Runs on port 8888:

https://my.example.com/docker2/myJersey2App/something/function

Runs on port 9999:

https://my.example.com/docker3/myJersey2App/something/function

It is actually the same application, the only difference being that dockerX is part of the URL.

This is how I thought the Nginx configuration would be:

...
    server {
        listen 443 ssl http2;
        server_name my.example.com;

        ###
        # ssl configuration ...
        ###

        location /docker1 {
            proxy_pass http://localhost:7777;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /docker2 {
            proxy_pass http://localhost:8888;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /docker3 {
            proxy_pass http://localhost:9999;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
...

Unfortunately, my Jersey 2 application no longer recognizes the URL, since dockerX is inside it; of course, dockerX is not mentioned in the code (and should not be!).


Is it possible to configure Nginx so that my Jersey application can still recognize the path, even if the URL looks like this:

https://my.example.com/docker1/abc/def/ghi/myJersey2App/something/function

My Jersey application should still recognize that the relevant part starts with /myJersey2App/.

I was thinking of something like: the user still sees the dockerX URL, but in the background it is treated like this:

http://localhost:8888/myJersey2App/something/function

So, user sees this:

https://my.example.com/docker2/myJersey2App/something/function

But behind it is treated like this:

http://localhost:8888/myJersey2App/something/function

Is it possible to configure Nginx, so that it works like this?

I hope you can help me with that problem.


Answer:

I just figured it out myself.

This is how the Nginx configuration should be:

...
    location /docker1 {

        rewrite /docker1/(.*) /$1  break;

        proxy_pass http://localhost:7777;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /docker2 {

        rewrite /docker2/(.*) /$1  break;

        proxy_pass http://localhost:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /docker3 {

        rewrite /docker3/(.*) /$1  break;

        proxy_pass http://localhost:9999;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
...

When you make a request, you still see the dockerX part of the URL, but behind the scenes it is treated as if that part doesn't exist, because the rewrite removes it. The Jersey application can therefore work normally, without any changes to the actual Jersey 2 code.
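The effect of the rewrite rule can be sketched in plain Java (the regex mirrors the /docker2 rule above; stripPrefix is just an illustrative helper, not something nginx or Jersey provides):

```java
public class RewriteDemo {
    // Mirrors the nginx rule: rewrite /docker2/(.*) /$1 break;
    static String stripPrefix(String path) {
        return path.replaceFirst("^/docker2/(.*)", "/$1");
    }

    public static void main(String[] args) {
        // The path the user requests:
        String incoming = "/docker2/myJersey2App/something/function";
        // The path the upstream Tomcat/Jersey app actually sees:
        System.out.println(stripPrefix(incoming));
    }
}
```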

Question:

I have the following Testcontainers (version 1.12.2) setup prepared to test the Liquibase schema in our app. When I try to execute it, I receive a Connection Refused error as if the database did not exist; however, during the test run I checked the containers and they are up:

private static final String DB_URL = String.format("jdbc:postgresql://%s:%s/%s", "localhost", 5432, DB_NAME);

    @Rule
    public final GenericContainer<?> container = 
            new GenericContainer<>("mdillon/postgis:9.5")
                .withExposedPorts(5432)
                .withEnv("POSTGRES_USER", USER)
                .withEnv("POSTGRES_PASSWORD", PASSWORD)
                .withEnv("POSTGRES_DB", DB_NAME);

    @Test
    public void transactionSchemaWasUpdated() throws Exception {
        try (Connection connection = DriverManager.getConnection(DB_URL, USER, PASSWORD)) {
            // GIVEN
            Database database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(new JdbcConnection(connection));
            database.setDefaultSchemaName(SCHEMA);
            Liquibase liquibase = new Liquibase("install.xml", new ClassLoaderResourceAccessor(), database);
            liquibase.setChangeLogParameter("schemaName", SCHEMA);
            // WHEN
            liquibase.update("main");
            // THEN
            assertEquals(getAppVersion(), getDbVersion(connection));
        }
    }

The docker ps output during the test run:

378e828e4149        mdillon/postgis:9.5                 "docker-entrypoint.s…"   7 seconds ago       Up 6 seconds        0.0.0.0:32784->5432/tcp   thirsty_stonebraker
6a270c963322        quay.io/testcontainers/ryuk:0.2.3   "/app"                   8 seconds ago       Up 7 seconds        0.0.0.0:32783->8080/tcp   testcontainers-ryuk-78a4fc8d-4fb9-41bf-995f-b31076b02465

Error:

org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

Answer:

When a port is exposed in Testcontainers, the host does not actually use the same port, but a different, randomly mapped one. According to the docs:

From the host's perspective Testcontainers actually exposes this on a random free port. This is by design, to avoid port collisions that may arise with locally running software or in between parallel test runs.

You need to ask the container for the mapped port:

Integer actualPostgresPort = container.getMappedPort(5432);

If you analyze the output of docker ps, you will see that it is not port 5432 that is exposed on the host, but 32784 instead.
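In other words, the JDBC URL has to be built after the container has started, from the mapped port rather than the fixed 5432. A minimal sketch of the URL construction (values hard-coded here for illustration; in the real test they would come from container.getContainerIpAddress() and container.getMappedPort(5432)):

```java
public class JdbcUrlDemo {
    // Build the JDBC URL from the host and the container's mapped port.
    static String jdbcUrl(String host, int mappedPort, String dbName) {
        return String.format("jdbc:postgresql://%s:%d/%s", host, mappedPort, dbName);
    }

    public static void main(String[] args) {
        // Illustrative values; use the container's actual host and mapped port.
        System.out.println(jdbcUrl("localhost", 32784, "mydb"));
    }
}
```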

Question:

I want to create and execute performance tests against the application. For now, my idea was to use a multi-stage build: the first stage builds the application; the second sets up the performance tests and starts both the application and the tests in the same container.

My Dockerfile looks like this:

# build stage
FROM gradle:jdk11
ARG ARTIFACT_PATH=json-comparison-application/build/libs
ARG ARTIFACT_NEW=app.jar
ARG ARTIFACT_OLD=json-comparison-application-0.0.1-SNAPSHOT.jar
RUN echo ${ARTIFACT_PATH}
RUN apt-get install git && git clone https://github.com/xxxx/json-comparison.git
WORKDIR json-comparison
RUN chmod +x gradlew && \
    ./gradlew clean build -x pmdMain -x spotbugsMain -x checkstyleMain --no-daemon && \
    cd ${ARTIFACT_PATH} && mv ${ARTIFACT_OLD} ${ARTIFACT_NEW}

# performance test stage
FROM ubuntu:18.04
# simplified adoptopenjdk/11 without CMD entry point, probably better move to some external dockerfile
ARG ESUM='6dd0c9c8a740e6c19149e98034fba8e368fd9aa16ab417aa636854d40db1a161'
ARG BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.5%2B10/OpenJDK11U-jdk_x64_linux_hotspot_11.0.5_10.tar.gz'
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8'
ENV JAVA_VERSION jdk-11.0.5+10
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates fontconfig locales \
    && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
    && locale-gen en_US.UTF-8 \
    && rm -rf /var/lib/apt/lists/*
RUN set -eux; \
    curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
    echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
    mkdir -p /opt/java/openjdk; \
    cd /opt/java/openjdk; \
    tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
    rm -rf /tmp/openjdk.tar.gz;
ENV JAVA_HOME=/opt/java/openjdk \
    PATH="/opt/java/openjdk/bin:$PATH"
# custom part of the stage
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python default-jre-headless python-tk python-pip python-dev \
                       libxml2-dev libxslt-dev zlib1g-dev net-tools && \
    pip install bzt
WORKDIR /home/tmp
ARG ARTIFACT_PATH=json-comparison/json-comparison-application/build/libs
ARG ARTIFACT_NEW=app.jar
RUN echo ${ARTIFACT_PATH}
COPY --from=0 /${ARTIFACT_PATH}/${ARTIFACT_NEW} .
# prototype for test
CMD ["bzt", "quick_test.yml"]

But the build fails with this error:

Step 23/24 : COPY --from=0 /${ARTIFACT_PATH}/${ARTIFACT_NEW} .
COPY failed: stat /var/lib/docker/overlay2/f227e7b77fba105ba0769aa355458900d202add59c98583f0fd0936cbf4dfc11/merged/json-comparison/json-comparison-application/build/libs/app.jar: no such file or directory

What is the problem?


Answer:

At first sight the problem comes from the (absolute vs. relative) paths of the .jar in your two stages.

Can you verify that they are the same?

For example, you may replace the last command of the first stage with

RUN […] && \
  cd ${ARTIFACT_PATH} && mv ${ARTIFACT_OLD} ${ARTIFACT_NEW} && readlink -f ${ARTIFACT_NEW}

and relaunch the build. If you don't obtain

/json-comparison/json-comparison-application/build/libs/app.jar

but a longer path, then you'll know the correct path to put in ARTIFACT_PATH within the second stage…

Alternatively, you could just get rid of the relative path in the first stage and replace WORKDIR json-comparison with:

WORKDIR /json-comparison

As an aside, it can be useful to name your build stages in the following way:

FROM gradle:jdk11 as builder
[…]

FROM ubuntu:18.04 as test
[…]

This then allows you to build only the first stage by running:

$ docker build --target builder -t $IMAGE_NAME .

Question:

I am receiving a java.net.UnknownHostException: postgres-service on a machine where I can ping postgres-service from the command line. This is in the context of Kubernetes (more specifically GKE) services and Docker images. Could it be that Java requires additional packages (compared to ping) before it can resolve host names such as postgres-service? I now guess the answer is no, and that the problem lies with resolving postgres-service via kube-dns in this particular situation (see UPDATE).

UPDATE The evidence (including the stacktrace below) suggests that the exception is triggered when Tomcat 9 tries to set up a JDBC realm with connectionURL="jdbc:postgresql://postgres-service/mydb". The URL is configured in the context descriptor of a web app, which runs inside a Docker image derived from tomcat:9. The context descriptor is generated by a script configured as the image's ENTRYPOINT, which also starts Tomcat (just like the original tomcat:9 does), i.e. the last few lines of the Dockerfile look as follows:

COPY tomcat-entrypoint.sh /
ENTRYPOINT [ "/tomcat-entrypoint.sh" ]
CMD ["catalina.sh", "run"]

I can ping postgres-service after entering a shell with kubectl exec -it <image> bash. Could it be that Tomcat (when run as the image's "single process" with PID 1 by way of the Dockerfile's CMD) sees a different DNS configuration than a bash shell running as its sibling? The actual DNS configuration employs kube-dns, as is apparent from /etc/resolv.conf.

org.postgresql.util.PSQLException: The connection attempt failed.
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:280)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
    at org.postgresql.Driver.makeConnection(Driver.java:407)
    at org.postgresql.Driver.connect(Driver.java:275)
    at org.apache.catalina.realm.JDBCRealm.open(JDBCRealm.java:661)
    at org.apache.catalina.realm.JDBCRealm.startInternal(JDBCRealm.java:724)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5054)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:724)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:700)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:596)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1805)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: postgres-service
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.postgresql.core.PGStream.<init>(PGStream.java:64)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:150)
    ... 19 more

Answer:

I had been using a VM instance without the compute-rw scope for development so far (see here). I've now recreated it including that scope and rebuilt all relevant Docker images there. Apparently this has resolved the issue.

UPDATE There was also a second issue in that I had clusterIP: None as part of the service specification of postgres-service (now gone). It beats me why I was still able to ping postgres-service from another pod in the same cluster.
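When debugging name-resolution issues like this, one quick sanity check is to resolve the name from the JVM itself, independently of ping. A minimal sketch (DnsCheck is a hypothetical one-off class; pass the service name as an argument):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Resolve a host name the same way the JDBC driver ultimately would;
    // returns null if the JVM cannot resolve it.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        String addr = resolve(host);
        System.out.println(addr != null ? host + " -> " + addr
                                        : "Cannot resolve: " + host);
    }
}
```

Running it inside the pod (e.g. java DnsCheck postgres-service) shows whether the JVM gets the same DNS answer as ping does.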

Question:


Answer:

The word "ubuntu" is spelled incorrectly.

The correct command:

docker run -i -t ubuntu /bin/bash