Hot questions: Java memory issues on Ubuntu

Question:

When I try to start Elasticsearch on my Ubuntu machine, the startup script gives me the following error:

Java HotSpot(TM) Client VM warning: INFO: os::commit_memory(0x74800000, 201326592, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 201326592 bytes for committing reserved memory.

I have already searched for this and couldn't find a solution. If I restart the machine, everything works well for about a day; then Elasticsearch goes down and this error appears again.

I have already set the property bootstrap.mlockall: true in the elasticsearch.yml file, and also these properties in the /etc/default/elasticsearch file:

ES_HEAP_SIZE=512 (I have 1 GB of RAM)
MAX_LOCKED_MEMORY=unlimited

Does anyone know what I need to do?

Thanks


Answer:

You have configured a virtual machine with 1 GB of RAM, but Elasticsearch is trying to start with a 2 GB heap (the default for Elasticsearch 5.x).

Either give your VM more memory, or edit the Elasticsearch JVM settings in /etc/elasticsearch/jvm.options and lower the heap parameters, for example to -Xms512m and -Xmx512m.
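
A minimal sketch of what the relevant lines in /etc/elasticsearch/jvm.options would look like after the change (assuming the standard package layout; 512 MB is simply a value that fits a 1 GB host):

# JVM heap size: initial and maximum should be set to the same value
-Xms512m
-Xmx512m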

Question:

I was developing an app and had to modify my eclipse.ini, so I want to know the purpose and meaning of the parameters XXMaxPermSize, vmargs, Xms and Xmx, in order to use them correctly. I am using Eclipse 3.8 on Ubuntu 14.04, with Java 7.

--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vmargs
-Xms40m
-Xmx384m
-Dorg.eclipse.equinox.p2.reconciler.dropins.directory=/usr/share/eclipse/dropins

Answer:

Like Greg says, everything after -vmargs is a VM argument supplied to the JVM when the application starts. -Xmx is the maximum heap size, -Xms is the initial heap size, and --launcher.XXMaxPermSize is an argument to the Eclipse launcher executable that increases the size of the PermGen space. That last argument only really matters before Java 8, as PermGen was eliminated in Java 8.
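
For instance, to give Eclipse a larger heap you would change only the lines after -vmargs in eclipse.ini; the values below are just an illustration, so pick sizes that fit your machine:

-vmargs
-Xms256m
-Xmx1024m
-Dorg.eclipse.equinox.p2.reconciler.dropins.directory=/usr/share/eclipse/dropins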

Question:

I am using Android Studio 3.1.3 (the latest build as of writing this) with Gradle 3.1.3.

I don't know if it matters or not, but I have recently upgraded to Ubuntu 18.04.

Whenever I start Android Studio, it starts with a fairly small memory footprint (a single process named java takes around 1 GB of RAM).

When I start a build, one more java process starts running, taking around 500 MB of RAM. That is still no problem, as I have 8 GB of RAM.

After using Studio for an hour or two (including a number of builds, since I test on a real device), the computer suddenly freezes and there are three java processes taking up almost 5 GB of RAM (approximately 2.3, 1.5 and 1.2 GB each). Those processes will not release memory even when Studio is sitting idle. I have to exit Studio and restart it to make the problem go away.

Here is a screenshot of my system monitor window.

And below is the description of each process.

Is anyone else facing this issue? On Ubuntu 16.04 with an older Android Studio this was not a problem. Does Ubuntu have anything to do with it?


Answer:

I tried many things from many places on the internet, but nothing seemed to work.

So I downgraded back to Ubuntu 16.04 and the issue is no longer happening. Maybe it was some issue with my setup on 18.04 (which I doubt, because I re-set everything up from scratch twice and the issue still persisted), or it might be a problem with 18.04 itself (not blaming!).

The thing is, I did not face this RAM overflow problem only in Android Studio, but in my IntelliJ and TeamCity setups as well. Somehow many instances of Java kept running (sometimes over 10 JVM instances, a couple of which took 2 GB each even after the build and everything else had finished).

Hope it helps someone!

Question:

I am trying to run the Stanford parser on Ubuntu from Python code. The text file I am trying to parse is 500 MB, and I have 32 GB of RAM. I am increasing the JVM heap size, but I don't know whether it is actually increasing or not, because I get this error every time. Please help me out.

***  WARNING!! OUT OF MEMORY! THERE WAS NOT ENOUGH  ***
***  MEMORY TO RUN ALL PARSERS.  EITHER GIVE THE    ***
***  JVM MORE MEMORY, SET THE MAXIMUM SENTENCE      ***
***  LENGTH WITH -maxLength, OR PERHAPS YOU ARE     ***
***  HAPPY TO HAVE THE PARSER FALL BACK TO USING    ***
***  A SIMPLER PARSER FOR VERY LONG SENTENCES.      ***
Sentence has no parse using PCFG grammar (or no PCFG fallback).  Skipping...
Exception in thread "main" edu.stanford.nlp.parser.common.NoSuchParseException
    at edu.stanford.nlp.parser.lexparser.LexicalizedParserQuery.getBestParse(LexicalizedParserQuery.java:398)
    at edu.stanford.nlp.parser.lexparser.LexicalizedParserQuery.getBestParse(LexicalizedParserQuery.java:370)
    at edu.stanford.nlp.parser.lexparser.ParseFiles.processResults(ParseFiles.java:271)
    at edu.stanford.nlp.parser.lexparser.ParseFiles.parseFiles(ParseFiles.java:215)
    at edu.stanford.nlp.parser.lexparser.ParseFiles.parseFiles(ParseFiles.java:74)
    at edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(LexicalizedParser.java:1513)

Answer:

You should divide the text file into small pieces and give them to the parser one at a time. Since the parser creates an in-memory representation for a whole "document" it is given at a time (which is orders of magnitude bigger than the document on disk), it is a very bad idea to try to give it a 500 MB document in one gulp.
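
As a rough sketch, on Ubuntu you could break the file into pieces with the split utility before parsing (the chunk size and file names here are arbitrary):

split -l 10000 input.txt chunk_

This writes files chunk_aa, chunk_ab, ... of 10,000 lines each, which you can then feed to the parser one at a time.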

You should also avoid super-long "sentences", which can easily occur if casual or web-scraped text lacks sentence delimiters, or you are feeding it big tables or gibberish. The safest way to avoid this issue is to set a parameter limiting the maximum sentence length, such as -maxLength 100.
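
For illustration, an invocation combining a bigger heap with a sentence-length cap might look like the following (the classpath, model path and input file name are assumptions; adjust them to your installation):

java -Xmx6g -cp "stanford-parser.jar:stanford-parser-models.jar" edu.stanford.nlp.parser.lexparser.LexicalizedParser -maxLength 100 edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz chunk_aa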

You might want to try out the neural network dependency parser, which scales better to large tasks: http://nlp.stanford.edu/software/nndep.shtml.

Question:

I have an application with a number of microservices and I'm trying to understand if Docker provides any memory advantages. My services are Java 7/Tomcat 7. Let's say I have 20 of them; is there any advantage for me to run Docker on top of an AWS EC2 Ubuntu 12.04 VM? I understand the value of run-anywhere for developer workstations, etc.; my primary question/concern is about the VM memory footprint. If I run each of these 20 services in their own container, with their own Tomcat, my assumption is that I'll need 20x the memory overhead for Tomcat, right? If this is true, I'm trying to decide if Docker is of value or is more overhead than it's worth. It seems like Docker's best value proposition is on top of a native OS, not as much in a VM; is there a different approach besides EC2 VM on AWS where Docker is best?

I'm curious how others would handle this situation or if Docker is even a solution in this space. Thanks for any insight you can provide.


Answer:

No, there's no memory advantage over running 20 Tomcat processes directly on the host. The Docker daemon and the ancillary processes for publishing ports will in fact consume some extra memory.

Docker's advantage is over 20 VMs, which would consume vastly more memory. It provides more isolation than plain processes: each container sees its own filesystem, network interfaces, and process space. Docker also provides advantages for packaging and shipping software.
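
If the concern is one runaway service starving the other 19, you can also cap each container's memory individually with the --memory flag (the image name and limit below are purely illustrative):

docker run -d --name billing-service --memory 512m my-tomcat-service:latest

The JVM inside the container still needs its own -Xmx set below that cap, otherwise the container may be killed when the heap grows.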

Question:

I am using Ubuntu 12.04 with Jenkins as a continuous integration server and have SonarQube installed and set up. Sonar worked for a long time, but recently the service shuts down immediately after being started. I am using Sonar 5.3, and my server has 4 GB of RAM, of which usually only 1-2 GB are occupied.

The sonar.log says:

WrapperSimpleApp: Encountered an error running main: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.String.<init>(String.java:426)
at java.lang.String.<init>(String.java:491)
at java.io.UnixFileSystem.list(Native Method)
at java.io.File.list(File.java:1122)
at java.io.File.listFiles(File.java:1207)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1645)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteQuietly(FileUtils.java:1566)
at org.sonar.application.PropsBuilder.initTempDir(PropsBuilder.java:102)
at org.sonar.application.PropsBuilder.build(PropsBuilder.java:73)
at org.sonar.application.App.main(App.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:240)
at java.lang.Thread.run(Thread.java:745)
<-- Wrapper Stopped

I googled for solutions, but they all suggest increasing the heap size when starting the .jar file via the command line; the file that defines the sonar service in /etc/init.d doesn't even run a .jar file.


Answer:

This looks like SONAR-7125, fixed in SonarQube 5.4. The workaround is to delete the temp folder manually before restarting.
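
For example, assuming SonarQube lives under /opt/sonarqube and is registered as the "sonar" service (both assumptions about this particular install), the workaround would look like:

sudo service sonar stop
sudo rm -rf /opt/sonarqube/temp
sudo service sonar start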

Question:

My Jenkins is running on an Ubuntu server instance. At the end of the build, when a Checkmarx report is being generated, I get a Java heap space error as shown in the screenshot:

Can someone help me increase the Java heap space for Checkmarx?

An account seems to be necessary to read the Atlassian KB article "Scan Fails with Java Heap Space Exception".


Answer:

Read more about what an OutOfMemoryError is here. Jenkins itself runs as a Java process, and if your Jenkins job is also a Java process, either of them could run into an out-of-memory error.

From the log, it looks like your job is the one running into the error, so also read "How to set a JVM option in Jenkins globally for every job?".

Edit: If your Jenkins process itself is running into an OutOfMemoryError, then refer to "Increase heap size in Java" for how to increase the JVM heap size for Java processes.

Normally -Xmx is used to specify the maximum heap size for a Java process; in my example, -Xmx2048M sets it to 2048 MB. Depending on your configuration, you specify this value in different ways.
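
For a Jenkins controller installed from the Ubuntu/Debian package, for instance, the option usually goes into the JAVA_ARGS line of /etc/default/jenkins (file name and variable assumed from the standard package layout), followed by a Jenkins restart:

JAVA_ARGS="-Djava.awt.headless=true -Xmx2048m"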

Question:

I'm trying to increase the JVM heap space on an Ubuntu system. When I run the command

java -Xmx2000m

The output it gives is LITERALLY the same as what you get when you just type

java

on the command line, i.e. it prints a description of how to use the java CLI but doesn't acknowledge that I told it to do something. It doesn't give me a failure message either, yet when I run

java -XshowSettings

it shows that the maximum heap size has not been changed.

How can I get the java CLI to stop acting like a politician in a TV interview and start responding to what I tell it to do, so I know what to change about what I'm doing?


Answer:

You need to pass both parameters in a single invocation, as they are scoped to the JVM created by that run:

java -Xmx2000m -XshowSettings

Output (cut to the bare minimum for this answer)

VM settings:
    Max. Heap Size: 1.95G

If you need your memory parameters set as a global default, you can use the _JAVA_OPTIONS environment variable.

For instance, in a bash shell on Ubuntu:

export _JAVA_OPTIONS=-Xmx2000m

Question:

I have a Linode with 4 GB of RAM, on which I installed Apache Solr and Java 8, and I'm currently running a couple of jars in the background. I haven't used Solr yet; I only installed it and left it running. After a day or so I went to Solr's admin page on my server and saw this: Solr says 98% of memory is used.

But when I type free -m in the console, less than 32% is used!

And when I use the top command, the total memory usage is also less than 32%!


Answer:

Solr apparently includes "buff/cache" in that number. If you add it to "used", you get 98%.

That memory is reclaimable, so you shouldn't be worried. The operating system is just using as much RAM as possible for the disk cache.

Question:

I've got two computers, one running Mac OS X El Capitan and the other Ubuntu 16.04 LTS. Java SDK 1.8.0_101 is installed on both.

When I try to start a game server on Ubuntu with more memory than is available, I get the following output:

$ java -Xms200G -Xmx200G -jar craftbukkit.jar nogui
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f9b6e580000, 71582613504, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 71582613504 bytes for committing reserved memory.
# An error report file with more information is saved as:
# 

On Mac OS I don't get this error:

$ java -Xms200G -Xmx200G -jar craftbukkit.jar nogui
Loading libraries, please wait...

Both computers have 8 GB of memory. I also tried with another Apple computer - same problem.

Is that a bug in the Mac version of Java?

(Is there a way to force-limit the memory usage, e.g. to 1 GB? -Xmx1G doesn't work, and I wasn't able to find an explanation, neither here nor on Google.)

Thank you!

Sorry for my bad English...


Answer:

It's a difference in how the operating systems work. Linux has a concept of a 'fixed' swap - this is based on physical RAM + the various swapfiles/swap partitions added to the system. This is considered the maximum limit of memory that can be committed.

OSX doesn't consider swap as fixed. It will continue to add swapfiles as more and more memory is committed on the operating system (you can see the files being added in /var/vm).

As a result, you can ask OSX for significantly more memory than is available and it will effectively reply 'ok', while under Linux it will say 'no'.

The upper-bound limit is still enforced by Java - once the heap grows above the specified size, it will throw a java.lang.OutOfMemoryError: Java heap space exception, so if you specify -Xmx1G it will be enforced by the JRE.

You can see the difference with a simple test program:

import java.util.Vector;

public class memtest {

    public static void main(String args[]) throws Exception {
        Vector<byte[]> v = new Vector<byte[]>();
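        // Allocate 128 KB blocks forever while keeping every reference alive,
        // so the heap fills up until the JVM hits the configured -Xmx limit.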
        while (true) {
            v.add(new byte[128 * 1024]);
            System.out.println(v.size());
        }

    }
};

If this program is run with -Xmx100M, it dies with a Java heap space message after ~730 iterations; when run with -Xmx1G, it dies with the same message after ~7300 iterations, showing that the limit is being enforced by the Java virtual machine.

Question:

I'm using Hadoop in my application, but just before the program exits I get the error java.lang.OutOfMemoryError: Java heap space. I have already modified mapred-site.xml and added this property to it:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>

but the exception still appears. I used this command in the terminal: java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize' and this was the result:

uintx AdaptivePermSizeWeight                    = 20              {product}           
     intx CompilerThreadStackSize                   = 0               {pd product}        
    uintx ErgoHeapSizeLimit                         = 0               {product}           
    uintx HeapSizePerGCThread                       = 87241520        {product}           
    uintx InitialHeapSize                          := 1054841728      {product}           
    uintx LargePageHeapSizeThreshold                = 134217728       {product}           
    uintx MaxHeapSize                              := 16877879296     {product}           
    uintx MaxPermSize                               = 174063616       {pd product}        
    uintx PermSize                                  = 21757952        {pd product}        
     intx ThreadStackSize                           = 1024            {pd product}        
     intx VMThreadStackSize                         = 1024            {pd product}        
java version "1.6.0_31"
OpenJDK Runtime Environment (IcedTea6 1.13.3) (6b31-1.13.3-1ubuntu1~0.12.04.2)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)

Could anyone please advise how to fix this issue?


Answer:

Your problem is a memory leak rather than an undersized heap: you have already given the JVM a large heap and it still fills up.

You should review your code to see what is causing the leak. Usually it is caused by objects that are kept reachable (for example in static collections or caches), so the GC is unable to remove them from memory.
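
One way to investigate (a sketch; the dump path is just an example) is to have the JVM write a heap dump when the error occurs and then open it in a tool such as Eclipse MAT. For the Hadoop child tasks, that means extending the same property you already set:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx4096m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp</value>
</property>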

Question:

I'm working with the ILOG CPLEX library in Java to solve an ILP problem. I'm using the default settings and did not adjust any parameters. I used example code which I found online in the samples for my main loop:

if (cplex.solve()) {
    Log.printLine("CPLEX solved successfully");
} else {
    Log.printLine("probably insufficient memory or some other weird problem.");
}

I launched my jar on an Ubuntu 14 system with 24 GB of RAM and let it solve larger problems. When my problem becomes too big to solve with 24 GB of RAM, I expect CPLEX to return false from the solve method. Instead, CPLEX keeps running endlessly until the kernel kills the process. I verified this by checking kern.log:

Nov  6 00:21:47 node0 kernel: [3779722.641458] Out of memory: Kill process 3600 (java) score 980 or sacrifice child
Nov  6 00:21:47 node0 kernel: [3779722.641476] Killed process 3600 (java) total-vm:36562768kB, anon-rss:23969732kB, file-rss:688kB

This is my first time working with CPLEX, and I was wondering how I can make CPLEX return false from the solve method when it runs out of memory to work with (rather than starving the system of resources)?

I tried looking this up online and found some C++ threads about the WorkMem and TreeLimit parameters, but I am unable to find out how to configure these with the Java library.

Is anyone able to help me out further please? Thanks.

EDIT: Here is the CPLEX log

Found incumbent of value 5000.000000 after 0.09 sec. (48.51 ticks)
Tried aggregator 1 time.
MIP Presolve eliminated 600000 rows and 1 columns.
MIP Presolve modified 156010 coefficients.
Reduced MIP has 171010 rows, 770000 columns, and 3170000 nonzeros.
Reduced MIP has 770000 binaries, 0 generals, 0 SOSs, and 0 indicators.
Presolve time = 5.54 sec. (2155.22 ticks)
Probing time = 5.51 sec. (186.83 ticks)
Tried aggregator 1 time.
Reduced MIP has 171010 rows, 770000 columns, and 3170000 nonzeros.
Reduced MIP has 770000 binaries, 0 generals, 0 SOSs, and 0 indicators.
Presolve time = 3.68 sec. (1438.46 ticks)
Probing time = 3.45 sec. (181.50 ticks)
Clique table members: 263821.
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 4 threads.
Root relaxation solution time = 43.34 sec. (14019.88 ticks)

Nodes                                         Cuts/
Node  Left     Objective  IInf  Best Integer    Best Bound    ItCnt     Gap

0+    0                         5000.0000        0.0000           100.00%
0     0     4547.0452 14891     5000.0000     4547.0452       20    9.06%
0     0     4568.6089 12066     5000.0000    Cuts: 6990   318432    8.63%

It goes on until the kernel kills it.


Answer:

To change the WorkMem parameter, you'd do something like this:

IloCplex cplex = new IloCplex();
cplex.setParam(IloCplex.Param.WorkMem, 2048);

See the documentation for TreeLimit and MIP.Strategy.File. While looking into this, I spotted a minor problem in the TreeLimit documentation: it mentions 128 MB there (the old default value of WorkMem), but it should be 2048 MB instead. This is being fixed.
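
As a sketch of the Java equivalents (the parameter identifiers below are assumed from the 12.6.x Java API, so check the javadoc for your version):

// cap the branch-and-bound tree at 8 GB and have CPLEX write node files
// to disk in compressed form once WorkMem is exceeded
cplex.setParam(IloCplex.Param.MIP.Limits.TreeMemory, 8192);
cplex.setParam(IloCplex.Param.MIP.Strategy.File, 3);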

You can find many examples of how to change parameters in the examples shipped with CPLEX (e.g., MIPex3.java, etc., which can be found in the examples sub-directory).

For more information see running out of memory.

All of the links here are for CPLEX 12.6.2, but you should be able to select the documentation for different versions in the knowledge center if you have something else installed.

Question:

At first I was able to start the daemons and run jobs properly; then, out of nowhere, I can't start the daemons (start-dfs, start-yarn). After running the .sh scripts, the terminal waits forever (as in the picture: http://imgur.com/Sr5I5aw). The only way to stop it is Ctrl+C. The hs_error_pidxxxx.log files say something about insufficient memory (http://imgur.com/3e3VolG).

I tried some advice found on various sites, such as adding swap space and rebooting. I still can't start the daemons.

Here is a summary (in case someone is confused by my bad communication skills):

  • My VM has 4 GB of memory, with about 3.5 GB free at first.

  • I was previously able to run the daemons properly on this very same VM.

Thank you in advance for any help.

PS. I'm using Hadoop 2.5.1 with HBase 0.98.11 on Ubuntu 14.04


Answer:

I solved this problem by removing "export HADOOP_CLASSPATH=/path-to-hbase/hbase classpath" from hadoop-env.sh.

If anyone knows what I did wrong, I would really appreciate knowing. Thanks.