Java native method memory leak


A memory leak happens when objects that are no longer needed keep accumulating on the heap, so the number of live objects grows and the time spent in garbage collection increases.

“Memory leaks usually end up throwing an OutOfMemoryError, but an OutOfMemoryError does not necessarily mean there is a memory leak in Java.”

Static variables are only garbage collected when the class loader that loaded the class declaring the static field is itself garbage collected.

For more details click here — Static variables are not garbage collected?

Also read about ThreadLocal in multithreading in java

Adding numbers using the Integer wrapper type turns out to be a very costly operation because of boxing/unboxing and the unnecessary objects it creates.

Just imagine a situation where millions of numbers are added using Integer: the short-lived wrapper objects flood the heap, and the constant boxing/unboxing has an adverse effect on performance.

So accumulating values in Integer variables instead of primitives behaves much like a memory leak: it keeps creating objects that are not really needed (see the sketch below).
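
A minimal sketch of the difference, assuming nothing beyond the JDK (class and variable names are illustrative):

// Summing with the Integer wrapper forces repeated unboxing and boxing,
// creating a short-lived Integer object on almost every iteration, while
// the primitive int version allocates nothing on the heap.
public class BoxingCost {
    public static void main(String[] args) {
        Integer boxedSum = 0;                  // wrapper type
        for (int i = 0; i < 50_000; i++) {
            boxedSum += i;                     // unbox, add, box a new Integer
        }

        int primitiveSum = 0;                  // primitive type
        for (int i = 0; i < 50_000; i++) {
            primitiveSum += i;                 // plain arithmetic, no objects
        }

        System.out.println(boxedSum + " " + primitiveSum);
    }
}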

An entry in a WeakHashMap is automatically removed by the garbage collector when its key is no longer in ordinary use. The mapping for a given key does not prevent the key from being discarded by the garbage collector (i.e. made finalizable, finalized, and then reclaimed). When a key has been discarded, its entry is effectively removed from the map.


For more details read : WeakHashMap in java
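
A minimal sketch of that behaviour, assuming only the JDK (note that System.gc() is merely a hint, so the final size may occasionally still be 1):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "some value");
        System.out.println("before GC: " + cache.size());   // 1

        key = null;          // drop the only strong reference to the key
        System.gc();         // request a collection (only a hint to the JVM)
        Thread.sleep(100);   // give reference processing a moment

        System.out.println("after GC: " + cache.size());    // typically 0
    }
}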

3.5) Using a custom key in a Map without overriding equals() and hashCode() can cause a memory leak

How can using a custom key in a Map without overriding equals() and hashCode() cause a memory leak? Let's look at that in detail.

If a custom key is used and equals() and hashCode() are not overridden, the key can never be found again with get(), so entries for logically identical keys keep piling up in the map (see the sketch below).
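
A minimal sketch of the problem, with a hypothetical Employee key class:

import java.util.HashMap;
import java.util.Map;

public class CustomKeyLeak {
    // equals() and hashCode() are deliberately NOT overridden
    static class Employee {
        final int id;
        Employee(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        Map<Employee, String> map = new HashMap<>();
        map.put(new Employee(1), "first");
        map.put(new Employee(1), "second");   // does NOT replace the first entry

        System.out.println(map.size());                 // 2, not 1
        System.out.println(map.get(new Employee(1)));   // null - the key can never be found again
    }
}

Because lookups fall back to identity-based hashCode() and equals(), logically identical keys land in different buckets; entries can neither be found nor replaced, and the map keeps growing.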

Learn how you can use a custom key (Employee object) in a custom HashMap implementation — put, get, remove

3.6) Close JDBC Statement, PreparedStatement, CallableStatement, ResultSet and Connection objects in Java to avoid memory leaks

You must ensure that you close every JDBC Statement, PreparedStatement, CallableStatement, ResultSet and Connection to avoid memory leaks. Always close these objects in a finally block (or use try-with-resources, as sketched below), because the finally block is executed whether or not the code throws an exception.
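
A minimal sketch, assuming Java 7+ try-with-resources (the JDBC URL and query are placeholders); the resources are closed automatically in reverse order, even when an exception is thrown, which is exactly the guarantee a finally block gives you:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcCleanup {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:h2:mem:test";   // placeholder connection URL
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }   // ResultSet, PreparedStatement and Connection all closed here
    }
}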


Native memory leak example

We have written quite a lot about memory leaks in Java. The pattern that confirms the presence of a heap memory leak is growth of used heap memory after major GC events: each major GC frees less and less memory, exposing a clear growth trend.

There is, however, a different type of memory leak affecting Java deployments out there. This leak happens in native memory, and you notice no clear trend when monitoring the different memory pools within the JVM. The symptoms include a perfectly healthy chart of heap & permgen consumption, as seen below, coupled with a continuous increase of the total memory used by the Java process at the operating system level:

Java memory leak from native code

Example

As I recently stumbled upon a case where native memory leakage proved to be the problem, I decided to share the details, giving you an example of how such leaks can actually happen in the real world. I was able to reduce the example to simple code that just loads and transforms classes:

public static void main(String[] args) throws InterruptedException {
    final BottomlessClassLoader loader = new BottomlessClassLoader();
    while (true) {
        loader.loadAnotherClass();
        Thread.sleep(100);
    }
}

So that is all there is – an infinite loop, just loading classes via the BottomlessClassLoader's loadAnotherClass() method.

Now let us launch this code in two different ways:

  • The first launch just generates classes and keeps the references, essentially piling up class definitions in memory.
  • The second launch attaches a javaagent and is a tad more complex: it generates classes just like the first launch, but also registers a bytecode transformer in the agent's premain method:
public class BloatedAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer((loader, name, clazz, pd, originalBytes) -> originalBytes, true);
    }
}

The transformation is special in that it does not actually apply any transformation, returning the original bytes of the class unchanged.

As the next step, the memory usage of the Java process in both launches was monitored from the OS, captured at regular intervals using the

$ top -R -l 0 -stats mem,time -pid

command, resulting in the data shown in the following chart:

java native leak

Understanding the problem

What we see above is that the second launch consumes a lot more memory. This is surprising. If you recall, the transformation does not actually transform the class but returns the original bytecode, so one might expect the memory consumption of the two launches to be identical.

The first part of the answer starts to make sense when you think about where class definitions are stored. After all, shouldn't class definitions reside in permgen/metaspace, so wouldn't monitoring the permgen also be sufficient to detect this particular issue?

Apparently not. Whenever we return a non-null value from the transform method, the JVM assumes that the class was modified in some way. Additionally, when we set the canRetransform parameter (the second argument of addTransformer, after the lambda) to true in the agent's premain method, the JVM expects that at some point we will attempt to retransform the class, applying a different transformation. As a result, the original non-transformed bytecode is kept by the JVM “just in case”.

This approach, weird at first glance, starts to make sense when you think about class loaders for which loading is an expensive operation, say, a network class loader. You would not want to go to the trouble of fetching the very same bytes once again. Therefore, the JVM caches the original bytecode of the class. It does not store it in metaspace or permgen, but in its own native memory. As a result, you would not see any growth in heap or permgen/metaspace, and would only notice the problem when monitoring native memory consumption.
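
One practical way to confirm that the growth indeed comes from native memory is the JVM's Native Memory Tracking. A rough sketch of the workflow (the jar name and pid are placeholders):

$ java -XX:NativeMemoryTracking=detail -jar app.jar
$ jcmd <pid> VM.native_memory baseline
$ jcmd <pid> VM.native_memory summary.diff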

The second part of the answer is hidden in the java.lang.instrument.ClassFileTransformer Javadoc, where for the method transform() it is clearly stated that in cases where the transformation is not actually applied, the transform() method should return null. In this case the JVM implementation is aware of the fact that the class was not actually transformed and there is no need to store additional copy of the bytecode in native memory.
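
A sketch of the corrected agent, mirroring the BloatedAgent example above: by returning null when no transformation is applied, the JVM knows the class is unchanged and keeps no extra copy of the bytecode in native memory.

import java.lang.instrument.Instrumentation;

public class FixedAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        // null tells the JVM "not transformed" - no native-memory copy is retained
        inst.addTransformer((loader, name, clazz, pd, originalBytes) -> null, true);
    }
}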

So the fix to the issue was as easy as making the transformation return null instead of the original bytecode. But was it easy to troubleshoot the issue? No way: it cost me three days of my life which I will never get back. I can only hope that sharing this knowledge will end up saving someone from going through the same mess in the future.


Comments

Thanks for sharing this post. It was very helpful.
We came across a “Native Memory Leak” recently on our production servers. I looked at the premain and agentmain classes and methods, and there is nothing unusual there. We didn't change anything; it comes by default with the Oracle WebLogic installation.
Could you please tell me what else could be the root cause of this issue?

Below is our javaagent code:
package weblogic.diagnostics.debugpatch.agent;

import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;
import java.lang.instrument.UnmodifiableClassException;
import weblogic.diagnostics.debug.DebugLogger;
import weblogic.diagnostics.utils.SecurityHelper;

public class DebugPatchAgent {
    private static final DebugLogger DEBUG_LOGGER = DebugLogger.getDebugLogger("DebugDebugPatches");
    private static Instrumentation singleton;

    public static void premain(String agentArguments, Instrumentation instrumentation) {
        singleton = instrumentation;
    }

    public static void agentmain(String args, Instrumentation inst) {
        premain(args, inst);
    }

    public static boolean isRedefineClassesSupported() {
        return singleton != null ? singleton.isRedefineClassesSupported() : false;
    }

    public static void redefineClasses(ClassDefinition[] classDefs)
            throws ClassNotFoundException, UnmodifiableClassException, IllegalAccessException {
        if (!isRedefineClassesSupported()) {
            if (DEBUG_LOGGER.isDebugEnabled()) {
                DEBUG_LOGGER.debug("DebugPatchAgent Class redefinition is not supported");
            }
            return;
        }
        singleton.redefineClasses(classDefs);
    }
}

Below are our manifest file details:

Manifest-Version: 1.0
Premain-Class: weblogic.diagnostics.debugpatch.agent.DebugPatchAgent
Agent-Class: weblogic.diagnostics.debugpatch.agent.DebugPatchAgent
Can-Redefine-Classes: true
Implementation-Title: debugpatch-agent
Implementation-Version: 12.2.1.2
Implementation-Vendor: Oracle, Inc.

Hi Saish. What makes you believe that the production servers have a “Native Memory Leak”? In any case, I would recommend starting the troubleshooting by enabling native memory tracking for the affected JVM, see https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html for details.

Gleb Smirnov February 8, 2019

Thanks Gleb for the useful article. I am wondering whether this issue still significantly affects Java deployments these days.

Thank you Gleb. I am wondering where in native memory the JVM caches the original bytecode of the class, and where exactly the additional copy of the bytecode is stored.

Vitaly Grinberg May 21, 2016

Hi Vitaly, thanks for your response. I'm not sure I fully understand what you mean by “where”, though. But you might want to take a look at the _cached_class_file field in instanceKlass [1]. It's set from within jvmtiRedefineClasses [2].

Since JVMTI should support multiple independent and simultaneous agents, does the usage of two agents increase the probability of the native memory leak described above?
Will -XX:NativeMemoryTracking=detail on the command line help?

Vitaly Grinberg July 23, 2017

Yes, I do believe that having multiple agents attached results in a greater probability of such a leak manifesting itself.

Yes, native memory tracking will help, just check the `Internal` section in the output.

Gleb Smirnov July 24, 2017

