How to Find a Memory Leak in OpenSees
Original Post - 29 May 2022 - Michael H. Scott
Memory leaks plague virtually all software written in C++ or any other language that requires programmers to manage memory.
OpenSees is no exception. With code written by many people with varying knowledge of C++ and very little overall QA/QC, it’s fair to say OpenSees has more than its fair share of memory leaks.
It’s a wonder the air hasn’t leaked out of the balloon leaving space for a new framework on the earthquake engineering block. On the contrary, OpenSees and its user base continue to grow.
Most memory leaks, in OpenSees or otherwise, spring from forgetting the maxim "For every new you need a delete." Whenever you've finished using dynamically allocated memory, you have to deallocate that memory. Collect your garbage before moving on to other code blocks.
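In C++ terms, the pattern looks something like this (a generic sketch, not actual OpenSees code):

// A generic sketch of the classic C++ leak: heap memory that is
// allocated but never deallocated
void leaky() {
  double *data = new double[1000]; // allocate

  // ... do some analysis with data ...

  // Missing: delete [] data;
  // The pointer goes out of scope and the memory is orphaned
}

void clean() {
  double *data = new double[1000]; // allocate

  // ... do some analysis with data ...

  delete [] data; // For every new, a delete
}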
Here's an example from a post made by Charles Zhang in the OpenSees Facebook group. After a little back-and-forth, we tracked the memory leak down to ShellMITC4 elements with PlateFiber sections and ElasticOrthotropic materials (Charles must be modeling mass timber floor diaphragms).
To figure out the source of the leak, I made a special form of an MWE (Minimal Working Example) called an MLE, or Minimal Leaking Example. Run a simple model built from the suspects on the list through a dummy analysis several times, then monitor your system resources, e.g., using the top command in Linux or the Task Manager in Windows.
import openseespy.opensees as ops

Nruns = 1000000 # A large number of runs
for i in range(Nruns): # Could also use while True:
    ops.wipe()
    ops.model('basic','-ndm',3,'-ndf',6)

    ops.node(1,0,0,0)
    ops.node(2,1,0,0)
    ops.node(3,1,1,0)
    ops.node(4,0,1,0)

    ops.fix(1,1,1,1,0,0,0)
    ops.fix(2,1,1,1,0,0,0)
    ops.fix(3,1,1,1,0,0,0)
    ops.fix(4,1,1,1,0,0,0)

    # No memory leak
    #ops.nDMaterial('ElasticIsotropic',1,20,0.1)
    # Causes memory leak
    ops.nDMaterial('ElasticOrthotropic',1,20,20,20,0.1,0.1,0.1,10,10,10)

    ops.section('PlateFiber',1,1,0.1)
    ops.element('ShellMITC4',1,1,2,3,4,1)

    ops.timeSeries('Constant',1)
    ops.pattern('Plain',1,1)
    ops.load(1,0,0,0,20,0,0)

    ops.system('UmfPack')
    ops.numberer('RCM')
    ops.constraints('Plain')
    ops.integrator('LoadControl',0)
    ops.algorithm('Newton')
    ops.analysis('Static')
    ops.analyze(1)

    if i % (Nruns//10) == 0:
        print(i)

input("Press RETURN to stop process")
The last input command prevents the operating system from ending the process before you can take a look at top or Task Manager.
For what it's worth, this type of loop is not unlike what you would do in an incremental dynamic analysis or a Monte Carlo simulation, where, with larger model sizes and longer run times per analysis, any memory leak will manifest and wreak havoc much faster than in this single element model.
Also, for what it's worth, there are actual tools for tracking down memory leaks, e.g., valgrind, efence, and Visual Studio. What I'm showing here is the equivalent of debugging with print statements: it gets the job done, but it ain't pretty.
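For the record, a valgrind run on the MLE might look something like this, assuming the script above is saved as mle.py (a hypothetical filename); be prepared to wade through a lot of noise from the Python interpreter itself:

valgrind --leak-check=full python3 mle.py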
Anyway, with the ElasticOrthotropic material, there is large memory usage (see the python3 command with 20.1 in the %MEM column of top) after the MLE finishes. On the other hand, after simply changing the material to ElasticIsotropic, the MLE uses only 0.3% of system memory.
So, the material is the difference maker. But what exactly makes the difference? There's nothing inherently wrong memory-wise with the ElasticOrthotropic material. But how these materials are instantiated for PlateFiber sections is different.
ElasticIsotropic has its own plate fiber implementation, ElasticIsotropicPlateFiber, while ElasticOrthotropic uses the default PlateFiberMaterial wrapper, which performs static condensation to enforce \(\sigma_{33}=0\). A quick look in NDMaterial.cpp, where the material wrapper is created in getCopy(), shows the leak: the three-dimensional copy was not being deleted.
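Boiled down to a sketch (simplified, not the verbatim OpenSees source), the leaky pattern and its fix look like this:

// Sketch of the leak in NDMaterial::getCopy() (simplified from the
// actual OpenSees source)
NDMaterial *NDMaterial::getCopy(const char *type)
{
  if (strcmp(type, "PlateFiber") == 0) {
    // Make a temporary 3D copy of this material ...
    NDMaterial *copy = this->getCopy("ThreeDimensional");

    // ... which the PlateFiberMaterial wrapper copies again internally
    PlateFiberMaterial *clone = new PlateFiberMaterial(this->getTag(), *copy);

    delete copy; // The fix: release the temporary 3D copy

    return clone;
  }
  // ... other wrapper types ...
  return 0;
}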
Although it was an easy fix, it is quite embarrassing that this leak had been there for 20 years. Also, with an MLE provided by Dr. Silvia Mazzoni a couple of months ago, we plugged another longstanding memory leak with time series.
You gotta take it one leak at a time.
If you do come across a memory leak, I'd love to hear about it. Send me an MLE, or post it as a GitHub issue, and we can take it from there.