Stop Hogging All the RAM
Original Post - 16 Nov 2022 - Michael H. Scott
While writing a previous post on the elastic shear beams available in OpenSees, I noticed that the ElasticTimoshenkoBeam3d class stores the element stiffness matrix, along with several other matrices, as private data. As a result, each instance of this class keeps its own copy of several 12x12 matrices for the element response instead of writing to shared memory, i.e., a static matrix shared by all instances of the class, as is done in most other frame elements.
With more memory allocated to each element object, a large model with many ElasticTimoshenkoBeam3d elements will drain a device's RAM more quickly than a model composed of, let's say, ElasticBeam3d elements.
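The actual element classes are written in C++, but a rough Python analogy of the two storage strategies looks something like this (the class names, member names, and matrix count are mine, just for illustration):

import numpy as np

class PerInstanceBeam:
    # Analogy for ElasticTimoshenkoBeam3d: every instance allocates
    # its own 12x12 working matrices as instance data
    def __init__(self):
        self.K = np.zeros((12,12))
        self.M = np.zeros((12,12))
        self.Kwork = np.zeros((12,12))

class SharedWorkspaceBeam:
    # Analogy for most other frame elements: one set of class-level
    # (static) working matrices shared by every instance
    K = np.zeros((12,12))
    M = np.zeros((12,12))
    Kwork = np.zeros((12,12))

Create a million objects of the first kind and you allocate a million sets of matrices; create a million of the second kind and you allocate one set.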
To assess how quickly RAM is drained, I hopped on AWS and launched a t2.large EC2 instance, a clean slate of 8 GiB RAM (and 2 vCPU). On this virtual machine, I wrote a simple script with an infinite loop that creates beam elements between two nodes in a three-dimensional model.
import openseespy.opensees as ops
ops.wipe()
ops.model('basic','-ndm',3,'-ndf',6)
L = 1
E = 1
A = 1
Av = 1
I = 1
J = 1
G = 0.4
ops.node(0,0,0,0); ops.fix(0,1,1,1,1,1,1)
ops.node(1,L,0,0)
ops.geomTransf('Linear',1,0,0,1)
i = 0
while True:
    i += 1
    ops.element('elasticTimoshenkoBeam',i,0,1,E,G,A,J,I,I,Av,Av,1)
    #ops.element('elasticBeamColumn',i,0,1,A,E,G,J,I,I,1)
    if i % 1000 == 0:
        print(i)
Let the script run and see how many elements can be created before depleted RAM kills the process. The script ran on AWS, so there were no blue screens of death or forced reboots on my local device, just a simple Process Killed message; wait a couple of seconds, then on to the next analysis.
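As an aside, if you'd rather not wait for the out-of-memory killer, you could sample the process's resident memory as the loop runs. Here is a minimal sketch, assuming the third-party psutil package is installed (it's not part of the script above):

import psutil  # assumption: installed separately, not used in the original script

proc = psutil.Process()

def report(i):
    # Resident memory of the current Python/OpenSees process, in MiB
    print(i, round(proc.memory_info().rss/2**20, 1), 'MiB')

Calling report(i) in place of print(i) every 1000 elements gives a bytes-per-element estimate long before the process dies.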
The last print()
statement for each element type showed:
- elasticTimoshenkoBeam – 1,144,000 elements
- elasticBeamColumn – 9,561,000 elements
All other things being equal, we can allocate over eight times as many elasticBeamColumn elements as elasticTimoshenkoBeam elements. The counts would decrease if we defined a unique node for each element, since we'd also be allocating memory for Node objects, but you get the point.
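Some back-of-the-envelope arithmetic on those counts, ignoring whatever the OS, Python, and the two Node objects consume:

GiB = 2**30

print(8*GiB/1144000)   # about 7,500 bytes per elasticTimoshenkoBeam element
print(8*GiB/9561000)   # about 900 bytes per elasticBeamColumn element
print(12*12*8)         # 1,152 bytes for one 12x12 matrix of doubles

The difference of roughly 6.6 kB per element is consistent with each instance carrying several of its own 12x12 matrices.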
In addition to allowing fewer elements, excessive memory allocation also increases the time it takes to build a model. I modified the script to measure how long it takes to build a 1,000,000 element model, which we know from the first analysis will not deplete the RAM on the t2.large virtual machine.
import time
import openseespy.opensees as ops
ops.wipe()
ops.model('basic','-ndm',3,'-ndf',6)
L = 1
E = 1
A = 1
Av = 1
I = 1
J = 1
G = 0.4
ops.node(0,0,0,0); ops.fix(0,1,1,1,1,1,1)
ops.node(1,L,0,0)
ops.geomTransf('Linear',1,0,0,1)
t1 = time.time()
for i in range(1000000):
    ops.element('elasticTimoshenkoBeam',i,0,1,E,G,A,J,I,I,Av,Av,1)
    #ops.element('elasticBeamColumn',i,0,1,A,E,G,J,I,I,1)
print(time.time() - t1)
Again, all other things being equal, creating 1,000,000 elasticTimoshenkoBeam elements takes about four times longer than creating a model of the same size with elasticBeamColumn elements.
- elasticTimoshenkoBeam – 14.9 seconds
- elasticBeamColumn – 3.87 seconds
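Dividing the timings by the element count, then taking the ratio:

print(14.9/1000000)  # about 15 microseconds per elasticTimoshenkoBeam
print(3.87/1000000)  # about 4 microseconds per elasticBeamColumn
print(14.9/3.87)     # a ratio of about 3.9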
Allocating memory takes time, not just space.
This post is not an indictment of the elasticTimoshenkoBeam
element or
its author. Instead, the post shows that an element’s use of RAM can be
important for large models.