OpenSees to JSON

Original Post - 25 Sep 2022 - Michael H. Scott

The plain text JSON format is widely used for moving information to and from back-end servers in web applications. So I’ve been learning how to work with JSON for OpenSees Cloud and a couple of other projects.

JSON is nothing new in the OpenSees universe. Some element and material classes write key-value pairs when the OPS_PRINT_PRINTMODEL_JSON flag is passed to the Print() function. However, like sendSelf and recvSelf, the JSON print functionality is largely broken because many people, myself included, have not implemented this option in their classes. The keys, indentation, and scoping are unclear when writing JSON data from the bottom up.
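
For reference, you can request this bottom-up JSON output from OpenSeesPy with the printModel command. Treat the sketch below as illustrative; the output will have gaps wherever a class has not implemented the JSON option.

import openseespy.opensees as ops

# Built-in, bottom-up JSON print of the current model.
# Completeness depends on each node, element, and material class
# implementing the OPS_PRINT_PRINTMODEL_JSON option in Print().
ops.printModel('-JSON', '-file', 'builtin.json')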

From the top down you have more control over keys and scoping. To this end, I wrote top-level Python scripts to save an OpenSees model to JSON using DIY print statements. I made some rookie moves, though, trying to manage the brackets and avoid dangling commas at the end of the node and element lists. The scripts quickly got out of hand, and then Minjie said, “you should use the json package”. Thanks again, Minjie!

It only takes a few lines of code to save basic OpenSees model information to JSON. Consider the famous three-member truss example. First, you save the model data to a Python dictionary, modelData in the code below.

import openseespy.opensees as ops

ops.wipe()
ops.model('basic','-ndm',2,'-ndf',2)

ops.node(1,0,0); ops.fix(1,1,1)
ops.node(2,144,0); ops.fix(2,1,1)
ops.node(3,168,0); ops.fix(3,1,1)
ops.node(4,72,96)

ops.uniaxialMaterial('Elastic',1,3000.0)

ops.element('truss',1,1,4,10.0,1)
ops.element('truss',2,2,4,5.0,1)
ops.element('truss',3,3,4,5.0,1)

#
# Analyze, do whatever
#

# Get dimension and coordinate bounds
modelData = {'ndm': ops.getNDM()[0], 'bounds': ops.nodeBounds()}

# Get nodal coordinates
nodeData = {}
for nd in ops.getNodeTags():
    nodeData[nd] = {'coords': ops.nodeCoord(nd)}
modelData['nodes'] = nodeData

# Get element types and connectivity
elementData = {}
for ele in ops.getEleTags():
    elementData[ele] = {'nodes': ops.eleNodes(ele),
                        'type': ops.eleType(ele)}
modelData['elements'] = elementData

The model data includes the number of model dimensions, the global bounds on the model, the nodal coordinates, and the element types and connectivity. The code shown will work for any OpenSees model, not just the three-member truss. You can save more model data, but I didn’t want this post to get out of control.
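
As a sketch of what “more” could look like, here is one way to tack on the number of DOFs per node and the nodal masses. The dictionary keys are my own choices, while getNDF() and nodeMass() are OpenSeesPy queries.

# Append more model data (assumed keys 'ndf' and 'mass')
modelData['ndf'] = ops.getNDF()[0]
for nd in ops.getNodeTags():
    modelData['nodes'][nd]['mass'] = ops.nodeMass(nd)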

Next, you write the modelData dictionary to a JSON formatted file with the json.dump() function.

import json

with open('model.json','w') as outjson:
    json.dump(modelData,outjson)

In its default mode, the json.dump() function produces a one-line file whose size is 325 bytes for the simple three-member truss model.

Default JSON output
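
Because the file is plain JSON, you can read it back into a dictionary with json.load(), say in a separate post-processing script. One gotcha worth noting in this sketch: JSON object keys are always strings, so the integer node and element tags come back as strings.

import json

# Load the model data back in. Integer tags become string keys
# because JSON object keys are always strings.
with open('model.json') as injson:
    modelData = json.load(injson)

print(modelData['nodes']['4']['coords'])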

To make the JSON file human readable, use the indent option.

with open('model.json','w') as outjson:
    json.dump(modelData,outjson,indent=2)

This option produces a formatted, multi-line file that is useful for debugging.

Indented JSON output

Due to whitespace characters, human-readable JSON files can become unnecessarily large. For this small model, the JSON file size increased to 669 bytes, more than twice as large as the default dump.
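
If you want to check the sizes yourself, os.path.getsize() reports the number of bytes, as in this short sketch.

import os

# File size in bytes of the JSON output
print(os.path.getsize('model.json'))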

For the smallest file size possible, remove the indentation and specify list and key-value separators without whitespace.

json.dump(modelData,outjson,separators=(',', ':'))

This option produces a compact one line file.

Compact JSON output

The compact file is 281 bytes, which is not much less than the default 325 bytes, although it is well under half of the readable 669 bytes. However, I have seen a 21.1 MB JSON file generated from a large OpenSees model of solid elements reduced to 8.5 MB by simply removing whitespace.

The JSON model data covered in this post is enough to render an OpenSees model, but not much else. You can add more model data as well as analysis results by calling ops.nodeDisp(), ops.eleResponse(), and similar commands. You will see that the JSON files quickly become very large, and calling all of the node and element response functions after each analysis step will bog down your script.
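
As a sketch, here is one way to stash displacements and truss axial forces in the dictionary after an analysis. The 'results' key and layout are my own choices; nodeDisp() is a standard query, while the eleResponse() arguments ('axialForce' here) vary by element type.

# Append analysis results (assumed 'results' key and layout)
modelData['results'] = {
    'disp': {nd: ops.nodeDisp(nd) for nd in ops.getNodeTags()},
    'axialForce': {ele: ops.eleResponse(ele,'axialForce')
                   for ele in ops.getEleTags()}
}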