
MEDUSA Training: How to run MEDUSA 1D (on the NOC linux system)

  • Introduction
  • Getting ready
    • Logging in
    • Getting NEMO-MEDUSA
  • Compiling NEMO-MEDUSA
    • Preparing the environment
    • Compiling MEDUSA 1D "the easy way"
    • Compiling MEDUSA 1D "the old way"
      • Compile the 1D config
      • Compile MEDUSA 1D
  • The Running Directory
  • Running MEDUSA 1D
    • Running the PAPA "ready to use" set-up
    • Running from 3D-extracted conditions
      • Prepare your running directory
      • Extract the initial conditions and forcings
      • Extract your own location
      • Running in high latitudes
  • Viewing your Outputs

Introduction

This is the tutorial we will follow for the MEDUSA training.
I'll try to make it as detailed as possible so that no one gets lost, but obviously there might be things that are evident to me that are not for non-modellers or non-linux users. So let me know if something is not clear.

In the following documentation, I'll assume you have access to the NOC-Southampton linux system. It should work (with some adaptation) on other systems, but we'll see that later.

For now, I'll show you:

  • How to get NEMO-MEDUSA,
  • Some fundamental knowledge about how it works (without going into detail),
  • How to compile the model,
  • How to run it in 1D,
  • How to see the results.

The idea is that at the end of the training, you should be able to run NEMO-MEDUSA 1D by yourself.

Getting ready

Logging in

For this training, we need to log on to specific machines.
Usually we run NEMO-MEDUSA on supercomputers like ARCHER2 or NOC's ANEMONE, but not all of you have access to those machines, so we'll do it on more ordinary linux systems.

First, I want you to log on to one of the following machines: poseidon or amphitrite (you can see here the linux machines available).
To do this, connect to the linux system as you normally do (how you do that will depend on whether you're on linux, mac or windows, but once that's done we can all follow the same procedure), and run:

ssh -Y poseidon

to go to poseidon, or:

ssh -Y amphitrite

Getting NEMO-MEDUSA

Once you are there, we can get NEMO-MEDUSA.

Create a directory to work in:

mkdir MEDUSA-training
cd MEDUSA-training

Now that you are in your working directory, we can download NEMO-MEDUSA:

 git clone --recurse-submodules https://git.noc.ac.uk/acc/medusa_4.2.x.git MEDUSA_git

The NEMO-MEDUSA code is downloaded into a directory I've called MEDUSA_git, but you can name it whatever you want.

Let's have a look inside:

cd MEDUSA_git

[screenshot: contents of MEDUSA_git]

  • MEDUSA_resources includes:
    [screenshot: contents of MEDUSA_resources]
    with:
    • src : where all the fortran code files are;
    • cfgs : where ready-to-compile configurations are stored;
    • arch : files used to compile;
    • sette : some automatic stability tests (used for development purposes);
    • README.md : the MEDUSA git home page documentation;
    • add_msa_to_nemo : a script to put MEDUSA in the right place.
  • nemo is the NEMO code:
    [screenshot: contents of nemo]
    We find the same structure as in MEDUSA_resources (because MEDUSA is made to fit into NEMO by replicating its structure), plus:
    • makenemo : the compilation script. Compilation creates, from all our fortran code, an executable we'll be able to run. If you make changes to a fortran file, they will only be taken into account at run time if you recompile the code (see the sketch just after this list).
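For example (a minimal sketch; C1D_MED is the name of the configuration we will create later in this tutorial):

./makenemo -m auto -r C1D_MED -j 16    # rebuild after editing fortran sources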

Here, NEMO and MEDUSA have both been downloaded, but MEDUSA is outside of NEMO. We need to put the different parts in the right place.
That's what add_msa_to_nemo is for. So let's run it:

./add_msa_to_nemo

Now MEDUSA is in the right place:

[screenshots: the MEDUSA code now sits inside the nemo directory tree]

Everything is now in place, so we can compile the code.

Compiling NEMO-MEDUSA

Preparing the environment

  1. You need to load the libraries:
 module load netCDF-Fortran/4.6.1-gompi-2023a
  2. Then you need to grab a file that tells NEMO all it needs to know about the environment it will compile and run in. This one I have prepared for you; you just need to copy it into your arch directory. Assuming you are still in MEDUSA_git:
cp /noc/users/jpp1m13/WORKING/MEDUSA2/MEDUSA_git/nemo/arch/arch-auto.fcm nemo/arch/
## or, in case of permission problems, it's also here:
cp /noc/users/jpp1m13/arch-auto.fcm nemo/arch/

You are now ready to compile!
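Before compiling, you can quickly check that the environment is in place (a small sketch; nf-config ships with the netCDF-Fortran module):

module list                   # netCDF-Fortran/4.6.1-gompi-2023a should be listed
nf-config --version           # confirms the netCDF-Fortran library is visible
ls nemo/arch/arch-auto.fcm    # the arch file you just copied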

There are 2 ways to compile MEDUSA 1D:

  • An "easy way", where everything is already in place to be compiled in one go,
  • A more complicated one, where you have to do it in 2 steps and move things around.

We'll do both, because by doing it the "old way" you should better understand how things work. But let's start with the easy one.

Compiling MEDUSA 1D "the easy way"

To make things easier, and to reduce the risk of error, I've added a MEDUSA 1D reference configuration: C1D_MED_PAPA. That means you just need to compile it to get MEDUSA running in 1D.

cd nemo

nemo is your NEMO4.2 base directory (I might refer to it as ${NEMO} later on); it is the directory with makenemo inside.

  • Now you can compile your MEDUSA 1D configuration -- valid whatever the machine -- type:
    ./makenemo -m  auto -r C1D_MED_PAPA -n C1D_MED -j 16
    It should successfully compile the dynamics and the passive tracers all at once. Here our 1D config is called C1D_MED. Compiling this way, all the files are already in the right place, so -- still, double check -- you should be able to happily skip the file-management part of the next section.

Once compiled, you have a new directory, C1D_MED, in nemo's cfgs directory.
Let's have a look:
[screenshot: contents of cfgs/C1D_MED]
with:

  • BLD : where the code is compiled for this config.
  • MY_SRC : specific fortran code used instead of the versions in the src directory.
  • WORK : all the fortran files used in the compilation process.
  • cpp_C1D_MED.fcm : the list of compilation keys used for this config.
  • EXP00 : a minimalist, ready-to-use running directory.
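As a quick sanity check (a sketch; these paths follow the usual NEMO 4 layout), you can confirm that the compilation produced an executable:

ls -l cfgs/C1D_MED/BLD/bin/nemo.exe    # the compiled binary
ls -l cfgs/C1D_MED/EXP00/nemo          # normally a link to that binary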

Compiling MEDUSA 1D "the old way"

I call it "the old way" as you'll have to compile in a few separate steps, manually. You can use this method to learn and understand how the config works. If you just want to compile and get something running, go back to the previous section: Compiling MEDUSA 1D "the easy way".

Compile the 1D config

The first thing is to compile, a first time, a new config based on C1D_PAPA: from your NEMO4.2 base directory (something like ${NEMO} defined just above, but of course adapt to your own medusa path).

  • Now you can compile a first time -- valid whatever the machine -- type:
    ./makenemo -m  auto -r C1D_PAPA -n C1D_MED -j 16
    It should successfully compile the dynamics side only, but it will create the right files in the right place. Here our 1D config is called C1D_MED.

Compile MEDUSA 1D

Modify our new config's components and cpp keys. At the moment they are based on C1D_PAPA, which is dynamics only.

  1. Update cfgs/work_cfgs.txt : change our config's line to be (just add the TOP component):
    C1D_MED  OCE TOP
  2. In cfgs/C1D_MED/cpp_C1D_MED.fcm, add key_top. It should look like this (not necessarily exactly the same, just add key_top):
     bld::tool::fppkeys   key_xios key_linssh key_vco_1d3d key_top
  3. To compile properly, MEDUSA still needs a few things: MEDUSA's adapted versions of some NEMO and NEMO-TOP subroutines. Just copy them from the ORCA2_MEDUSA config with:
    cp -d cfgs/ORCA2_MEDUSA/MY_SRC/* cfgs/C1D_MED/MY_SRC/
  4. Now everything is ready to compile (steps 1 and 2 can also be done from the command line, as sketched after this list). You can do so with:
    ./makenemo -m auto -r C1D_MED -j 16
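If you prefer to make edits 1 and 2 from the command line, here is a minimal sketch (it assumes the config line and key list look exactly like the examples above, so check the files afterwards):

# add the TOP component to our config's line
sed -i 's/^C1D_MED  OCE$/C1D_MED  OCE TOP/' cfgs/work_cfgs.txt
# append key_top to the cpp key list
sed -i '/bld::tool::fppkeys/ s/$/ key_top/' cfgs/C1D_MED/cpp_C1D_MED.fcm
# verify both edits
grep C1D_MED cfgs/work_cfgs.txt
cat cfgs/C1D_MED/cpp_C1D_MED.fcm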

The Running Directory

Let's have a look at EXP00 :
[screenshot: contents of EXP00]

We can see:

  • Some xml files. These are for the XIOS software, which was developed to manage the model's outputs. The files we (as users) need to know about are:
    • context_nemo.xml : tells XIOS which field-def and file-def files are used at run time.
    • field-def*.xml : define ALL the possible outputs/diagnostics the model needs to know about. Everything we could possibly ask for is defined here.
    • file-def*xml : where we say which outputs we want, and at what frequency! XIOS, as you'll see, is very flexible!
  • Namelists: namelists are where we give information to the model. NEMO-MEDUSA reads the namelists at run time to get parameter values, initial file names and locations, forcing file names and locations, etc. There are 2 kinds of namelist for each model component: ref and cfg.
    • ref holds the default values. You should not modify it.
    • For good practice and clarity, the cfg namelist should only contain the modified parameters. This helps to "easily" know what a run is doing (see the sketch after this list).
  • nemo : our executable.
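For example, to see how a cfg namelist overrides a ref default (a small sketch, using the ln_rsttr parameter we will modify later):

grep -n 'ln_rsttr' namelist_top_ref    # the default value...
grep -n 'ln_rsttr' namelist_top_cfg    # ...and the value this run actually uses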

We are still missing a few things before we can run the model, notably the initial conditions and the forcing fields. Let's do a first run with a "ready to use" set-up.

Running MEDUSA 1D

Running the PAPA "ready to use" set-up

From here, the NEMO executable is ready, but we still need to adapt/include some namelists and other files to be able to run.

  • Go into cfgs/C1D_MED/EXP00:
    cd cfgs/C1D_MED/EXP00
  • Copy and link the files:
       cp -rd ../../ORCA2_MEDUSA/EXPREF ..
       cp    ../EXPREF/xco2.atm . 
  • And add these only if you compiled the old way:
       ln -s  ../../SHARED_MEDUSA/namelist_medusa_ref .
       cp    ../EXPREF/namelist_medusa_cfg .
       cp -d ../EXPREF/namelist_top_* .
       cp -d ../EXPREF/*medusa*xml .
  1. All the MEDUSA files are now here. We need to adapt some of them.

    • Update context_nemo.xml : add references to the field*medusa*xml and file*medusa*xml files. It should look like this afterwards:
      <!--
      ============================================================================================== 
      NEMO context
      ============================================================================================== 
      -->
      <context id="nemo">
          <!-- $id$ -->
      
      <!-- Fields definition -->
          <field_definition src="./field_def_nemo-oce.xml"/>   <!--  NEMO ocean dynamics                       --> 
          <field_definition src="./field_def_nemo-medusa.xml"/>   <!--  NEMO ocean dynamics                     -->
      
      <!-- Files definition -->
          <file_definition src="./file_def_nemo-oce.xml"/>     <!--  NEMO ocean dynamics                     -->
          <file_definition src="./file_def_nemo-medusa.xml"/>     <!--  NEMO ocean dynamics                     -->
      
      <!-- Axis definition -->
          <axis_definition src="./axis_def_nemo.xml"/>
      
      <!-- Domain definition -->
          <domain_definition src="./domain_def_nemo.xml"/>
      
      <!-- Grids definition -->
          <grid_definition   src="./grid_def_nemo.xml"/>
      
      
      </context>
    • Update namelist_top_cfg : change ln_rsttr to .false. (we don't start MEDUSA from a restart file; you can if you want, but not at first, since for that you'd need to extract the right vertical profile from a restart_trc file).
    • Update namelist_medusa_cfg : set ln_read_dust to .false. (I'd recommend changing it back to .true. once you get MEDUSA 1D running; you would then need to extract the right location from the dust forcing fields).
    • Update file_def_nemo-medusa.xml : set the output frequency you want. It is 1d in file_def_nemo-oce.xml; you can keep the same or change it to whatever you prefer.
  2. Now we are only missing the initial conditions and forcing. Fortunately these already exist for the PAPA location. You can get them by running, in your config's EXP00 directory (which is the running directory in our test):

    wget https://zenodo.org/record/3386310/files/INPUTS_C1D_PAPA_v4.0.tar?download=1  
    tar xvf 'INPUTS_C1D_PAPA_v4.0.tar?download=1'

    If wget is not available on your system (it wasn't when I did my tests) you can grab the file from my directory:

    cp /noc/users/jpp1m13/WORKING/MEDUSA2/MEDUSA_git/nemo/cfgs/TEST_MED1D/EXP00/INPUTS_C1D_PAPA_v4.0.tar\?download=1 .
    tar xvf 'INPUTS_C1D_PAPA_v4.0.tar?download=1'

    The namelists are already adapted to these forcing and initial fields.

  3. In our test, MEDUSA starts from constant values. You can change that by extracting the right profile from a trc restart file, as sketched below.
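    One possible way (a hedged sketch using NCO's ncks; it assumes NCO is available, that the restart's horizontal dimensions are named x and y, and a hypothetical restart file name; note that ncks indices are 0-based while NEMO's are 1-based):

    module load nco    # if NCO is not already available
    # extract the single column i=142, j=262 (PAPA) from a 3D tracer restart
    ncks -d x,141 -d y,261 restart_trc_3D.nc restart_trc_papa.nc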

Now you can run with ./nemo &
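While it runs, you can keep an eye on it (ocean.output and time.step are standard NEMO run-time files):

tail -f ocean.output    # follow the model log as it is written
cat time.step           # shows the last completed time step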

Running from 3D-extracted conditions

Prepare your running directory

You might want to run at different locations...
For this we'll use 3D initial conditions and forcings from UKESM1.1.
Let's make a new running directory:

  • Go to cfgs/C1D_MED/
  • then:
# make a new running directory:
mkdir EXP_1D
cd EXP_1D 
  • Copy and run the appropriate COPY_ME bash script, which will prepare your running directory for the 1D runs forced with UKESM1:
cp /noc/users/jpp1m13/SCRATCH/SETTING_RUNNING_DIR/1D_NOC_LINUX/COPY_ME__NEMOMED-4.2_1D_UKESM1.sh .
./COPY_ME__NEMOMED-4.2_1D_UKESM1.sh 

Extract the initial conditions and forcings

For this 1D run, you want to extract the initial conditions and the forcings for a specific location. You need the coordinates (latitude and longitude) as well as the corresponding grid indices (i and j). If you look in your namelist_cfg file, you'll find within namc1d an example of famous site coordinates to use:

!-----------------------------------------------------------------------
&namc1d        !   1D configuration options                             (ln_c1d =T default: PAPA station)
!-----------------------------------------------------------------------
!! Location name  |  real coords(lat-lon) | grid indices(j-i) | grid coords        | bathy depth
!!OWS India         (59.00N, -19.00E)       (273, 269)        (59.02N, -18.46E)    1516
!!PAP site          (49.00N, -16.50E)       (258, 272)        (49.11N, -16.07E)    4888
!!BATS              (31.67N, -64.17E)       (236, 224)        (32.07N, -64.52E)    4093
!!ESTOC             (29.07N, -15.25E)       (232, 273)        (28.72N, -15.50E)    3513
!!OWS Papa          (50.00N, -145.00E)      (262, 142)        (50.13N, -145.37E)   4290
!!HOTS              (22.75N, -158.00E)      (225, 130)        (22.41N, -158.50E)   4688
!!Kerfix            (-50.67N, 68.42E)       (113, 357)        (-50.70N, 68.50E)    1796
!! -- don't forget to run:  ./subset_init_forcing.sh ${i} ${j} to get the local files.
!--------------------------------------------------------------------------
   rn_lat1d    =    50.0     !  Column latitude
   rn_lon1d    =    -145.0    !  Column longitude
/
  1. You need to update rn_lat1d and rn_lon1d in this namelist. Here we use the coordinates of PAPA.
  2. You also need to update the bathymetry depth in the namusr_def namelist in namelist_cfg. To make things easier, each site's bathy depth is given with its coordinates above. Here we adapt it for the PAPA site:
    !-----------------------------------------------------------------------
    &namusr_def    !   C1D user defined namelist
    !-----------------------------------------------------------------------
       rn_bathy    =  4290.   ! depth in meters 
    /  
  3. Then you need to extract the forcings and initial conditions. In your running dir you'll find subset_init_forcing.sh. Run this script, specifying the i and j indices to extract:
    ./subset_init_forcing.sh ${i} ${j} 
    
    ## for example, to extract what we need for the PAPA site, type :
    ./subset_init_forcing.sh 142 262

There, all should be ready. Then make sure:

  • context_nemo.xml uses the correct xml files;
  • the output frequency in the file*xml files matches what you want;
  • and have one last look at the namelists. Here, everything is set up to run 10 years, with 5-day frequency outputs.

Finally, to run:

./nemo &

Once your model is running, you can see what it's doing with:

tail -f ocean.output
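If the run stops, the reason is usually reported in ocean.output. NEMO writes its error banner with spaced capitals, so you can search for it with:

grep -A 5 'E R R O R' ocean.output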

If your model fails with an XIOS error, maybe something went wrong while copying the xml files. In that case, copy the xml files from the C1D_MED_PAPA configuration.
From your running directory:

cp ../../C1D_MED_PAPA/EXPREF/*xml .

and run again :)

Extract your own location

If you want to run at a location which is not listed in namelist_cfg, you'll need to find the coordinates yourself, in a "DIY" way. For that you need the mesh_mask.nc file, which gathers all kinds of information about the grid. It can be found here: /noc/users/jpp1m13/WORKING/UKESM/MESH/eORCA1/NEMO42/mesh_mask.nc

Open it with ferret (see the last section), look at the map, and decide the (i,j) location you want to run at. Once (i,j) is known, you can get the latitude and longitude with ferret:

list NAV_LAT[i=$i, j=$j]
list NAV_LON[i=$i, j=$j]

where $i and $j are the values you've chosen.
Then, to get the sea-floor depth at that location, get the maximum vertical level with MBATHY[i=$i, j=$j], and get the depth in metres with:

list NAV_LEV[k=$MBATHY]

where $MBATHY is the vertical level. Then fill namelist_cfg with those numbers, and extract the forcings and initial conditions accordingly (see the sketch below).
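Putting these steps together, here is a minimal sketch. It uses the PAP-site indices i=272, j=258 from the table above for illustration, assumes ferret reads commands from standard input as it does interactively, and the k value is a placeholder to replace with the MBATHY number printed for your point:

module load ferret
ferret <<'EOF'
use "/noc/users/jpp1m13/WORKING/UKESM/MESH/eORCA1/NEMO42/mesh_mask.nc"
list NAV_LAT[i=272, j=258]   ! latitude of the chosen point
list NAV_LON[i=272, j=258]   ! longitude
list MBATHY[i=272, j=258]    ! deepest wet level there
list NAV_LEV[k=58]           ! its depth in metres (58 is a placeholder)
quit
EOF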

Running in high latitudes

The C1D configuration runs without the sea-ice model, presumably because the test case doesn't need it. But because of this, trying to run NEMO 1D at high latitudes will end with abnormally low temperatures.

Here are the additional steps needed to run with the ice model ON.

  1. Add the sea-ice component. For this, update cfgs/work_cfgs.txt : change our config's line to be (add the ICE component):
    C1D_MED  OCE TOP ICE
  2. Add the sea-ice cpp key. If you remember, this is done in cfgs/C1D_MED/cpp_C1D_MED.fcm : add the key_si3 key, so it looks like:
    bld::tool::fppkeys   key_xios key_linssh key_top key_si3
  3. Compile with the sea ice in: just compile the model again as previously done:
    • from your nemo directory, run:
      ./makenemo -m auto -r C1D_MED -j 16
  4. Add the ice namelists. They should already be here, but if not you should be able to find both namelist_ice_ref and namelist_ice_cfg in the cfgs/ORCA2_ICE_PISCES/EXPREF directory.
  5. Activate the SI3 model at run time: change nn_ice to 2 in namelist_cfg. If nn_ice is not in your namelist_cfg, copy it from namelist_ref and add it in (make sure to add it in the correct namelist/section).
  6. Include the ice xml files. If they are not in your running directory yet, copy file_def_nemo-ice.xml and field_def_nemo-ice.xml from the cfgs/ORCA2_ICE_PISCES/EXPREF directory, and include them in the context_nemo.xml file. It should look like this:
    <!-- Fields definition -->
        <field_definition src="./field_def_nemo-oce.xml"/>    <!--  NEMO ocean dynamics       -->
        <field_definition src="./field_def_nemo-ice.xml"/>    <!--  SI3 sea ice               -->
        <field_definition src="./field_def_nemo-medusa.xml"/> <!--  NEMO-MEDUSA ocean biology -->
    
    <!-- Files definition -->
        <file_definition src="./file_def_nemo-oce.xml"/>      <!--  NEMO ocean dynamics       -->
        <file_definition src="./file_def_nemo-ice.xml"/>      <!--  SI3 sea ice               -->
        <file_definition src="./file_def_nemo-medusa.xml"/>   <!--  NEMO-MEDUSA ocean biology -->

From there you should be ready to run 1D with the ice model activated.
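Before launching, a last quick check that the ice pieces are in place (a small sketch, run from your running directory):

grep -n 'nn_ice' namelist_cfg                      # should now be set to 2
ls field_def_nemo-ice.xml file_def_nemo-ice.xml    # ice xml files present
grep -n 'ice' context_nemo.xml                     # the context references them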

Viewing your Outputs

Your run finished successfully! You now have some netcdf outputs:
[screenshot: netcdf output files in the running directory]

  • the grid files are NEMO outputs,
  • ptrc-T files hold MEDUSA's main tracers,
  • diad-T files hold MEDUSA's diagnostics (there are a lot...!)

To see them easily, you can use ferret:

module load ferret
ferret

If ferret is not available, you can log on to another linux machine like theia (open another linux session, log on to theia with ssh -Y theia, and go to your running directory, where your outputs are). To use ferret, just:

use $file_name
show data ## to see the data
shade var[k=1,l=1]

Here I plot the surface of a 3D variable (k=1), and if there are several time records, l=1 asks for the first one. I'll show you more at the training, but if you're used to matlab or python, do not hesitate to use your own tools ;)

If you want to compare 2 outputs with ferret:
load both files, and check the data with show data. You now have 2 data sets. You can specify which dataset you refer to with [d=...], just as we did for the vertical layer or time. Here is an example:

use $file1
use $file2
show data ## to see the data
shade din[k=1,l=1, d=1] !! File1's DIN
set w 2 !! open a new window
shade din[k=1,l=1, d=2] !! File2's DIN
set w 3 !! 3rd window 
shade/lev=(-inf)(-10,10,0.5)(inf)/pal=white_centered din[k=1,l=1, d=2]-din[k=1,l=1, d=1]
!! just plot the difference, with a centred palette, setting the colorbar limits