{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"AirSeaFluxCode is developed to provide an easy and accessible way to calculate turbulent surface fluxes (TSFs) from a small number of bulk variables and for a viariety of bulk algorithms. \n",
"\n",
"By running AirSeaFluxCode you can compare different bulk algorithms and to also investigate the effect choices within the implementation of each parameterisation have on the TSFs estimates. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Getting started\n",
"\n",
"Let's first import the basic python packages we will need for reading in our input data, to perform basic statistics and plotting"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first import all packages you might need\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import netCDF4 as nc\n",
"import numpy as np\n",
"import pandas as pd\n",
"from tabulate import tabulate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AirSeaFluxCode examples\n",
"\n",
"AirSeaFluxCode is set up to run in its default setting with a minimum number of input variables, namely wind speed; air temperature; and sea surface temperature. Let's load the code, import some real data composed for testing it (Research vessel data) and run AirSeaFluxCode with default settings (output height 10m, cool skin/warm layer corrections turned off, bulk algorithm Smith 1988, gustiness on, saturation vapour pressure following Buck (2012), tolerance limits set for both flux estimates and height adjusted variables (['all', 0.01, 0.01, 1e-05, 1e-3, 0.1, 0.1]), number of iterations are ten, non converged points are set to missing and Monin-Obukhov stability length is calculated following the ECMWF implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from AirSeaFluxCode import AirSeaFluxCode"
]
},
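{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the full argument list and the default values summarised above, you can inspect the function's signature and docstring with Python's built-in help (shown here as an optional step):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: print the AirSeaFluxCode docstring with its arguments and defaults\n",
"help(AirSeaFluxCode)"
]
},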
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inDt = pd.read_csv(\"data_all.csv\")\n",
"date = np.asarray(inDt[\"Date\"])\n",
"lon = np.asarray(inDt[\"Longitude\"])\n",
"lat = np.asarray(inDt[\"Latitude\"])\n",
"spd = np.asarray(inDt[\"Wind speed\"])\n",
"t = np.asarray(inDt[\"Air temperature\"])\n",
"sst = np.asarray(inDt[\"SST\"])\n",
"rh = np.asarray(inDt[\"RH\"])\n",
"p = np.asarray(inDt[\"P\"])\n",
"sw = np.asarray(inDt[\"Rs\"])\n",
"hu = np.asarray(inDt[\"zu\"])\n",
"ht = np.asarray(inDt[\"zt\"])\n",
"hin = np.array([hu, ht, ht])\n",
"del hu, ht, inDt\n",
"outvar = (\"tau\", \"sensible\", \"latent\", \"u10n\", \"t10n\", \"q10n\")\n",
"# run AirSeaFluxCode\n",
"res = AirSeaFluxCode(spd, t, sst, \"bulk\", meth=\"UA\", lat=lat, hin=hin, hum=[\"rh\", rh], P=p, cskin=0, Rs=sw,\n",
" tol=['all', 0.01, 0.01, 1e-05, 1e-3, 0.1, 0.1], L=\"tsrv\", out_var = outvar)\n",
"flg = res[\"flag\"]"
]
},
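{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, only wind speed, air temperature and sea surface temperature (plus the temperature type, here \"bulk\") are strictly required. The cell below is a minimal sketch of such a call, reusing the arrays loaded above and leaving every optional argument at the defaults listed earlier; it assumes those defaults are appropriate for your data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# minimal sketch: run with the default settings described in the text above\n",
"# (assumes spd, t and sst from the previous cell and that all optional\n",
"# arguments can be left at their defaults)\n",
"res_default = AirSeaFluxCode(spd, t, sst, \"bulk\")\n",
"res_default.head()"
]
},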
{
"cell_type": "markdown",
"metadata": {},
"source": [
"res is the output of AirSeaFluxCode which is a dataFrame with keys: \"tau\", \"sensible\", \"latent\", \"u10n\", \"t10n\", \"q10n\". Let's plot the flux estimates."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(3, 1, sharex=True, sharey=False)\n",
"fig.tight_layout()\n",
"ax[0].plot(res[\"tau\"], \"-\", color=\"grey\", linewidth=1, alpha = 0.8)\n",
"ax[1].plot(res[\"sensible\"], \"-\", color=\"grey\", linewidth=1, alpha = 0.8)\n",
"ax[2].plot(res[\"latent\"], \"-\", color=\"grey\", linewidth=1, alpha = 0.8)\n",
"ax[0].set_ylabel('tau (Nm$^{-2}$)')\n",
"ax[1].set_ylabel('shf (Wm$^{-2}$)')\n",
"ax[2].set_ylabel('lhf (Wm$^{-2}$)')\n",
"plt.show()"
]
},
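{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"flag\" column extracted earlier (flg) can be used to screen the estimates before further analysis. The sketch below assumes that the value \"n\" marks points with no reported issues; check the AirSeaFluxCode documentation for the full list of flag values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: use the flags to screen the flux estimates\n",
"# (assumes \"n\" denotes points without any quality or convergence issues)\n",
"ok = flg == \"n\"\n",
"print(\"{} of {} points are unflagged\".format(ok.sum(), len(flg)))\n",
"tau_ok = res[\"tau\"].where(ok)  # flagged points become NaN"
]
},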
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can save the output in a csv file"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res.to_csv(\"test_AirSeaFluxCode.csv\")"
]
},
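{
"cell_type": "markdown",
"metadata": {},
"source": [
"and read it back in later with pandas (a minimal sketch; the file name matches the one written above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: reload the saved output for later analysis\n",
"res_saved = pd.read_csv(\"test_AirSeaFluxCode.csv\", index_col=0)\n",
"res_saved.head()"
]
},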
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and generate some statistics which you can save in a txt file"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Input summary\", file=open('./stats.txt', 'a'))\n",
"print('input file name: {}, \\n method: {}, \\n gustiness: {}, \\n cskin: {},'\n",
" ' \\n tolerance: {}, \\n qmethod: {}, \\n L: {}'.format(\"data_all.csv\", \"UA\", \"on\",\n",
" 0, \"all\", \"Buck2\", \"Rb\"),\n",
" file=open('./stats.txt', 'a'))\n",
"ttl = np.asarray([\"tau \", \"shf \", \"lhf \", \"u10n \", \"t10n \", \"q10n \"])\n",
"header = [\"var\", \"mean\", \"median\", \"min\", \"max\", \"5%\", \"95%\"]\n",
"n = np.shape(res)\n",
"stats = np.copy(ttl)\n",
"a = res.iloc[:,:-1].to_numpy(dtype=\"float64\").T\n",
"stats = np.c_[stats, np.nanmean(a, axis=1)]\n",
"stats = np.c_[stats, np.nanmedian(a, axis=1)]\n",
"stats = np.c_[stats, np.nanmin(a, axis=1)]\n",
"stats = np.c_[stats, np.nanmax(a, axis=1)]\n",
"stats = np.c_[stats, np.nanpercentile(a, 5, axis=1)]\n",
"stats = np.c_[stats, np.nanpercentile(a, 95, axis=1)]\n",
"print(tabulate(stats, headers=header, tablefmt=\"github\", numalign=\"left\",\n",
" floatfmt=(\"s\", \"2.2e\", \"2.2e\", \"2.2e\", \"2.2e\", \"2.2e\", \"2.2e\")),\n",
" file=open('./stats.txt', 'a'))\n",
"print('-'*79+'\\n', file=open('./stats.txt', 'a'))\n",
"del a"
]
}
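,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same summary can also be obtained directly from the output DataFrame with pandas (a quick cross-check of the table written above; the \"flag\" column is excluded as before)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: pandas' built-in summary as a cross-check of the tabulated statistics\n",
"res.iloc[:, :-1].describe(percentiles=[0.05, 0.5, 0.95])"
]
}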
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}