{ "cells": [ { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## PrePARE\n", "\n", "The [PrePARE](https://cmor.llnl.gov/mydoc_cmip6_validator/) software tool is provided by [PCMDI](https://pcmdi.llnl.gov/) (Program for Climate Model Diagnosis and Intercomparison) to verify that CMIP6 files conform to the CMIP6 data protocol. The CMIP6 data protocol comprises requirements set out in different documents published by the CMIP6 WIP (Working Group on Climate Models Infrastructure Panel).\n", "\n", "- The [Data request](https://cmip6dr.github.io/Data_Request_Home/) contains variable specifications (frequency, cell methods,..)\n", "- The [Model output requirements](https://goo.gl/neswPr) specify the data format, structure and content.\n", "- The CMIP6 meta data standard is defined in [Attributes, DRS, File names, directory structure, CV](https://goo.gl/v1drZl).\n", "- All participants have to be regsitered in this [Registry for allowed models](https://github.com/WCRP-CMIP/CMIP6_CVs).\n", "\n", "These documents are translated into `.json` formatted **Controlled Vocabularies** and tables readable by PrePARE and named [cmip6-cmor-tables](https://github.com/PCMDI/cmip6-cmor-tables)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PrePARE performs [10 different tests](https://goo.gl/NmuENr) which can be summarized by the following points:\n", "\n", "1. Check for invariable and conditional **required global attributes** and valid values of those. \n", "2. Are **file names and paths** conform to the project's data reference syntax (DRS)?\n", "3. Check for required **variable attributes**.\n", "4. **Coordinates**: Some variables are requested on specific coordinates that need to be provided in the files in a compliant format." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following, we run `PrePARE` for a subset of CMIP6 pool data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preparation\n", "\n", "We will use the PrePARE binary in a shell but wrapped by this python notebook. We provide a conda environment which all levante users can use." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to use this environment as a kernel for jupyter notebooks, you can use `ipykernel` as shown in the next cell. Afterwards, reload your browser and select the new kernel for the quality assurance notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "#The following line activates the source for working in a shell.\n", "#source activate /work/bm0021/conda-envs/quality-assurance\n", "#\n", "#The following line installs a jupyter kernel for the conda environment\n", "#python -m ipykernel install --user --name $kernelname --display-name=\"$kernelname\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Per default, shells inside a kernel are **not** started from the environment of the kernel. That means, the PrePARE executable is not found:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be changed either by using a helper script for the kernel as follows. You can also do that at the top of notebooks." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "import os\n", "newpath=f\"{os.sep.join(sys.executable.split(os.sep)[:-1])}:{os.environ['PATH']}\"\n", "os.environ['PATH']=newpath\n", "pp=!which PrePARE\n", "pp=pp[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also import some useful packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# copy2 copies without errors\n", "from shutil import copy2\n", "# tqdm gives a progressbar for for loops\n", "from tqdm import tqdm\n", "import subprocess" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the data standard evolves over time, we need to find the matching version for the datasests which should be tested. For that, we need `git` to checkout the corresponding version of the data standard tables, named [cmip6-cmor-tables](https://github.com/PCMDI/cmip6-cmor-tables). You can clone the tables repository via:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import git \n", "import re\n", "# The following clones the cmip6 cmor tables if not available:\n", "working_path=\"./\"\n", "cmip6_cmor_tables_url=\"https://github.com/PCMDI/cmip6-cmor-tables.git\"\n", "if \"cmip6-cmor-tables\" not in os.listdir(working_path):\n", " git.Git(working_path).clone(cmip6_cmor_tables_url)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One table in the tables repository only contains the global attributes and no information about the variables: `CMIP6_CV.json` where 'CV' is for Controlled Vocabulary. In contrast to those tables which contain variables, only the recent version of the global attributes table is valid. This is because this file is mostly never changed but rather extended. Whenever we checkout a different version of the tables repository, we need to copy the recent global attributes CV into that version. Therefore, we copy this CV to a save place named `recentCV`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recentCV = working_path+\"CMIP6-CV-20210419.json\"\n", "copy2(working_path+\"cmip6-cmor-tables/Tables/CMIP6_CV.json\", recentCV)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Settings\n", "\n", "The following variables are important for PrePARE and will be defined:\n", "\n", "- `logChunk` will hold the results of PrePARE\n", "- `cmip6-cmor-table-path` is the directory for the input tables\n", "- `exec` is the executable which we will run in bash" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prepareSetting = {\n", " \"exec\" : pp,\n", " #\"logChunk\":\"/mnt/lustre01/work/bm0021/prepare-test/\",\n", " \"logChunk\":\"prepare-test\",\n", " \"cmip6-cmor-table-path\" : working_path+\"cmip6-cmor-tables/Tables\"\n", "}\n", "!mkdir -p {prepareSetting[\"logChunk\"]}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prepareSetting[\"exec\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Initialization\n", "\n", "We read in the dataset list, load the git repository and copy the recent Controlled Vocabulary for required attributes." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = git.Git(prepareSetting[\"cmip6-cmor-table-path\"]) \n", "g.reset(\"--hard\")\n", "g.checkout(\"master\")\n", "copy2(prepareSetting[\"cmip6-cmor-table-path\"]+\"/CMIP6_CV.json\", recentCV)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Assume, we want to test the dataset `dset_id` in directory `trunk`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trunk=\"/work/ik1017/CMIP6/data/\"\n", "dset_id=\"CMIP6.ScenarioMIP.DKRZ.MPI-ESM1-2-HR.ssp370.r1i1p1f1.Amon.tas.gn.v20190710\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to find out the data standard version used for the creation of the files which should be tested, we need to retrieve the value from the global attribute `data_specs_version` from one file of the dataset. We assign a corresponding attribute `data_specs_version` to the `dset_id` and combine it in a *dictionary*." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsets_to_test={dset_id :\n", " { \"dset_path\":trunk+'/'.join(dset_id.split('.')),\n", " \"data_specs_version\":\"\"\n", " }\n", " }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function `addSpecs` will retrieve the specs attribute by using the bash tool `ncdump -h` showing the header of a file including all attributes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def addSpecs(entry):\n", " print([os.path.join(entry[\"dset_path\"],f) \n", " for f in os.listdir(entry[\"dset_path\"]) \n", " ])\n", " try:\n", " fileinpath = [os.path.join(entry[\"dset_path\"],f) \n", " for f in os.listdir(entry[\"dset_path\"]) \n", " if os.path.isfile(\n", " os.path.join(entry[\"dset_path\"],f)\n", " )]\n", " except:\n", " return \"\"\n", "# ncdump_exec=\"/sw/rhel6-x64/netcdf/netcdf_c-4.4.1.1-gcc48/bin/ncdump\"\n", " dsv = !ncdump -h {fileinpath[0]} | grep data_specs_version | cut -d '\"' -f 2\n", " dsv = ''.join(dsv)\n", " return dsv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, we apply it for all dsets in the `dsets_to_test` dictionary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for dset, entry in dsets_to_test.items():\n", " print(dset)\n", " entry[\"data_specs_version\"] = addSpecs(entry)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Retrieving all versions of the cmip6-cmor-table repository\n", "\n", "We are using the `tags` of the version releases and reformat their values to be conform to the `data_specs_version`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tags = reversed(g.tag(\"-n\").split(\"\\n\"))\n", "tagdict = {\"data_specs_versions\":[]} \n", "for tag in tags :\n", " tl = tag.split(\" \", 1)[0]\n", " tllen = len(tl.split(\".\"))\n", " if tllen > 3 :\n", " continue\n", " dsvnumber = tl.split(\".\")[tllen-1]\n", " dsvnumber = \"\".join(filter(str.isdigit, dsvnumber))\n", " dsv = \"['01.00.\"+dsvnumber+\"']\"\n", " if dsv not in tagdict[\"data_specs_versions\"] :\n", " tagdict[\"data_specs_versions\"].append(dsv)\n", " tagdict[dsv]={\"tag_label\":tl,\n", " \"description\":tag.split(\" \",1)[1]}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(tagdict['data_specs_versions'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Application\n", "\n", "We loop over the datasets to be checked.\n", "Note that for many different datasets from different *sources*, it might be helpful to loop over different `data_specs_version`s instead so that we checkout each `cmip6-cmor-tables` repository version only once.\n", "\n", "For the `PrePARE` run itself, we define the function `checkSubset` where:\n", "\n", "- We skip datasets for which we do not have a corresponding table version\n", "- We define a unique `logPath` for each dataset we are going to test using the `logChunk`, `data_specs_version` and the `dset_id`. PrePARE is able to create own directories which we also exploit. If there is already data in it, we skip the test to avoid duplications.\n", "- Checkout the correct cmip6-cmor tables and overwrite the CV with the most recent one saved in the beginning of this script.\n", "- Run PrePARE with 8 parallel processes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def checkSubset(dset_id, dsetatts):\n", " print(dsetatts)\n", " if not \"['\"+dsetatts[\"data_specs_version\"]+\"']\" in tagdict[\"data_specs_versions\"] :\n", " print(\"No matching tag for data_specs_version {}\".format(dsetatts[\"data_specs_version\"]))\n", " return\n", " logPath=prepareSetting[\"logChunk\"]+\"/\"+dsetatts[\"data_specs_version\"].split('.')[2].split(\"'\")[0]+\"/\"+dset_id\n", " if os.path.exists(logPath) and len(os.listdir(logPath)) != 0:\n", " return\n", " tag2checkout = tagdict[\"['\"+dsetatts[\"data_specs_version\"]+\"']\"][\"tag_label\"]\n", " g.reset(\"--hard\")\n", " g.checkout(tag2checkout)\n", " copy2(recentCV, prepareSetting[\"cmip6-cmor-table-path\"]+\"/CMIP6_CV.json\")\n", " #\n", " a = subprocess.run(\"{0} -l {1} --all --table-path {2} {3}\".format(\n", " prepareSetting[\"exec\"],\n", " logPath,\n", " prepareSetting[\"cmip6-cmor-table-path\"],\n", " dsetatts[\"dset_path\"]),\n", " capture_output=True, shell=True)\n", " print(a)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!rm -r prepare-test30" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for dset_id, dsetatts in dsets_to_test.items() :\n", " checkSubset(dset_id, dsetatts)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Results\n", "\n", "As we let PrePARE write logifles for each dataset, we have to collect the results to get an overview.\n", "Each logfile start with a description of\n", "- how many files were scanned\n", "- how many files had failed\n", "\n", "Apparently, if 0 files have failed, the dataset (if we get one logfile per dataset) has passed the checks. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The subsequent lines of a logfile are less clearly structured, so we parse them for error keywords. We can distinguish between two error categories. The maximal severity of the errors, `max_severity`, is updated with every new match of an error." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Critical errors\n", "    - occur if the file name or file path does not conform to the data standard\n", "    - occur if the data structure could not be parsed\n", "    - are identified by the error keywords `filename`, `not understood` and `SKIPPED`\n", "- Minor issues\n", "    - occur if a value of a required global attribute could not be found\n", "    - are identified by the error keyword `CV FAIL`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "errorSeverity=[\"Passed\", \"Minor Issue\", \"Major Issue\"]\n", "parsedict={\"meta\": [\"filename\", \"creation_date\", \"dset_id\", \"specs_version\"],\n", "           \"filenoDict\":{\"checked\": r'files scanned: (\\d+)',\n", "                         \"failed\": r'with error\\(s\\): (\\d+)'\n", "                        },\n", "           \"errorDict\":{\"filename\": 2,\n", "                        \"Warning\" : 1,\n", "                        \"CV FAIL\" : 1,\n", "                        \"Permission denied\" : 2,\n", "                        \"not understood\" : 2,\n", "                        \"SKIPPED\" : 2},\n", "          }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We subdivide the parsing into two functions, `parse_file` and `collect_errors`. `collect_errors` is executed if `parse_file` detects failed files. As an argument, we pass not just the path of the logfile but a dictionary which will be filled with all metadata needed to assess the PrePARE results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def collect_errors(dset_entry) :\n", "    errors=[]\n", "    max_severity=0\n", "    with open(dset_entry[\"logfile_name\"]) as logfile:\n", "        for line in logfile:\n", "            for errorKeyword in parsedict[\"errorDict\"].keys() :\n", "                match = re.findall(errorKeyword, line)\n", "                if match:\n", "                    errors.append(errorKeyword)\n", "                    max_severity=max(max_severity,int(parsedict[\"errorDict\"][errorKeyword]))\n", "    dset_entry[\"errors\"]=tuple(errors)\n", "    dset_entry[\"max_severity\"]=max_severity" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def parse_file(dset_entry):\n", "    checkedFiles=[]\n", "    failedFiles=[]\n", "    with open(dset_entry[\"logfile_name\"]) as logfile:\n", "        for line in logfile:\n", "            match = re.search(parsedict[\"filenoDict\"][\"checked\"], line)\n", "            if match:\n", "                checkedFiles.append(''.join(match.group(1)))\n", "            match = re.search(parsedict[\"filenoDict\"][\"failed\"], line)\n", "            if match:\n", "                failedFiles.append(''.join(match.group(1)))\n", "    if not checkedFiles or not failedFiles :\n", "        print(dset_entry[\"logfile_name\"], checkedFiles, failedFiles)\n", "    dset_entry[\"checked\"]=int(checkedFiles[0])\n", "    dset_entry[\"failed\"]=int(failedFiles[0])\n", "    dset_entry[\"passed\"]=dset_entry[\"checked\"]-dset_entry[\"failed\"]\n", "    if dset_entry[\"failed\"] != 0 :\n", "        collect_errors(dset_entry)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We finally collect all results in a dictionary `prepare_dict` in which the `dset_id`s are the keys. For that, we loop over all logfiles."
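] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The collection below relies on PrePARE's logfile naming: the part of the name after the first hyphen and before the extension is stored as `creation_date`. A small illustration with a hypothetical file name:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustration (hypothetical name): this is the date part which the\n", "# collection loop below stores as creation_date.\n", "example_logfile_name = \"PrePARE-20210419.log\"\n", "print(example_logfile_name.split(\".\")[0].split(\"-\")[1])"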
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prepare_dict = {}\n", "specs_paths=os.listdir(prepareSetting[\"logChunk\"])\n", "for specs_path in tqdm(specs_paths):\n", " for dirpath, dirnames, logfile_names in os.walk(os.path.join(prepareSetting[\"logChunk\"], specs_path)):\n", " for logfile_name in logfile_names :\n", " dset_entry = {\"logfile_name\":os.path.join(dirpath, logfile_name),\n", " \"creation_date\":logfile_name.split(\".\")[0].split(\"-\")[1],\n", " \"dset_id\":dirpath[len(os.path.join(prepareSetting[\"logChunk\"], specs_path))+1:],\n", " \"specs_version\": \"01.00.\"+specs_path}\n", "\n", " parse_file(dset_entry)\n", " prepare_dict[dset_entry[\"dset_id\"]]=dset_entry" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(prepare_dict)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "python3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" } }, "nbformat": 4, "nbformat_minor": 4 }