Intake catalogs#

To make the DKRZ CMIP data pool more FAIR, we support the Python package intake-esm, which lets you work with collections of climate data quickly and easily.

We provide a tutorial here: https://tutorials.dkrz.de/intake.html

The official intake-esm documentation: https://intake-esm.readthedocs.io/

Features

  • display catalogs as clearly structured tables inside Jupyter notebooks for easy investigation

import intake

# Open the DKRZ CMIP6 catalog from the data pool
col = intake.open_esm_datastore("/work/ik1017/Catalogs/dkrz_cmip6_disk.json")
col.df.head()

col.esmcat.description

Features

  • browse through the catalog and select your data without being on the pool file system

⇨ A pythonic, reproducible alternative to complex find commands or GUI searches. No need to know file systems and file names.

tas = col.search(
    experiment_id="historical",
    source_id="MPI-ESM1-2-HR",
    variable_id="tas",
    table_id="Amon",
    member_id="r1i1p1f1",
)
tas

Features

  • open climate data as an analysis-ready dictionary of xarray datasets

Forget about annoying temporary merging and reformatting steps!

# Chunk along time to keep the memory footprint small
tas.to_dataset_dict(cdf_kwargs={"chunks": {"time": 1}})

Features

  • display catalogs as clearly structured tables inside Jupyter notebooks for easy investigation

  • browse through the catalog and select your data without being on the pool file system

  • open climate data as an analysis-ready dictionary of xarray datasets

intake-esm reduces the data access and data preparation effort on the analyst's side.

Catalog content#

The catalog is a combination of

  • a list of files (at DKRZ compressed as .csv.gz) in which each line contains a file path as an index plus column values that describe that file

    • The columns of the catalog should be chosen such that a dataset in the project’s data repository can be uniquely identified, i.e. all elements of the project’s Data Reference Syntax (DRS) should be covered (see the project’s documentation for more information about the DRS).

  • a .json formatted descriptor file for the list, which contains additional settings that tell intake how to interpret the data.

According to our policy, both files have the same name and are available in the same directory.
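
For illustration, here is a minimal sketch of how the two parts fit together. The catalog path is taken from the table further below; reading the list directly with pandas is only for inspection, not the intended intake workflow:

import json
import pandas as pd

# The .json descriptor and the compressed .csv.gz file list share a name
# and a directory by DKRZ policy.
with open("/pool/data/Catalogs/dkrz_cmip6_disk.json") as f:
    descriptor = json.load(f)

# "catalog_file" points to the .csv.gz list (possibly as a relative path);
# each of its rows describes one file in the data pool.
file_list = pd.read_csv(descriptor["catalog_file"])
print(file_list.columns)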

print("What is this catalog about? \n" + col.esmcat.description)
#
print("The path to the list of files: "+ col.esmcat.catalog_file)

Creation of the .csv.gz list:

  1. A file list is created by a find shell command on the project directory in the data pool.

  2. For the column values, file names and paths are parsed according to the project’s path_template and filename_template (sketched after this list). These templates are constructed from the attribute values requested and required by the project.

    • File names that cannot be parsed are sorted out.

  3. Depending on the project, additional columns can be created from the project’s specifications.

    • E.g., for CMIP6, we added an OpenDAP column which allows users to access the data from anywhere via HTTP.
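
How such template-based parsing works can be pictured with a toy example. The template below mimics the CMIP6 DRS, and parse_path is a hypothetical helper for illustration, not the script we actually run:

# Hypothetical sketch of template-based path parsing
path_template = (
    "{mip_era}/{activity_id}/{institution_id}/{source_id}/{experiment_id}/"
    "{member_id}/{table_id}/{variable_id}/{grid_label}/{version}"
)

def parse_path(path, template=path_template):
    # Map the trailing path segments onto the template attributes
    keys = [part.strip("{}") for part in template.split("/")]
    values = path.strip("/").split("/")[-len(keys):]
    return dict(zip(keys, values))

parse_path("CMIP6/CMIP/MPI-M/MPI-ESM1-2-HR/historical/r1i1p1f1/Amon/tas/gn/v20190710")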

Configuration of the .json descriptor:

Makes the catalog self-descriptive by defining all the information necessary to understand the .csv.gz file:

  • Specifications for the column headers; in the case of CMIP6, each column is linked to a Controlled Vocabulary.

col.esmcat.attributes[0]

Defines how to open the data, as analysis-ready as possible, with the underlying xarray tool:

  • which column of the .csv.gz file contains the path or link to the files

  • what the data format is

  • how to aggregate files into a dataset (see the descriptor sketch after this list):

    • which column to use as a new dimension of the xarray dataset by merging

    • which dimension to concatenate along when a file is opened

    • additional options for the open function
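
A hand-written descriptor sketch with illustrative, made-up values (not the actual DKRZ file) shows where these settings live; the keys follow the esm-collection-spec that intake-esm reads:

# Illustrative descriptor, abridged
descriptor = {
    "esmcat_version": "0.1.0",
    "id": "dkrz_cmip6_disk",
    "description": "CMIP6 data pool at DKRZ",
    "catalog_file": "dkrz_cmip6_disk.csv.gz",
    # column headers, optionally linked to a Controlled Vocabulary
    "attributes": [
        {"column_name": "source_id",
         "vocabulary": "https://raw.githubusercontent.com/WCRP-CMIP/CMIP6_CVs/master/CMIP6_source_id.json"},
    ],
    # which column holds the file paths, and the data format
    "assets": {"column_name": "path", "format": "netcdf"},
    # how to aggregate files into datasets
    "aggregation_control": {
        "variable_column_name": "variable_id",
        "groupby_attrs": ["activity_id", "source_id", "experiment_id", "table_id"],
        "aggregations": [
            # merge variables into one dataset
            {"type": "union", "attribute_name": "variable_id"},
            # concatenate the time axis across files
            {"type": "join_existing", "attribute_name": "time_range",
             "options": {"dim": "time"}},
        ],
    },
}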

Jobs we do for you#

  • We make all catalogs available under /pool/data/Catalogs/ and in the cloud

  • We create and update the content of the projects’ catalogs regularly via automatically executed scripts, so-called cron jobs. We set the update frequency so that the project’s data is refreshed sufficiently quickly.

    • The updated catalog replaces the outdated one.

    • The updated catalog is uploaded to the DKRZ Swift cloud.

    • We plan to provide a catalog that tracks data which is removed by the update.

!ls /work/ik1017/Catalogs/dkrz_*.json

import pandas as pd
#pd.options.display.max_colwidth = 100
services = pd.DataFrame.from_dict({"CMIP6" : {
    "Update Frequency" : "Daily",
    "On cloud" : "Yes", #"https://swift.dkrz.de/v1/dkrz_a44962e3ba914c309a7421573a6949a6/intake-esm/mistral-cmip6.json",
    "Path to catalog" : "/pool/data/Catalogs/dkrz_cmip6_disk.json",
    "OpenDAP" : "Yes",
    "Retraction Tracking" : "Yes",
    "Minimum required Memory" : "10GB",
}, "CMIP5": {
    "Update Frequency" : "On demand",
    "On cloud" : "Yes", #"https://swift.dkrz.de/v1/dkrz_a44962e3ba914c309a7421573a6949a6/intake-esm/mistral-cmip5.json",
    "Path to catalog" : "/pool/data/Catalogs/dkrz_cmip5_disk.json",
    "OpenDAP" : "Yes",
    "Retraction Tracking" : "",
    "Minimum required Memory" : "5GB",
}, "CORDEX": {
    "Update Frequency" : "Monthly",
    "On cloud" : "Yes", #"https://swift.dkrz.de/v1/dkrz_a44962e3ba914c309a7421573a6949a6/intake-esm/mistral-cordex.json",
    "Path to catalog" : "/pool/data/Catalogs/dkrz_cordex_disk.json",
    "OpenDAP" : "No",
    "Retraction Tracking" : "",
    "Minimum required Memory" : "5GB",
}, "ERA5": {
    "Update Frequency" : "On demand",
    "On cloud" : "Yes",
    "Path to catalog" : "/pool/data/Catalogs/dkrz_era5_disk.json",
    "OpenDAP" : "No",
    "Retraction Tracking" : "--",
    "Minimum required Memory" : "5GB",
}, "MPI-GE": {
    "Update Frequency" : "On demand",
    "On cloud" : "Yes",# "https://swift.dkrz.de/v1/dkrz_a44962e3ba914c309a7421573a6949a6/intake-esm/mistral-MPI-GE.json
    "Path to catalog" : "/pool/data/Catalogs/dkrz_mpige_disk.json",
    "OpenDAP" : "",
    "Retraction Tracking" : "--",
    "Minimum required Memory" : "No minimum",
}}, orient="index")
servicestb = services.style.set_properties(**{
    'font-size': '14pt',
})

servicestb
| Project | Update Frequency | On cloud | Path to catalog | OpenDAP | Retraction Tracking | Minimum required Memory |
|---------|------------------|----------|-----------------|---------|---------------------|-------------------------|
| CMIP6   | Daily            | Yes      | /pool/data/Catalogs/dkrz_cmip6_disk.json  | Yes |     | 10GB (Retraction Tracking: Yes) |
| CMIP5   | On demand        | Yes      | /pool/data/Catalogs/dkrz_cmip5_disk.json  | Yes |     | 5GB  |
| CORDEX  | Monthly          | Yes      | /pool/data/Catalogs/dkrz_cordex_disk.json | No  |     | 5GB  |
| ERA5    | On demand        | Yes      | /pool/data/Catalogs/dkrz_era5_disk.json   | No  | --  | 5GB  |
| MPI-GE  | On demand        | Yes      | /pool/data/Catalogs/dkrz_mpige_disk.json  |     | --  | No minimum |
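
Catalogs marked as available on the cloud can also be opened without access to the pool file system. A sketch using the Swift URL that appears commented out in the cell above:

import intake

# Open the CMIP6 catalog from the DKRZ Swift cloud instead of the local disk
cloud_col = intake.open_esm_datastore(
    "https://swift.dkrz.de/v1/dkrz_a44962e3ba914c309a7421573a6949a6/intake-esm/mistral-cmip6.json"
)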

Best practices and recommendations#

  • Intake can make your scripts reusable.

    • Instead of working with local copies or edited versions of files, always start from a globally defined catalog which everyone can access.

    • Save the catalog subset that you work on as a new catalog instead of as a subset of files (see the sketch after this list). It can be hard to find out why data is no longer included in recent catalog versions, especially if retraction tracking is not enabled.

  • Intake helps you avoid downloading data by reducing intermediate steps that would otherwise produce temporary output.

  • Check for new ingests by simply re-running your script - it will open the most recent catalog.

  • Only load datasets into xarray with to_dataset_dict using the argument cdf_kwargs={"chunks":{"time":1}}. Otherwise, the chunk sizes may let your job exceed its memory limits.
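
A minimal sketch of saving a search result as a new catalog; serialize is part of the intake-esm API, while the catalog name and target directory here are made-up examples:

# Persist the subset as a catalog of its own (.json descriptor + .csv list)
subset = col.search(source_id="MPI-ESM1-2-HR", experiment_id="historical")
subset.serialize(
    name="my_subset",         # hypothetical catalog name
    directory="./catalogs",   # hypothetical target directory
    catalog_type="file",      # write the file list as a separate .csv
)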

Technical requirements for usage#

  • Memory:

    • Depending on the project’s volume, the catalogs can be big. If you need to work with the complete catalog, you need at least 10GB of memory.

    • On jupyterhub.dkrz.de, start the notebook server with matching resources.

  • Software:

    • Intake works on the basis of xarray and pandas.

    • On jupyterhub.dkrz.de, use one of the recent kernels:

      • unstable

      • bleeding edge

Load the catalog#

import intake

# Open the first catalog from the services table above (CMIP6)
collection = intake.open_esm_datastore(services["Path to catalog"].iloc[0])

Next step#