The http protocol allows "random access" reading of GRIB files. This procedure requires an index file and an http
program that supports random access. The
wgrib inventory is used for the index file, and cURL is used for the random-access http program.
Both are freely
available, widely used, work on many platforms and are easily
scripted/automated/put into a cronjob. Two perl scripts are also
required, get_inv.pl and get_grib.pl, which are downloadable from
the NCEP CPC website (see "Requirements" below).
The basic format of the quick download is:
get_inv.pl INV_URL | grep FIELDS | get_grib.pl GRIB_URL OUTPUT
- INV_URL is the URL of a wgrib inventory, for example:
http://nomad.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.2008120200/gfs.t00z.pgrbf12.grib2.idx
- FIELDS is a string that selects the desired fields (wgrib compatible), for example:
":HGT:500 mb:"
- GRIB_URL is the URL of the grib file, for example:
http://nomad.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.2008120200/gfs.t00z.pgrbf12.grib2
- OUTPUT is the name of the file for the downloaded grib fields
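Putting the pieces together, a complete command looks like this (it uses the
example URLs above; the output name "out.grb" is arbitrary):

get_inv.pl http://nomad.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.2008120200/gfs.t00z.pgrbf12.grib2.idx | \
  grep ":HGT:500 mb:" | \
  get_grib.pl http://nomad.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.2008120200/gfs.t00z.pgrbf12.grib2 out.grb

This downloads only the 500 mb height field from the 12-hour GFS forecast
and saves it in out.grb.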
The "get_inv.pl INV_URL" downloads the wgrib inventory off the net and adds
a range field.
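To illustrate, an output line might look roughly like the following (the
layout is illustrative and varies with the file; the range field records
the starting and ending byte of the record within the grib file):

1:0:d=2008120200:HGT:500 mb:12 hour fcst:range=0-20000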
The "grep FIELDS" uses the grep command to select desired
fields from the inventory. Use of the "grep FIELDS" is similar to the
procedure used when using wgrib to extract fields.
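To grab several fields in one pass, egrep's alternation can be used; for
example (the field names here are illustrative):

get_inv.pl INV_URL | egrep "(:HGT:500 mb:|:TMP:850 mb:|:UGRD:200 mb:)" | get_grib.pl GRIB_URL OUTPUT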
The "get_grib.pl
GRIB_URL OUTPUT" uses the filtered inventory to select the fields
from GRIB_URL to download. The selected fields are saved in OUTPUT.
See the
wgrib home page for more information and tricks on using grep and egrep.
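As a sketch of the automation mentioned earlier, a script along the
following lines could be run from a cron job. The URL pattern, cycle, and
filenames are assumptions based on the examples above; verify them against
the live server, and make sure the two perl scripts are on the PATH.

#!/bin/sh
# Sketch: fetch the 500 mb height field from today's 00Z GFS
# 12-hour forecast. URL layout is assumed from the examples above.
DATE=`date -u +%Y%m%d`
BASE="http://nomad.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.${DATE}00"
FILE="gfs.t00z.pgrbf12.grib2"
get_inv.pl "${BASE}/${FILE}.idx" | \
  grep ":HGT:500 mb:" | \
  get_grib.pl "${BASE}/${FILE}" "hgt500_${DATE}.grb"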