Configuration
The pipeline requires a YAML configuration file. The parameters that can be specified are described below.
Required Configuration Parameters
ps_dir: Directory path for the input processing set (MSv4).
dataset_key: Key that specifies which partition of the MSv4 to process.
swiftly_config: The SwiFTly configuration. Details are given below.
output_dir: Directory path for output files.
pixel_scale: The size of a pixel on the sky sphere, supplied as a string. Degrees ("deg"), arcseconds ("asec"), and arcminutes ("amin") are supported units. Examples: "0.01deg", "1asec", "0.05amin".
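A minimal configuration file containing only the required parameters might look like the following sketch. All paths and values are illustrative (in particular, the dataset key is a made-up placeholder); the SwiFTly configuration name is one of the examples discussed further below.

```yaml
# Illustrative values only -- substitute your own paths and keys.
ps_dir: "/data/input/my_observation.ps"
dataset_key: "xds_0"
swiftly_config: "12k[1]-n8k-384"
output_dir: "/data/output"
pixel_scale: "1asec"
```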
Optional Configuration Parameters
dask_address: Address of the Dask scheduler.
wtower_size: Size of the wtower; default 100.
gridder: Name of the gridder; 'wtowers' is currently the only supported gridder.
grid_support: Support parameter; default 8.
grid_oversampling: Oversampling parameter; default 16384.
shear_u: Shear u parameter; default 0.0.
shear_v: Shear v parameter; default 0.0.
major_cycles: Number of major cycles to use in the continuum imaging pipeline; default 1.
backward_queue_size: Maximum number of facet tasks in the SwiFTly backward queue; default 20.
forward_queue_size: Maximum number of subgrid tasks in the SwiFTly forward queue; default 20.
fracthresh: The fraction of the peak dirty-image brightness at which to stop minor cycles.
gain: The "loop gain", i.e. the fraction of the brightest pixel that is removed in each iteration.
niter: Maximum number of minor cycles to clean for.
parallel_cleaning: Boolean. If True, deconvolution is applied in parallel across facets; otherwise it is applied serially on the full image.
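To make the split between required and optional parameters concrete, the following sketch (not the pipeline's actual code) shows how a user-supplied configuration could be validated and merged with the documented defaults. The function name `resolve_config` and the overall structure are illustrative assumptions; only the parameter names and default values come from the tables above.

```python
# Documented defaults for the optional parameters (values taken from the text above).
DEFAULTS = {
    "wtower_size": 100,
    "gridder": "wtowers",
    "grid_support": 8,
    "grid_oversampling": 16384,
    "shear_u": 0.0,
    "shear_v": 0.0,
    "major_cycles": 1,
    "backward_queue_size": 20,
    "forward_queue_size": 20,
}

# Parameters that must be present in every configuration file.
REQUIRED = ["ps_dir", "dataset_key", "swiftly_config", "output_dir", "pixel_scale"]


def resolve_config(user_config):
    """Return the full configuration dict, raising if a required key is missing.

    Illustrative helper, not part of the pipeline's public API.
    """
    missing = [key for key in REQUIRED if key not in user_config]
    if missing:
        raise ValueError(f"Missing required configuration keys: {missing}")
    # User-supplied values override the defaults.
    return {**DEFAULTS, **user_config}
```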
SwiFTly Configurations
Configuration naming convention:
<image size>[<fov>]-n?<padded facet size>-<padded subgrid size>
The efficiency percentage measures communication overhead (100% would mean transferring no redundant data). If an "n" is given, it is a new-style configuration with yN_size = yP_size. This generally means the image can be covered with fewer facets, for instance:
“12k[1]-8k-384”: …, # nfacet=4², eff 60.6%
“12k[1]-n8k-384”: …, # nfacet=2², eff 66.4%
“12k[1]-n4k-384”: …, # nfacet=4², eff 57.8%
That is, whereas the old-style configuration needs 4 facets of size 8k to cover a 12k image, with a new-style configuration 2 are enough; we can therefore make facets half as big, which is much cheaper to compute. Note, however, that over-the-wire efficiency decreases a bit (not always the case, but generally true, because without image-space resampling we have less freedom in parameter choice).
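The naming convention above can be decoded mechanically. The following sketch parses a configuration name into its components; it assumes the "k" suffix denotes a factor of 1024 (so "12k" is 12288 pixels) and that the field of view is a plain number, neither of which is stated explicitly in the text.

```python
import re

# Pattern for: <image size>[<fov>]-n?<padded facet size>-<padded subgrid size>
NAME_RE = re.compile(
    r"^(?P<image>\d+k?)"             # image size, e.g. "12k"
    r"\[(?P<fov>[\d.]+)\]"           # field of view, e.g. "[1]"
    r"-(?P<new>n?)(?P<facet>\d+k?)"  # optional "n" prefix + padded facet size
    r"-(?P<subgrid>\d+k?)$"          # padded subgrid size
)


def parse_config_name(name):
    """Split a SwiFTly configuration name into its parts (illustrative helper)."""
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"Not a valid SwiFTly configuration name: {name}")

    def size(s):
        # Assumption: "k" means a factor of 1024 pixels.
        return int(s[:-1]) * 1024 if s.endswith("k") else int(s)

    return {
        "image_size": size(m["image"]),
        "fov": float(m["fov"]),
        "new_style": m["new"] == "n",  # new-style: yN_size == yP_size
        "facet_size": size(m["facet"]),
        "subgrid_size": size(m["subgrid"]),
    }
```

For example, `parse_config_name("12k[1]-n8k-384")` would report a 12288-pixel image, 8192-pixel padded facets, 384-pixel padded subgrids, and a new-style configuration.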
Configurations are grouped into "families" with a fixed N:yP_size ratio and a constant subgrid size (which generally leads to equivalent configurations with the same efficiency).