Relating canopy photography with forest biodiversity

Canopy structure modulates light availability and nutrient cycling below the canopy, and thus plays a key role in driving forest diversity. However, very few studies have used digital photography to link estimated canopy attributes with forest biodiversity. Among these, studies have often focused on linking the canopy overstory to understory plant diversity (e.g., Mestre et al. 2017; Sercu et al. 2017), given the expected mutual influence between these two components (e.g., Hederová et al. 2023).

Besides plants, many other organisms are affected by canopy structure, for at least two reasons. Firstly, sessile organisms are likely influenced by light and micro-climate conditions below the canopy, which are strongly modulated by canopy structure; examples include lichens (Benítez et al. 2019) and mosses (Niinemets and Tobias 2013). But canopy structure, and its complementary gap structure, are also key attributes for mobile (and particularly flying) organisms. A recent study showed that below-canopy structure influences the mobility of moths, and thus their functional community structure (La Cava et al. 2024).

Canopy structure influences mobility traits of moths (image from La Cava et al. 2024).

In 2023, I collaborated with an Italian research group which used fisheye photography to relate canopy openness to bat and bird diversity in the Italian Alps. The study (Rigo et al. 2024) demonstrated that forest structure strongly affects the diversity of these taxa. Canopy openness, in particular, influences bird nesting behaviour.

This was one of the first studies extending the use of canopy photography to explore the taxonomic diversity of mobile animals. Future studies are planned to evaluate different canopy photographic methods and their relevance for different taxa, and to further expand the application of canopy photography and its links with multi-taxon diversity.

The article reference is: Rigo, F., Paniccia, C., Anderle, M., Chianucci, F., Obojes, N., Tappeiner, U., Hilpold, A. and Mina, M., 2024. Relating forest structural characteristics to bat and bird diversity in the Italian Alps. Forest Ecology and Management, 554, 121673. doi: https://doi.org/10.1016/j.foreco.2023.121673

LAIr: an R package to estimate LAI from NDVI

Leaf area index (LAI) can be measured in the field using either direct or indirect optical methods (for a review, see Yan et al. 2019; Chianucci 2020). However, in situ measurements are time-consuming and impractical for large areas. Proximal and remotely-sensed information offers a unique way to obtain spatially-extensive mapping of LAI, from the landscape to the global scale. While active sensors like LiDAR and SAR have recently received attention for monitoring LAI (Wang et al. 2020), the majority of applications so far consider passive optical sensing (Chianucci et al. 2016; Fang et al. 2019; Xie et al. 2019).

Passive optical methods typically derive LAI from empirical equations relating LAI to some vegetation index (VI). The Normalized Difference Vegetation Index (NDVI) is amongst the most widely used VIs in vegetation monitoring, as it is simple and can be derived from the widest array of multi-spectral sensors currently available. However, the relationship between LAI and NDVI is essentially non-linear, and sensitive to vegetation type (crop-specific), canopy conditions and density. Therefore, many conversion equations have been proposed and published in the literature, derived from applications in different regions, on different crops, and with different sensors.
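Illustratively, the published conversion equations span several functional families; the forms below are generic examples (not specific entries of the compilation), with a and b denoting crop- and sensor-specific coefficients:

$$\mathrm{LAI} = a + b\,\mathrm{NDVI}, \qquad \mathrm{LAI} = a\,\mathrm{NDVI}^{\,b}, \qquad \mathrm{LAI} = a\,e^{\,b\,\mathrm{NDVI}}$$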

The LAIr package provides a simple tool to implement the conversion formulas available in the literature, as compiled in Bajocco et al. 2022. A single function NDVI2LAI() allows selecting the most suitable formula(s) based on available vegetation and sensor attributes, and applying the conversion equation(s) to raster or numeric inputs. The next paragraphs describe the methodology and the package functioning in detail.

The LAIr package logo by N. Puletti & S. Bajocco

The package can be installed from CRAN:

install.packages('LAIr')

The LAIr package features a single NDVI2LAI() function. The function imports an input Raster* or numeric vector and selects the suitable conversion equation(s) based on a set of optional vegetation (category, type, name) or sensor (sensor name, platform, resolution) filtering parameters. If no filtering arguments are specified, the function implements all the available equations by default.

The list of all available LAI-NDVI equations has been compiled by Bajocco et al. 2022 and can be screened by typing NDVI2LAIeq, which also shows the available options for each filtering parameter.
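As a minimal usage sketch (the input values are illustrative; see ?NDVI2LAI for the full set of filtering arguments):

library(LAIr)

ndvi <- c(0.2, 0.5, 0.8)   # a numeric NDVI vector (a Raster* input works the same way)
lai  <- NDVI2LAI(ndvi)     # no filters specified: all available equations are applied

NDVI2LAIeq                 # inspect the compiled equations and the filter options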

Figure from Bajocco et al. 2022. The workflow of LAI-NDVI equations available in NDVI2LAIeq.

For more info, see Bajocco et al. 2022 and the LAIr package.

Ground and aerial methods for monitoring vegetation cover

In vegetation science, the term ‘cover’ refers to the vertical projection of vegetation area per unit ground area. This variable is unitless and ranges from 0 to 1. However, many definitions of cover exist, depending on the vegetation type, component, and measurement method considered.

This has resulted in a heterogeneous set of definitions which are not harmonized, hampering comparability among different measurements and methods.

In 2023 I collaborated on a review, led by Linyuan Li, of the many definitions available for vegetation cover.

Examples of the many definitions of vegetation cover. From Li et al. 2023.

The review harmonized the many definitions of cover and its related quantities. In addition, it described the many available instruments and methods to measure and monitor cover at spatially-extensive scales, from the ground to satellites. Finally, the study discussed the pros and cons of the various methods, and outlined the challenges and future perspectives of monitoring this key vegetation variable.

Li, L., Mu, X., Jiang, H., Chianucci, F., Hu, R., Song, W., Qi, J., Liu, S., Zhou, J., Chen, L. and Huang, H., 2023. Review of ground and aerial methods for vegetation cover fraction (fCover) and related quantities estimation: definitions, advances, challenges, and future perspectives. ISPRS Journal of Photogrammetry and Remote Sensing, 199, pp.133-156. doi: https://doi.org/10.1016/j.isprsjprs.2023.03.020

LAD: an R package to estimate leaf angle distribution (LAD) from measured leaf inclination angles

Leaf angle distribution (LAD) is an important factor for describing the optical features of vegetation canopies (Ross 1981). It influences several processes such as photosynthesis, evapotranspiration, spectral reflectance and transmittance (Vicari et al. 2019). The influence of LAD on radiation transmission is described by the leaf projection function (also known as the G-function: the projection coefficient of unit foliage area on a plane perpendicular to the viewing direction), which is used for indirect leaf area index (LAI) measurement (Ross 1981).

LAD is one of the most poorly characterized canopy parameters, due to the difficulty of measuring leaf inclination angles. Several methods and instruments have been proposed; however, their use has been generally hampered by difficulties in applying them to tall and closed canopies, poor measurement reproducibility, and high costs.

As an alternative to direct manual measurements, Ryu et al. (2010) proposed a robust and simple leveled photographic method to measure leaf inclination angles, which was proven comparable to manual clinometer measurements (Pisek et al. 2011).

My colleague Lorenzo Cesaretti and I created the R package “LAD” to calculate the leaf angle distribution (LAD) function and the G-function from leaf inclination angles measured with leveled photography or other methods. Once a reliable set of leaf inclination angle measurements is available (a minimum of 75 measurements per species is recommended by Pisek et al. 2013), two parameters μ, ν are derived by fitting a Beta distribution, which is considered a reliable function to describe the LAD.
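For reference, the two-parameter Beta distribution commonly used for LAD (e.g., Goel and Strebel 1984) is:

$$f(t) = \frac{(1-t)^{\mu-1}\, t^{\nu-1}}{B(\mu,\nu)}, \qquad t = \frac{2\theta_L}{\pi}$$

where θL is the leaf inclination angle and B(μ, ν) is the Beta function.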

In order to interpret the measured distribution, the LAD package allows comparing it against the six theoretical LADs described by de Wit (1965):

  • spherical canopies have the same relative frequency of leaf inclination angles as the surface of a sphere;
  • planophile canopies are dominated by horizontally-oriented leaves;
  • plagiophile canopies are dominated by inclined leaves;
  • erectophile canopies are dominated by vertically-oriented leaves;
  • extremophile canopies are characterized by both horizontal and vertical leaves;
  • uniform canopies are characterized by equal proportions of leaf inclination angles at any angle.

The six theoretical distributions described by de Wit (1965).

The package can be installed from CRAN:

install.packages("LAD")


The R package has two key functions:

  • fitLAD(): calculates both the LAD and the G-function from a two-parameter Beta distribution;
  • calcLAD(): calculates summary statistics, LAD, G-function and distribution type from measured leaf inclination angles.

Example:

calcLAD(Chianucci,Angle_degree,type='summary',Genus,Species)

## Joining, by = c("Genus", "Species")
## Joining, by = c("Genus", "Species")
## # A tibble: 138 x 8
##    Genus  Species         MTA    SD     N    mu    nu distribution
##  6 Acer   monspessulanum  20.1  14.8   105  4.21  1.21 planophile
##  7 Acer   negundo         59.4  17.5   114  1.68  3.26 erectophile
##  8 Acer   platanoides     26.8  16.7  1254  3.56  1.51 planophile
##  9 Acer   pseudoplatanus  33.8  17.1   102  3.44  2.07 planophile
## 10 Acer   rubrum          30.3  16.1  1001  3.98  2.03 planophile
## # ... with 128 more rows
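As a hedged sketch of the other function (the argument names are assumptions; see ?fitLAD), fitLAD() takes the two Beta parameters and returns the fitted LAD and G-function:

library(LAD)
# e.g., the Acer platanoides parameters from the table above:
fitLAD(mu = 3.56, nu = 1.51)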

The article describing the package is available at: Chianucci F., Cesaretti L. LAD: an R package to estimate leaf angle distribution from measured leaf inclination angles. bioRxiv 2022.10.28.513998; doi: https://doi.org/10.1101/2022.10.28.513998

bRaw: an R package for digital RAW canopy imagery

The bRaw logo

Digital photography is an increasingly popular tool to estimate forest canopy attributes. However, estimates of gap fraction, upon which calculations of canopy attributes are based, are sensitive to photographic exposure in upward-facing images (Macfarlane et al. 2014). Several studies indicated that camera exposure is the major source of uncertainty in indirect leaf area index estimation from canopy photography (Chianucci 2020; Yan et al. 2019). In addition, a previous study found that at least 10 different methods to determine exposure for canopy photography were used by scientists over the last two decades, hindering comparability among different studies and protocols (Beckschäfer et al. 2013).

Rather than looking for an optimal exposure from in-camera JPEG images, shooting raw has the advantages of higher radiometric resolution (bit-depth ≥ 12 bit) and a linear relationship with actual scene brightness. While several studies tested various approaches to using RAW imagery (Cescatti et al. 2007; Lang et al. 2010; Hwang et al. 2016), Macfarlane et al. (2014) found that shooting raw with one stop of underexposure and applying a linear contrast stretch yielded estimates largely insensitive to exposure, thus providing a way to standardize and optimize photographic exposure.

The bRaw package replicates the methodology proposed by Macfarlane et al. (2014). The key steps of the procedure are:

  1. read the Bayer pattern from RAW imagery;
  2. convert the raw image into a 16 bit portable grey map (‘pgm’) format;
  3. select the blue channel of the pgm;
  4. (optionally) apply a circular mask (in case of circular fisheye images);
  5. contrast stretch the image (or mask);
  6. (optionally) apply a gamma adjustment (2.2);
  7. export the 16-bit linear, enhanced blue channel as an 8-bit single-channel ‘jpeg’.

Input RAW imagery. Pixel values are non-scaled integers at 16-bit depth.

Zoomed view of the RAW imagery. The demosaiced pattern (no color interpolation) can be observed.
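Conceptually, steps 5-6 amount to the following (a minimal R illustration, not the bRaw internals):

# linear contrast stretch of a (16-bit) channel to [0, 1],
# followed by an optional gamma (2.2) encoding before 8-bit export
stretch_channel <- function(x, gamma = 2.2) {
  y <- (x - min(x)) / (max(x) - min(x))
  y ^ (1 / gamma)
}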

All the steps are performed by a single function ‘raw_blue()’:

raw_blue(filename, circ.mask=NULL, gamma.adj=TRUE, display=TRUE)

The blue-channel 8-bit JPEG image obtained after linear contrast stretching of the 16-bit RAW imagery.

The linear contrast stretch yields a high dynamic range in the histogram and results in a lighter image, as only one channel and one fourth of the raw pixels (those corresponding to the blue pixels of the Bayer pattern) are used:

Histogram of contrast-stretched blue pixels derived from 16-bit RAW imagery

The package can be installed from GitLab using devtools (Wickham et al. 2021):

# install.packages("devtools")
devtools::install_gitlab("fchianucci/bRaw")

The article describing the package is available here: https://biorxiv.org/cgi/content/short/2022.10.25.513518v1

hemispheR: an R package for fisheye canopy image analysis

A couple of months ago, I wrote a post illustrating the open-access tools available for digital canopy image analysis (you can find it here). To that end, I reviewed both the existing software and the programming-language tools to create a list of all existing free canopy image processing solutions. I was quite surprised to find that very few R solutions were available for processing canopy images, as the digital image format is well suited to being handled in R.

After that post, I decided to create my own R packages. The first one I created was coveR, which is tailored for restricted-view canopy photography (digital cover photography); I described it in another post in the blog here. Thanks to that post I made the acquaintance of Martin Macek, and together we decided to create our own R package for canopy hemispherical images.

The challenges of making a package for fisheye image analysis were twofold:

1) firstly, the available R packages dealing with hemispherical images focus on specific processing steps, such as thresholding (package ‘caiman’; Díaz et al., 2021), gap fraction inversion (‘hemiphoto2LAI’; Zhao et al., 2019), or canopy openness retrieval (‘Sky’; Bachelot, 2016). Therefore, a ‘new’ R package should provide all the steps required for processing digital hemispherical images in a single package, from importing images up to retrieving canopy attributes. This leads to the second issue…

2) compared with other canopy image methods, the processing of fisheye images is rather complex, requiring many specific steps such as setting a circular mask, performing gamma correction, correcting for lens distortion, and dividing the hemisphere into zenith and azimuth bins, among others. Such processing complexity needs to be reconciled with the requirement of a simple and flexible R tool, to make the package simple and ready-to-use.

The result of mine and Martin’s joint efforts is hemispheR. The hemispheR package uses the functionality of the ‘raster’ package (Hijmans, 2021), which ensures fast processing of images that is otherwise not possible with other image formats in the R environment. The ‘raster’ package also has the strong advantage of importing any kind of raster graphic image format (i.e., pixel matrices), including raw imagery. The package allows analysis of both circular and full-frame fisheye images. In addition, it can import either single-channel images or channel mixtures (including greenness indices), allowing their use for both upward-facing (forest canopies) and downward-facing (short canopies and crops) images.

Left: Circular fisheye image. Right: Full-frame fisheye image.

The package features the following functions, which are ordered sequentially, following the fundamental image processing steps:

  1. import_fisheye(): imports an image channel (or a channel mixture) and applies a circular mask (in the case of circular images);
  2. binarize_fisheye(): thresholds the selected image channel and returns a binary image;
  3. gapfrac_fisheye(): calculates the gap fraction for defined zenith and azimuth bins;
  4. canopy_fisheye(): infers canopy attributes from the angular distribution of gap fraction.

To complement the package, additional features allow importing circular mask parameters for a known set of camera & lens equipment (using the function camera_fisheye()), and provide a long list of fisheye lenses available to correct for lens distortion (available as list.lenses).
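A minimal end-to-end sketch (shown with default arguments; options such as the number of zenith rings and azimuth segments are set via each function’s parameters, and the file name is a hypothetical example):

library(hemispheR)

"circular_fisheye.jpg" |>
  import_fisheye() |>     # import an image channel and apply a circular mask
  binarize_fisheye() |>   # automatic thresholding to a binary image
  gapfrac_fisheye() |>    # angular gap fraction (zenith rings x azimuth segments)
  canopy_fisheye()        # canopy attributes from the angular gap fraction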

The package allows inspecting all the processing steps:

Importing images:

Example of an imported circular fisheye image using the import_fisheye() function

Classifying images:

Example of a classified image using the binarize_fisheye() function

Extracting the angular gap fraction:

Example of a circular image classified by setting 5 rings, 8 azimuth segments, and a zenith range of 0-75°

The package can be installed from CRAN by typing:

install.packages("hemispheR")

The development version of the package can be installed by typing:

devtools::install_git("https://gitlab.com/fchianucci/hemispheR")

The article describing the package is available here: https://doi.org/10.1016/j.agrformet.2023.109470

By providing a simple, transparent, and flexible image processing procedure, hemispheR supports the use of DHP for routine measurements and monitoring of forest canopy attributes. Hosting the package in a Git repository will further support its development, through either collaborative coding or forking projects.

Chianucci, F. and Macek, M., 2023. hemispheR: an R package for fisheye canopy image analysis. Agricultural and Forest Meteorology, 336, 109470.

coveR: an R Package for processing digital cover images of tree canopies

Digital Cover Photography (DCP) is an increasingly popular method for estimating tree canopy attributes like canopy cover and leaf area index (LAI). Unlike fisheye photography, where the entire canopy footprint (180° field of view – FOV) is captured inside a circle that occupies roughly half the image pixels inside the rectangular camera frame, DCP uses all the image pixels to sample a restricted canopy portion close to the zenith (30° FOV), which brings several advantages. Indeed, the high resolution and the uniform sky luminance yield few mixed pixels (Chianucci 2016), which makes DCP relatively insensitive to sky condition (Macfarlane et al. 2007a), camera exposure (Macfarlane et al. 2014), image classification (Macfarlane 2011) and actual canopy density (Chianucci 2016).

An example of a digital cover photography (DCP) image of a beech canopy

In addition, the use of a standard camera with a normal (often fixed) lens holds strong potential for widespread use of DCP, making this method implementable on many devices including smartphones (De Bei et al. 2016), micro-cameras, Raspberry Pi and other home-made sensors (Kim et al. 2019), and remote trail cameras (Chianucci et al. 2021).

Examples of implementation of DCP in smartphone APPs (left), camera-traps (middle) and Raspberry Pi modules (right).

Another strong advantage is that image acquisition and processing in DCP are simpler than in fisheye photography, with the latter requiring many complex and time-consuming steps like controlling camera exposure and gamma correction, correcting lens distortion, setting a circular mask, and dividing the image into concentric zenith rings and azimuth segments.

The main limitation in the operational use of DCP is that existing solutions to process canopy images are predominantly tailored for fisheye photography (see my previous article here), whereas open-access tools for DCP are lacking.

For this reason I created a new R package ‘coveR‘ to allow full processing of cover images in R. The package allows step-by-step analysis of cover images, while accessing the intermediate image and analysis outputs.

Illustrative workflow of coveR package. For details, see Chianucci et al. 2022.

The first version of the package contained five functions which can be used sequentially in a pipeline (see the figure above):

INPUT %>% open_blue() %>% thd_blue() %>% label_gaps() %>% extract_gap() %>% get_canopy()

An additional function canopy_raster() returns the output image derived from the gap classification.

The latest version of the package contains a single function coveR(), which performs all the processing steps, while optionally exporting the output image:

INPUT %>% coveR(export.image=TRUE)

Using the functionality of the terra package (Hijmans 2021), the full processing of each single image is very fast.

The package can be installed in R using the following lines of code:

# install.packages("devtools")
devtools::install_gitlab("fchianucci/coveR")

A similar version coveR2 was also created to be installed directly from CRAN:

install.packages("coveR2")

Compared with coveR, coveR2 lacks the EXIF-reading functionality (which can be useful for continuous camera acquisition). While existing canopy photography protocols have mostly focused on fisheye photography, the coveR package can effectively support the use of DCP in long-term forest research and monitoring programs.

The final published article in TREES is freely available here: https://t.co/X78VyagoZo

Credits: Carlotta Ferrara and Nicola Puletti contributed to developing the coveR package.

CrowNet: first project started!

CrowNet is a collaborative canopy and tree phenology monitoring network based on continuous digital canopy images from remote camera traps. The continuous camera system is based on acquiring daily images from upward-pointing camera traps using their time-lapse feature, and then inferring phenological transition stages from the annual series of canopy attributes derived from those images (details on the methodology in Chianucci et al. 2021).
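To give the flavour of the approach (a hedged sketch, not the exact method of Chianucci et al. 2021), a phenological transition date such as spring green-up can be located by fitting a logistic curve to the daily series of a canopy attribute:

# simulated spring series of daily canopy-cover estimates
set.seed(1)
doy   <- 60:180                                         # day of year
cover <- 0.2 + 0.6 / (1 + exp(-0.15 * (doy - 120))) +   # sigmoid green-up
  rnorm(length(doy), sd = 0.02)                         # observation noise

# fit a logistic curve; the inflection point d approximates the green-up date
fit <- nls(cover ~ a + b / (1 + exp(-c * (doy - d))),
           start = list(a = 0.2, b = 0.6, c = 0.1, d = 115))
coef(fit)[["d"]]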

(Left) Image acquired from a camera trap; (Middle) the thresholded image; (Right) pixels classified into large between-crown gaps (grey), small within-crown gaps (white) and canopy (black) for estimating canopy structure.

In December 2021, the first monitoring sites were established by Reparto Carabinieri Biodiversità Pratovecchio, in the framework of a project aimed at monitoring tree canopy and phenology of the most widespread mountain tree species and forests in the State Reserves of the Central Apennines, including the UNESCO primeval-forest heritage in the Sasso Fratino Integral Reserve.

Thirteen camera traps have been installed by Reparto Carabinieri Biodiversità with the following objectives:

  1. Acquire information on how tree species and forests respond to rapid climate change;
  2. Test the effectiveness of the camera monitoring system at larger spatial and temporal scales;
  3. Create a first series of continuous canopy and phenology data based on camera traps.

Example of camera trap installation in a beech forest. Photo: A. Pellegrini

The camera was mounted on a tree, oriented at the zenith and protected by a screen. Photo: M. Gonnelli.

The first stage of the project foresees the continuous acquisition of daily images from December 2021 to December 2022. Estimates from camera traps will be validated against periodic optical measurements carried out in the field. The results of the first year will be key to testing the effectiveness of the camera-trap method, and will be the first contribution to the collaborative CrowNet project.

Estimate canopy structure from terrestrial lidar intensity images

Light Detection And Ranging (LiDAR), also known as laser scanning, is an active technology which uses information from optically-directed laser beams to precisely obtain 3D information on target objects. This technology has recently experienced a relevant upsurge of interest in forest ecology, largely supported by recent advancements in terrestrial laser scanning (TLS) technology, which provides unprecedented 3D field information on trees and forests (Malhi et al. 2018).

While the basic premise for collecting a 3D point cloud is similar across TLS instruments, two distinct ranging methods have emerged: time-of-flight (TOF) and phase-shift (P-S) sensors. They differ primarily in their balance of cost and signal-to-noise ratio (SNR). Time-of-flight scanners emit laser pulses and measure the return time of the signals from intercepted targets. Phase-shift scanners emit a continuous signal, modulating frequency and amplitude to produce a unique outgoing signal.
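For reference, the two ranging principles reduce to:

$$R_{\mathrm{TOF}} = \frac{c\,\Delta t}{2} \qquad\qquad R_{\mathrm{P\text{-}S}} = \frac{c}{2}\cdot\frac{\Delta\varphi}{2\pi f}$$

where c is the speed of light, Δt is the measured pulse return time, Δφ is the measured phase shift, and f is the modulation frequency.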

Phase-shift sensors have many advantages: they are quicker, cheaper, lighter, and have lower beam divergence than TOF sensors, yielding high-resolution data. However, P-S scanners are limited by their lower SNR, lower range, and increased ranging artifacts, particularly in complex canopies (Newnham et al., 2012), which has generally limited their deployment compared with TOF scanners.

As the main limitations of phase-shift scanners affect the 3D point cloud, a new method based only on laser return intensity (LRI) was recently developed in a study led by Mirko Grotti and colleagues (Grotti et al. 2020). LRI is a raw measure of the backscattered signal recorded by the sensor, and can thus be considered an unclassified version of the point cloud, as it captures the scene ‘viewed’ by the sensor, including point and non-point (gap) data.

An example of an intensity image derived from the phase-shift FARO Focus 3D x130

Intensity-images were derived from a phase-shift FARO Focus 3D x130 laser scanner and then processed using a procedure comparable to digital hemispherical photography:

  • create a binary image of gap and canopy pixels by histogram analysis;
  • divide the image into zenith (y) and azimuth (x) bins;
  • apply a theoretical equation relating LAI to angular gap fraction (e.g., Miller’s integral, shown below).
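For the last step, a standard choice is Miller’s (1967) integral, which inverts the angular gap fraction P0(θ) into LAI:

$$\mathrm{LAI} = 2\int_0^{\pi/2} -\ln P_0(\theta)\,\cos\theta\,\sin\theta\,\mathrm{d}\theta$$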

I compiled a MATLAB routine to process these intensity images, which is available here.

The new image-based methodology has many advantages:

  • The intensity-based method is insensitive to range, as the image classification focuses on gaps, which by definition have nearly-zero intensity values;
  • The intensity-based approach removes all the issues of point-cloud filtering in P-S scan data;
  • Using intensity-only information allows accessing TLS data at its highest resolution (more than one order of magnitude higher than that of passive optical or TOF scan data).

In addition, the finest resolution achievable by TLS to sample the canopy, and a theoretical background comparable to passive optical methods, make the intensity-image approach an ideal benchmarking tool for validating indirect optical instruments and methods, avoiding the need for destructive, more time-consuming direct measurements.

Open-access tools for canopy image processing

Digital canopy photography is the cheapest, most flexible, and yet effective tool to estimate canopy attributes like Leaf Area Index (LAI), canopy cover, and the radiation regime.

The key strengths of digital canopy photography are:

  • the cheap and widespread availability of digital cameras: the off-the-shelf, general-purpose nature of digital cameras makes this technology accessible to any user.
  • the embedding of digital cameras in many devices, including smartphones (I have already discussed it in a previous article here), camera traps (see the example in the CrowNet page), phenocams (Richardson et al. 2018), and home-made solutions like Raspberry Pi (Wilkinson et al. 2021).

In addition,

  • the increasing availability of free software, programming languages, libraries and apps to process images is really supporting the use of canopy photography by non-experts and end users.

Here I report examples of free tools to process canopy photography. For smartphone applications, I have already discussed them here.

Hemispherical photography processing tools

Images obtained from a camera equipped with a fisheye lens are (and have been) traditionally the most widely used in canopy photography research. The processing of digital hemispherical images requires the following basic steps:

(a) correcting images for fisheye-lens distortion;

(b) binary classification of digital image in sky (1) and non-sky (0) pixels to calculate gap fraction (GF);

(c) subdivision of the circular image in zenith annuli and azimuth segments;

(d) application of algorithms to retrieve canopy structure from GF at each zenith and azimuth bin.

An example of a circular fisheye image acquired with the Nikon Coolpix4500 and the FC-E8 lens converter

There are many open-access solutions to perform digital hemispherical image processing.

Stand-alone (freeware) solutions for processing fisheye images include Gap Light Analyzer (GLA), developed at Simon Fraser University. The software has a long history (Frazer et al. 1999) and is still relatively widely used due to its simplicity. A main limitation of the software is that image classification (step b) is performed manually, which is a rather subjective and time-consuming option.

An alternative freeware is CAN-EYE, a really comprehensive processing tool (not limited to upward fisheye images) developed at INRAE by Marie Weiss and Frederic Baret. The thresholding is automatic, and the software allows batch processing, which is very useful for canopy photography users. The software works in the Windows operating system (OS).

Another freeware example is CIMES, developed by Jean-Michel Walter, which is a command-line program and therefore compatible with any OS. Hemisfer is another software, developed by Patrick Schleppi; it is not ‘fully’ open-access, in that the freeware version has some restrictions on the allowable input image size.

As an alternative to stand-alone software, some (open-source) image plugins have been developed for processing fisheye canopy images. An example is the Hemispherical 2.0 plugin for ImageJ, compiled by Philip Beckschäfer, which calculates gap fraction and associated statistics after binarizing fisheye images.

With reference to open R libraries, there are some packages which deserve interest:

  • The ‘hemiphot‘ library processes circular fisheye images to get estimates of canopy and light regime. No lens correction is performed, and the thresholding is manual.
  • The ‘caiman‘ library is designed to automatically threshold fisheye images, so this package could be used in combination with hemiphot to first binarize images and then retrieve canopy attributes.
  • The ‘hemiphoto2LAI‘ library implements almost all existing algorithms to retrieve LAI from gap fraction. The inputs are the gap fraction and zenith angle, not an image, so the above libraries could be used to get the input data.
  • The ‘Sky‘ library allows thresholding fisheye images and calculating openness.

N.B. it is surprising that only a few solutions are available in R to deal with traditional circular fisheye images. I suspect that R is not the most suitable environment for image processing, whereas MATLAB has a larger set of tools and computational capabilities for image matrix analysis. An example is the MATLAB-based HPEval tool, while the in-built MATLAB toolbox allows fisheye lens calibration.

I have also created my own R package called hemispheR, which is described in another post here.

An open-source Python library to process circular fisheye images is ‘CanopyGapAnalyzer‘, which is designed to threshold and then calculate canopy parameters for batches of hemispherical images.

Recently I found a quirky ‘fisheyerizing’ web-tool, inspired by a work by Andis (2021), which transforms panorama images (which can be created from smartphones using Google Street View) into circular fisheye images and then processes them. Take a look at this tool, called Canofi.

Sometimes the fisheye image is not circular, in which case we refer to full-frame fisheye images (Macfarlane et al. 2007a). These images have a reduced FOV, such that the full zenith angle range extends to the corners of the rectangular image, increasing the image resolution. As the above scripts are not tailored to this kind of image (which is indeed common), I created a script to threshold these images and calculate canopy attributes from them, which is available here.

An example of a full-frame hemispherical image. All the image pixels are used in this kind of fisheye images, increasing the number of pixels used for canopy analysis compared with circular hemispherical photography.

Cover photography processing tools

Cover photography (Macfarlane et al. 2007b) is narrow-lens canopy photography. The method was invented by Craig Macfarlane and has the key advantage of simplifying canopy image processing (indeed, no ‘circular’ handling of zenith rings and azimuth segments is required for cover images). But there are many other advantages: the method is suitable for any kind of device, including smartphones and camera traps, and it is also less sensitive to sky conditions and gamma correction than fisheye pictures.

Cover photography uses a narrow lens (e.g. 50 mm) to achieve a FOV of about 30°.

In terms of image processing, the key steps are:

  • classify the image;
  • apply an algorithm to calculate foliage cover and effective LAI from classified GF.

This means that any kind of thresholding freeware, plugin or code can be used to process these images. For example, the ImageJ plugin ‘Auto Threshold‘, which has also been translated into the R library ‘autothresholdr‘, can be used to calculate gap fraction after thresholding cover images. From GF, it is possible to calculate the complementary foliage cover, and to derive effective LAI by assuming an extinction coefficient k, using the formulas reported in Macfarlane et al. 2007b.

However, the estimation of foliage clumping, through which LAI can be derived from effective LAI, requires the further step of separating gaps into large, between-crown gaps and small, within-crown gaps. I created an R package ‘coveR‘ to process DCP images, which is described in the article here. Alternatively, there is a specific open Python library, Canopy Cover (CaCo), which performs the separation of large and small gaps in cover images (Alivernini et al. 2018). For non-practitioners of Python, a similar approach can be achieved using the ‘Analyze Particles’ plugin of ImageJ, following the same method described in Alivernini et al. 2018.
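For reference, here is a minimal R sketch of the DCP equations reported in Macfarlane et al. (2007b); the gap fractions are illustrative values, and k = 0.5 is an assumed extinction coefficient:

gap_total <- 0.30      # total gap fraction from the classified image
gap_large <- 0.12      # large, between-crown gap fraction (e.g., from CaCo or coveR)

ff    <- 1 - gap_total # foliage cover
fc    <- 1 - gap_large # crown cover
phi   <- 1 - ff / fc   # crown porosity
k     <- 0.5           # assumed extinction coefficient

Le    <- -log(gap_total) / k                        # effective LAI
L     <- -fc * log(phi) / k                         # LAI corrected for clumping
Omega <- (1 - phi) * log(1 - ff) / (ff * log(phi))  # clumping index (= Le / L)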

In short, there are currently many free options to process canopy images, and smartphone apps are likely to further expand the pool of open-source solutions for canopy photographers!