
However, all siRNAs were capable of prolonging cell survival, albeit to different extents. This protective effect was most pronounced for cells transfected with the E1A siRNA. Although such cells displayed severe cytopathic effects and were already partially detached from the culture vessels, the culture viability was remarkably high (>80%) at 6 days post-infection. We repeated the experiment using lower and higher MOIs (2 TCID50/cell and 6 TCID50/cell, respectively) and obtained comparable results with a tendency towards higher and lower protection at decreased and increased MOIs, respectively (data not shown). The observed protective effect

of the E1A siRNA could not be attributed to a nonspecific general increase in cellular metabolic activity, because neither the E1A siRNA nor any of the other siRNAs altered

the viability of uninfected cells (Supplementary Fig. 6). Thus, although the E1A siRNA did not inhibit the output of infectious virus progeny as efficiently as did the DNA polymerase siRNA, it enhanced the viability of infected cells and kept them alive for a prolonged time period. In the present study, we evaluated a larger panel of potential targets and also determined the inhibitory effect of siRNAs on wild-type adenovirus. The siRNAs directed against the E1A, DNA polymerase, pTP, and IVa2 transcripts were all capable of efficiently silencing the respective genes in the course of an adenovirus infection. By contrast, although they displayed comparable silencing capacity in luciferase reporter assays, the hexon- and protease-directed siRNAs showed only a limited capacity to reduce the number of ML transcripts. This observation can be attributed to the markedly higher amounts of hexon and protease mRNAs generated

from the particularly strong MLP, in comparison with the mRNA levels of the other genes. This high number of MLP-derived late mRNAs may become even more problematic in RNAi-based attempts to inhibit adenovirus multiplication, because the virus-associated RNAs (VA-RNAs) I and II (non-coding RNAs produced in low amounts during the early stages of infection, but in vast amounts at later time points) appear to counteract RNAi. This effect is thought to be partially caused by the incorporation into and saturation of the RISC by VA-RNA subfragments, which behave like miRNAs (Andersson et al., 2005). Thus, siRNA-mediated inhibition of adenovirus gene expression during the early stages of infection may generally be more beneficial than inhibition of late-stage gene expression. In this regard, inhibition of viral DNA replication may be particularly advantageous, because a decrease in viral genome copy numbers should significantly lower VA-RNA gene copy numbers.


, 1997 and Hellsten et al., 1999) and Loktak (India) (Singh and Khundrakpam, 2011). Other lakes, such as the Boraphed Reservoir in Thailand (Mizuno and Mori, 1970), are largely turbid. Whether these lakes indeed show alternative stable states has not been proven by this review and would require further research. Model results also indicated that three lakes have habitats that are particularly suitable for macrophyte growth, mainly because of their shallowness. These are Lake Upemba (Congo), Lake Istokpoga (USA) and Lake Tathlina (Canada). Indeed,

macrophytes are abundantly present in Lake Upemba. Macrophytes are also flourishing in Lake Istokpoga. Despite great effort, removal of excess macrophytes from this lake had only a temporary effect (O’Brien and Hapgood, 2012), indicating that Lake Istokpoga conceivably has only one stable state, which is macrophyte dominated. Whether Lake Tathlina (Canada) is also macrophyte dominated is not clear because data are not available. The majority of the lakes fall outside the suggested domain in which macrophytes are possible. These large shallow lakes are expected to be prone to the size effect. This is not surprising, since they have a large fetch or depth reducing

the window of opportunity for macrophytes ( Fig. 2A, process 1). However, this contrasts with observations in the literature showing that in most of the lakes macrophytes had a chance to grow at least at some time in history ( Table 1). In some of the lakes this can be explained by natural water level fluctuations. A drop in water level restricts the surface area where size effects prevail. For example, the water level fluctuations in Lake Chad make the lake switch from a vast inland ‘sea’ in wet periods to a marshy, macrophyte-rich area in dry periods ( Leblanc et al., 2011). Additionally, in Lake Beyşehir and Lake Uluabat (both in Turkey) receding water levels made large areas suitable for macrophyte growth, whereas higher water levels prevented macrophytes from growing ( Beklioglu

et al., 2006). Water level fluctuations can thus lead to alternating responses of lakes to eutrophication: a turbid state during high water levels, a macrophyte-dominated state during extremely low water levels, and possibly alternative stable states in between ( Blindow et al., 1993 and Van Geest et al., 2005). However, fluctuating water levels are not the sole explanation of macrophyte presence in all lakes. So far, the effects of spatial heterogeneity have been ignored. If spatial heterogeneity is accounted for, as with the data of Taihu, there may well be compartments within large shallow lakes that are more sheltered or shallower and thereby suitable for macrophyte growth.


matl.). By 1804 (Rennell, 1804; see suppl. matl.), the Nasirpur course (called the Dimtadee River on the map) flowed immediately to the north of the town of Nasirpur. The map of Arrowsmith (1804; see suppl. matl.) notes that the Indus flood season over the delta was in April, May and June, two months earlier than today, possibly indicating a greater contribution from the Himalaya. Pinkerton (1811; see suppl. matl.)

states that the Indus River is navigable for 900 km upstream. Steamships continued to ply the river as a cargo transport to Attock until replaced by railways in 1862 (Aitkin, 1907). The Baghar channel (Fig. 1) began to silt up circa 1819. The Indus River then forged its main channel down its former Sattah Branch, but turned west, reaching the sea via the Ochito Branch (Fig. 1; Holmes, 1968). Through the period 1830–1865 (SDUK, 1833 and Johnston, 1861; see suppl.

matl.) the main Indus Delta channel was located along the modern Indus course, and numerous distributary channels were maintained both to the west and to the southeast (Fig. 7). On an 1833 map (SDUK, 1833; see suppl. matl.) the tide is stated as reaching 111 km inland. By 1870–1910 (Letts, 1883; see suppl. matl.), the main Indus had shifted further south and east while still maintaining flow to the western distributary channels (Fig. 7; also see Johnston and Johnston, 1897 in the suppl. matl.). By 1922 (Bartholomew, 1922; suppl. matl. and Fig. 7), the Ochito River channel was the main branch,

but this had largely been abandoned by 1944 (Fig. 7). The Indus channel is now reduced to a single thread in its delta plain, and the number of delta distributary channels decreased during the 19th century from ∼16 to 1 (Table 1 and Fig. 6). The modern delta does not receive much fluvial water or sediment. There were zero no-flow days prior to the Kotri Barrage construction in 1955. After construction (c. 1975), up to 250 no-flow days per year occur. The average annual water and sediment discharges during 1931–1954 were 107 km³ and 193 Mt, respectively. During the 1993–2003 period these rates dropped an order of magnitude, to 10 km³ and 13 Mt (Inam et al., 2007). The Indus discharge downstream of the Kotri Barrage is usually limited to only 2 months, August–September, with the sea now intruding up to 225 km into the delta (Inam et al., 2007). Abandoned Indus Delta channels have been tidally reworked all along the coast (Fig. 8 and Fig. 9). We mapped this evolution of delta channels using high-resolution imagery: (1) the 1944 topographic maps (USACE, 1944; RMS location error ±196 m), (2) the 2000 SRTM/SWDB database (see suppl. matl.; RMS error ±55 m), and (3) LANDSAT imagery from 1978, 1989, 1990, 1991 and 2000 (RMS location error between ±32 m and ±196 m). Imagery was selected to represent the same astronomic tidal stage.
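The order-of-magnitude decline quoted above can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only: it simply re-expresses the annual means cited from Inam et al. (2007) as percentages, and the variable names are ours rather than anything from the study.

```python
# Rough check of the decline in Indus water and sediment fluxes quoted above;
# the figures are the annual means cited in the text (Inam et al., 2007).
water_km3 = {"1931-1954": 107.0, "1993-2003": 10.0}    # annual water discharge (km^3)
sediment_mt = {"1931-1954": 193.0, "1993-2003": 13.0}  # annual sediment load (Mt)

for name, series in (("water", water_km3), ("sediment", sediment_mt)):
    before, after = series["1931-1954"], series["1993-2003"]
    print(f"{name}: {after / before:.1%} of the pre-barrage flux remains "
          f"({1 - after / before:.0%} reduction)")
# water: 9.3% of the pre-barrage flux remains (91% reduction)
# sediment: 6.7% of the pre-barrage flux remains (93% reduction)
```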


Such units are typically stratiform, and based upon superposition (where Upper = Younger and Lower = Older). However, at the present time, the deep, cross-cutting roots of the potential Anthropocene Series can, for practical purposes, be

effectively resolved in both time and space. Their significance can only grow in the future, as humans continue to mine the Earth to build their lives at the surface. We thank Paolo Tarolli for the invitation to speak on this topic at the European Geosciences Union, Vienna, 2013, and Jon Harbor and one anonymous referee for very useful comments on the manuscript. Simon Price is thanked for his comments. Colin Waters publishes with the permission of the Executive Director, British Geological Survey, Natural Environment Research

Council and the support of the BGS’s Engineering Geology Science area.
Fire evolved on the Earth under the direct influence of climate and the accumulation of burnable biomass at various times and spatial scales (Pausas and Keeley, 2009 and Whitlock et al., 2010). However, since humans have been using fire, fire on Earth has depended not only on climatic and biological factors, but also on the cultural background of how people manage ecosystems and fire (Goudsblom, 1992, Pyne, 1995, Bowman et al., 2011, Coughlan and Petty, 2012 and Fernandes, 2013). A number of authors, e.g., Pyne (1995), Bond et al. (2005), Pausas and Keeley (2009), Bowman et al. (2011), Coughlan and Petty (2012), Marlon et al. (2013), have been engaged in the demanding task of illustrating this synthesis, in order to track the signature of fire on global geography and human history. In this context, spatio-temporal patterns of fire and related impacts on ecosystems and landscapes are usually described

by means of the fire regime concept (Bradstock et al., 2002, Whitlock et al., 2010, Bowman et al., 2011 and McKenzie et al., 2011). A wide set of fire regime definitions exists, depending on the aspects considered, the temporal and spatial scale of analysis and the related choice of descriptors (Krebs et al., 2010). In this review we consider the fire regime as the sum of all the ecologically and socially relevant characteristics and dimensions of fire occurrence spanning human history in specific geographical areas. With this line of reasoning, special attention is paid to the ignition source (natural or anthropogenic) and, within anthropogenic fires, to the different fire handling approaches (active fire use vs. fire use prohibition) in land management. Besides the overall global variability of biomes and cultures, common evolutionary patterns of fire regimes can be detected worldwide in relation to the geographical extension and intensification of human pressure on the land (Hough, 1932, Goudsblom, 1992, Pausas and Keeley, 2009 and Bowman et al., 2011).


G.R. 1322/2006), based on the ratio between the volume of the discharge and the volume of the input rainfall ( Puppini, 1923 and Puppini, 1931). The storage method connects the delay of the discharge peak with the full capacity of the basin to accumulate the incoming rainfall volume within

the hydraulic network, and it uses as its main parameter the storage capacity per unit area of the basin ( Puppini, 1923 and Puppini, 1931). Aside from the rainfall patterns, the basin area and the capacity of the basin to retain or infiltrate part of the precipitation, the delay and dispersion between the precipitation and the transit of the outflows at the outlet are due to the variety of hydraulic paths and to the available storage volumes, which delay the flood wave ( Puppini, 1923 and Puppini,

1931). Given these premises, to quantify the effects of network changes we developed a new indicator, the Network Saturation Index (NSI), which provides a measure of how long it takes a design rainfall to saturate the available storage volume. Given a design rainfall duration and amount, we simulated a hyetograph to describe the distribution of rainfall over time. We assumed that the rainfall is homogeneous over the surface, and at every time step we computed the percentage of the storage volume that is filled by the rainfall. The NSI is then the first time step at which 100% of the available storage volume is reached (Fig. 6). The NSI rests on one basic assumption, which is also the main assumption of

the Puppini (1923, 1931) method: the synchronous and autonomous filling of the volumes stored in the network. The water does not flow along the channels (null slopes), and each storage volume is considered an independent unit that is filled only by the incoming rainfall. With reference to the mechanisms of discharge formation, the idea is that in the considered morphological and drainage conditions the water flow in the channels is entirely controlled by the pumping stations, and we assume a critical condition in which the pumps are turned off. Note that the NSI is not meant to be read as an absolute measurement, nor does it carry any modelling claim; rather, it is defined to compare situations derived from different network configurations. To compute the index, as in many drainage design approaches (Smith, 1993), we based the evaluation on synthetic rather than actual rainfall events, and we considered some Depth–Duration–Frequency (DDF) curves. A DDF curve is a graphical representation of the probability that a given average rainfall intensity will occur, and it is created from long-term rainfall records collected at a rainfall monitoring station. DDF curves are widely used to characterize the frequency of annual rainfall maxima in a geographical area (Uboldi et al., 2014). Stewart et al. (1999) reviewed practical applications of rainfall frequency estimates and estimation methods.
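A minimal sketch of how the NSI could be computed under the stated assumptions (spatially homogeneous rainfall, synchronous and independent filling of the storage volumes, no routing along the channels). The function, the constant-intensity design storm and the storage capacity value below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def network_saturation_index(hyetograph_mm, storage_capacity_mm, dt_min=5.0):
    """Return the first time (minutes) at which cumulative rainfall fills the
    available storage volume, or None if the storm never saturates it.

    hyetograph_mm       : rainfall depth per time step (mm), homogeneous in space
    storage_capacity_mm : storage capacity per unit area of the network (mm)
    dt_min              : length of one time step (minutes)
    """
    cumulative = np.cumsum(hyetograph_mm)        # rainfall accumulated so far
    filled = cumulative / storage_capacity_mm    # fraction of storage filled
    saturated = np.nonzero(filled >= 1.0)[0]     # steps where storage is 100% full
    if saturated.size == 0:
        return None
    return (saturated[0] + 1) * dt_min           # NSI: first saturating time step

# Illustrative use: a uniform 54 mm, 3 h design storm (e.g., read off a DDF curve)
# over a network that can store the equivalent of 30 mm of rainfall per unit area.
steps = 36                                       # 36 steps of 5 min = 3 h
design_storm = np.full(steps, 54.0 / steps)      # constant-intensity hyetograph
print(network_saturation_index(design_storm, storage_capacity_mm=30.0))  # -> 100.0 min
```

Running the same design storm over different network configurations (different storage capacities per unit area) and comparing the returned NSI values gives the relative comparison the index is intended for.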


Needless to say, this view of morality is strongly at odds with traditional

ethical views and common intuitions. It is also a highly demanding moral view, requiring us, on some views, to make very great personal sacrifices, such as giving most of our income to help needy strangers in distant countries (Kagan, 1989 and Singer, 1972). A great deal of recent research has focused on hypothetical moral dilemmas in which participants must decide whether to sacrifice the life of one person in order to save the lives of a greater number. In this large and growing literature, when individuals endorse this specific type of harm they are described (following Greene, Sommerville, Nystrom, Darley, & Cohen, 2001) as making utilitarian judgments; when they reject it, they are said to be making non-utilitarian (or deontological) judgments. This terminology suggests that such ‘utilitarian’ judgments express the kind of general impartial concern for the greater good that is at the heart of utilitarian ethics. This is a widely held assumption. For example, it has been argued that this research shows that utilitarian judgment

is uniquely based in deliberative processing involving a cost-benefit analysis of the act that would lead to the greatest good, while, by contrast, non-utilitarian judgment is driven by instinctual emotional aversion to causing ‘up-close-and-personal’ harm to another person ( Greene, 2008). It has even been argued that this empirical evidence about the psychological sources of utilitarian and non-utilitarian judgment can help explain the historical debate between utilitarians and their opponents ( Greene, Nystrom, Engell, Darley, & Cohen, 2004) and, more radically, even that it should lead us to adopt a utilitarian approach to ethics ( Greene, 2008 and Singer, 2005). However, as we have pointed out in earlier work, these large

theoretical claims are problematic. This is because endorsing harm in the unusual context of sacrificial dilemmas need not express anything resembling an impartial concern for the greater good (Kahane, 2014 and Kahane and Shackel, 2010). Indeed, the sacrificial dilemmas typically used in current research represent only one, rather special, context in which utilitarian considerations happen to directly conflict with non-utilitarian rules or intuitions. To be willing to sacrifice one person to save a greater number is merely to reject (or overrule) one such non-utilitarian rule. Such rejection, however, is compatible with accepting extreme non-utilitarian rules in many other contexts—rules about lying, retribution, fairness or property, to name just a few examples, not to mention non-impartial moral norms permitting us to give priority to ourselves, and to our family or compatriots, over others.


Macquarie Island is a United Nations Educational, Scientific and Cultural Organization (UNESCO) Biosphere Reserve and is World Heritage listed for its outstanding geological and natural significance (UNESCO, 2013). Macquarie Island is geologically unique as it

is entirely composed of uplifted oceanic crust (Williamson, 1988). Hence, much of the Island is composed of volcanic, sulphur-rich bedrock (primarily pillow basalts) and associated sediments (Cumpston, 1968). Since its discovery in AD 1810 it has experienced extensive and on-going environmental impacts from exploitation of its native wildlife and from deliberate and inadvertent introductions of invasive species, particularly vertebrates that have developed feral populations. Human activities were initially focused on exploiting the abundant seal and penguin populations for oil, leading to their near extinction by the end of the nineteenth century (Cumpston, 1968). During this time a number of non-indigenous animals were introduced, including cats (in the early nineteenth century as pets); rabbits (in AD 1879 as an additional human food source); and rats and mice, which were inadvertently introduced (Cumpston, 1968). Together they have had devastating

environmental impacts across the Island (PWS, 2007) including degradation of the vegetation, with resulting widespread slope instability and erosion. Secondary impacts also occurred on burrowing seabirds that require vegetation cover around their nesting sites (PWS, 2007). Rodents

have also had significant impacts, with ship rats in particular eating the eggs and chicks of burrow-nesting petrels (PWS, 2007). Therefore, the unique natural values that led to Macquarie Island’s World Heritage listing were increasingly being threatened (PWS, 2007). Since AD 1974 the focus of management of both invasive and threatened species has shifted from collection of baseline data, to integrated control, and now to the eradication of feral populations and the development of a natural environment recovery programme (Copson and Whinham, 2001). Control and/or eradication of invasive species began with attempts to control the feral cat population in AD 1975. This was followed by a cat eradication programme which began in AD 1985 and ended in AD 2000 (PWS, 2007). The control of rabbits using the Myxomatosis virus started in AD 1978–79, when the rabbit population was estimated at 150,000 ( Copson and Whinham, 2001). By the AD 1980s–1990s numbers had dropped to approximately 10% of the AD 1970 population. From AD 1999 to 2003, however, their numbers rapidly increased due to the absence of cats, successively warmer winters and growing resistance to the virus, which ceased to be deployed in AD 1999 ( PWS, 2007 and PWS, 2013). This significantly increased the damage caused by rabbits across the Island. The eradication of rabbits and other rodents is now the highest management priority (PWS, 2007).


A full review of the evidence for these impacts from throughout Polynesia is beyond the scope of this article. Here we limit our review to the archeological and paleoecological evidence for transformation—from pristine ecosystems to anthropogenic landscapes—of three representative Polynesian islands and one archipelago: Tonga, Tikopia, Mangaia, and Hawai’i. Burley et al. (2012) pinpointed the initial human colonization of Tongatapu Island, using high-precision U–Th dating, to 880–896 B.C. From this base on the largest island

of the Tongan archipelago, Lapita peoples rapidly explored and established small settlements throughout the Ha’apai and Vava’u islands to the north, and on isolated Niuatoputapu (Kirch, 1988 and Burley et al., 2001). This rapid phase of discovery and colonization is archeologically attested by small hamlet sites containing distinctive Early Eastern Lapita pottery. Excavations in these hamlet sites and in the more extensive middens that succeeded them in the Ancestral Polynesian period (marked by distinctive Polynesian Plain Ware ceramics) reveal a sequence of rapid impacts on the indigenous and endemic birds and reptiles (Pregill and Dye, 1989), including the local extinction of an iguanid lizard, megapodes, and other birds (Steadman, 2006). Burley (2007) synthesized settlement-pattern data from Tongatapu, Ha’apai,

and Vava’u to trace the steady growth of human populations, demonstrating that by the Polynesian Plainware phase (700 B.C. to A.D. 400) these islands were densely settled. The intensive dryland agricultural systems necessary to support such large populations would have transformed much of the raised limestone landscapes of these “makatea” type islands into a patchwork of managed gardens and secondary growth. Historically, native forest is restricted to very small areas on these islands, primarily on steep terrain not suitable for agriculture.

The prehistory and ecology of Tikopia, a Polynesian Outlier settled by a Lapita-pottery-making population at approximately the same time as Tongatapu (ca. 950 B.C.), were intensively studied by Kirch and Yen (1982). As in the Tongan case, the initial phase of colonization on this small island (4.6 km²) was marked by a significant impact on the island’s natural biota, including extirpation of a megapode bird, introduction of rats, pigs, dogs, and chickens, and presumably a suite of tuber, fruit, and tree crop plants. The zooarchaeological record exhibits dramatic declines in the quantities of fish, mollusks, sea turtles, and birds over the first few centuries, the result of intensive exploitation (Kirch and Yen, 1982 and Steadman et al., 1990). Pigs, which were introduced at the time of initial colonization, became a major food source during the first and early second millennia A.D., but were extirpated prior to European contact.


, 2006 and Soulières et al., 2009; Figure 3). As in the DG, chronic environmental stress impairs neurogenesis and reduces the population of newborn neurons in the olfactory bulb granule cell layer (Hitoshi et al., 2007). These findings suggest that chronic stress may also impair olfactory bulb pattern separation and odor acuity for highly similar odors. Olfactory impairments are associated with a wide range of disorders including mild cognitive impairment, Alzheimer’s disease, Parkinson’s disease, and schizophrenia. Normal aging can also both reduce OB neurogenesis and impair fine odor discrimination (Enwere et al., 2004). Although the level of olfactory bulb neurogenesis in humans is still debated, it is unclear why olfactory dysfunction would be comorbid with disorders having such diverse etiologies.

Thus, investigation of olfactory pattern separation in these disorders is warranted. Here, we propose a common role in pattern separation for adult neurogenesis in the olfactory bulb and hippocampus. Specifically, in both regions, new granule cells may modulate inhibition of principal cells either directly (OB) or via interneurons (DG), and this inhibition may contribute to pattern separation. We also propose that different levels of neurogenesis represent an adaptation to environmental changes in cognitive demands, such as those that take place with changing seasons, exposure to an enriched environment, or in response to stress and adversity. When

exaggerated, these adaptive changes may lead to pathologies associated with dysregulated pattern separation. For example, the excessive generalization observed in anxiety disorders may stem from impaired pattern separation, while the excessive attention to details seen in individuals with autism spectrum disorders may result from excessive pattern separation. Major questions remain unanswered. For example, if adult neurogenesis is such an effective strategy for promoting pattern separation, why is it not more widespread in the brain? Is neurogenesis the privilege of neural circuits devoted to encoding but not storage? Are there costs (such as erosion of memories) that preclude its inclusion in other circuits, or is adult neurogenesis in the OB and DG simply an evolutionary holdover not available to other regions (Kaslin et al., 2008)? Is the potential for neurogenesis latent in other parts of the brain? Addressing these questions will undoubtedly continue to transform our ideas regarding the regenerative potential of the adult mammalian brain. We thank Susanne Ahmari and Mazen Kheirbek for comments on the manuscript. The work was supported by NIMH Grant 5K99MH086615-02 (A.S.), NIDCD Grant R01-DC003906 (D.A.W.) and NARSAD, the New York Stem Cell Initiative (NYSTEM), and NIH Grant R01 MH068542 (R.H.).


Our experiments do not address how subsets of particular presynaptic organelles, such as individual synaptic vesicles, may be specifically targeted by mTOR-dependent axonal macroautophagy. Clues might be offered if alternative modes of vesicle recycling are identified that

could partake in or avoid endocytic compartments that might fuse with AVs (Voglmaier et al., 2006). Starvation, injury, oxidative stress, toxins, including methamphetamine, and infection by neurotropic viruses trigger autophagy in neurons, which is further associated with protein aggregate-related disorders, including Huntington’s, Parkinson’s, and Alzheimer’s diseases (Cheng et al., 2011, Koga et al., 2011, Larsen et al., 2002, Tallóczy et al., 2002 and Tooze

and Schiavo, 2008). mTOR activity is regulated by multiple endogenous pathways involved in synaptic activity and stress, including tuberous sclerosis complex, Rheb, AKT, NF1, and PTEN (Malagelada et al., 2010). Alterations in mTOR activity are associated with neuropathological conditions such as epilepsy, tuberous sclerosis, and autism. Regulation of presynaptic function by mTOR activity and macroautophagy could thus contribute to manifestations of neurological disorders. Details of the experimental procedures can be found in the Supplemental Experimental Procedures. We thank Ana Maria Cuervo, Zsolt Tallóczy, and Steven Siegelbaum for discussion. We thank Masaaki Komatsu for providing the floxed Atg7 line. This work was funded by the Udall Center of Excellence for Parkinson’s Disease Research, NIH DA07418 and DA10154, the Parkinson’s Disease Foundation, and the Picower Foundation.
Genetics is a major contributor to autism spectrum disorders. The genetic component can be transmitted or

acquired through de novo (“new”) mutation. Analysis of the de novo mutations has demonstrated a large number of potential autism target genes (Gilman et al., 2011, Levy et al., 2011, Marshall et al., 2008, Pinto et al., 2010, Sanders et al., 2011 and Sebat et al., 2007). Previously cited studies have focused on large-scale de novo copy number events, either deletions or duplications. Because such events typically span many genes, discerning which of the genes in the target region, alone or in combination, contribute to the disorder becomes a matter of educated guessing or network analysis (Gilman et al., 2011). However, with high-throughput DNA sequencing we can readily search for new mutation in single genes by comparing children to both parents. Such mutation is fairly common, on the order of a hundred new mutations per child, with only a few—on the order of one per child—falling in coding regions (Awadalla et al., 2010 and Conrad et al., 2011).
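The trio comparison described above can be illustrated with a short sketch: a child variant is retained as a candidate de novo mutation only when the alternate allele is absent from both parents. The genotype encoding, function name and toy sites below are hypothetical and greatly simplified (real pipelines also filter on read depth, genotype quality and parental coverage), so this is a sketch of the idea rather than the study's method.

```python
# Minimal sketch of trio-based de novo candidate detection; illustrative only.

def candidate_de_novo(child, mother, father):
    """Return sites where the child carries an alternate allele seen in neither parent.

    Each argument maps a site (chrom, pos) to a genotype string such as '0/0' or '0/1'.
    """
    def has_alt(gt):
        # Any allele other than reference ('0') or missing ('.') counts as alternate.
        return any(a not in ("0", ".") for a in gt.replace("|", "/").split("/"))

    return [site for site, gt in child.items()
            if has_alt(gt)
            and not has_alt(mother.get(site, "0/0"))
            and not has_alt(father.get(site, "0/0"))]

# Toy trio: the first variant is inherited from the mother, the second is a candidate
# de novo mutation because neither parent carries the alternate allele.
child  = {("chr1", 12345): "0/1", ("chr2", 67890): "0/1"}
mother = {("chr1", 12345): "0/1", ("chr2", 67890): "0/0"}
father = {("chr1", 12345): "0/0", ("chr2", 67890): "0/0"}
print(candidate_de_novo(child, mother, father))   # -> [('chr2', 67890)]
```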