Restoration of star-field images using high-level languages and core libraries
Research into the use of FPGAs in image processing began in earnest at the beginning of the 1990s. Since then, many thousands of publications have pointed to the computational capabilities of FPGAs. During this time, the application space to which FPGAs are suited has grown in tandem with their logic densities. When investigating a particular application, researchers compare FPGAs with alternative technologies such as Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), microprocessors and vector processors. The metrics for comparison depend on the needs of the application, and include measurements such as raw performance, power consumption, unit cost, board footprint, non-recurring engineering cost, design time and design cost. The key metrics for a particular application may also include ratios of these metrics, e.g. power/performance or performance/unit cost. The work detailed in this paper compares a 90nm-process commodity microprocessor with a platform based around a 90nm-process FPGA, focussing on design time and raw performance. The application chosen for implementation was a minimum entropy restoration of star-field images (see [1] for an introduction), with simulated annealing used to converge towards the globally optimum solution. This application was not chosen in the belief that it would particularly suit one technology over another, but was instead selected as being representative of a computationally intense image-processing application.
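As background to the objective being minimised, the entropy of an image's gray-level histogram can be sketched as follows. This is a generic illustration of a minimum-entropy criterion, not the paper's exact formulation; the function name and binning are assumptions:

```python
import math

def image_entropy(pixels, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram.

    Lower entropy indicates a sparser intensity distribution, which a
    minimum-entropy criterion exploits for star fields: a well-restored
    frame is mostly dark background with a few bright points.
    """
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

# A sharp star field (few bright pixels on black) scores lower than
# pixels smeared across many gray levels:
sharp = [0] * 62 + [255, 255]
smeared = list(range(64))
assert image_entropy(sharp) < image_entropy(smeared)
```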
A uniform Time Trade Off method for states better and worse than dead: feasibility study of the ‘lead time’ approach
The way Time Trade Off (TTO) values are elicited for states of health considered ‘worse than being dead’ has important implications for the mean values used in economic evaluation. Conventional approaches to TTO, as used in the UK’s ‘MVH’ value set, are problematic because they require fundamentally different trade-off tasks for the valuation of states better and worse than dead. This study aims to refine and test the feasibility of a new approach described by Robinson and Spencer (2006), and to explore the characteristics of the valuation data it generates. The approach introduces a ‘lead time’ into the TTO, producing a uniform procedure for generating values either >0 or <0. We used this lead time TTO to value 10 moderate to severe EQ-5D states using a sample of the general public (n=109). We conclude that the approach is feasible for use in valuation studies, and appears to overcome the discontinuity in values around 0 evident in conventional methods. However, further research is required to resolve the issue of how to handle participants who ‘use up’ all lead time; to develop ways of controlling for individual time preferences; and to better understand the implications for valuations of states better than dead.
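The arithmetic behind the lead-time approach can be sketched as follows. The 10-year lead time and 10-year state duration are illustrative assumptions (not necessarily the study's exact protocol), and the function name is hypothetical, but the mapping shows how a single task yields values both above and below 0, and why a participant who ‘uses up’ all lead time hits a floor:

```python
def lead_time_tto_value(x, lead=10.0, duration=10.0):
    """Health-state value implied by a lead-time TTO response.

    The respondent equates x years in full health with `lead` years in
    full health followed by `duration` years in the target state, so
    x = lead + v * duration, giving v = (x - lead) / duration.
    With lead == duration the scale runs from -1 (all lead time traded
    away) to +1 (no time traded), crossing 0 without changing the task.
    """
    return (x - lead) / duration

assert lead_time_tto_value(20.0) == 1.0   # trades nothing: state as good as full health
assert lead_time_tto_value(10.0) == 0.0   # state valued as equal to dead
assert lead_time_tto_value(0.0) == -1.0   # all lead time used up: floor of the scale
```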
Source-specific Fine Particulate Using Spatiotemporal Concentration Fields Developed using Chemical Transport Modelling and Data Assimilation
Does the value of quality of life depend on duration?
The aims of this study are to investigate the feasibility of eliciting Time Trade Off (TTO) valuations using short durations; to determine the effect of contrasting durations on individuals’ responses to the TTO; to examine variations within and between respondents’ values with respect to duration; and to consider the insights provided by participants’ comments and explanations regarding their reaction to duration in the valuation task. 27 participants provided TTO values using short and long durations for three EQ-5D states. Feedback was sought using a series of open-ended questions. Of the 81 opportunities to observe it, strict constant proportionality was satisfied twice. 11 participants showed no systematic relationship between duration and value; 11 provided consistently lower valuations for long durations, while 5 gave higher valuations for long durations. Comments provided by participants were consistent with the values they provided. Mean TTO values did not differ markedly between alternative durations. We conclude that it is feasible to elicit TTO values for short durations. There is considerable heterogeneity in individuals’ responses to the time frames used to elicit values. Further research is required to ensure that the values used in cost-effectiveness analysis adequately represent preferences about quality and length of life.
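Strict constant proportionality, the condition satisfied only twice in 81 opportunities here, requires that the TTO value be unchanged when only the duration of the health state changes. A minimal sketch, with hypothetical function names, assuming the standard better-than-dead formula v = x/t:

```python
def tto_value(years_traded_to, duration):
    """Conventional TTO value for a state better than dead: the
    respondent accepts `years_traded_to` years in full health as
    equivalent to `duration` years in the state, so v = x / t."""
    return years_traded_to / duration

def strict_constant_proportionality(v_short, v_long, tol=1e-9):
    """Holds only if the valuation is identical across durations."""
    return abs(v_short - v_long) <= tol

# A respondent satisfying the condition values 8 of 10 years and
# 0.8 of 1 year identically:
assert strict_constant_proportionality(tto_value(8, 10), tto_value(0.8, 1))
# A duration-dependent respondent (lower value at the long duration) does not:
assert not strict_constant_proportionality(tto_value(6, 10), tto_value(0.8, 1))
```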
Integrating visual and tactile information in the perirhinal cortex
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual–tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual–tactile) than unimodal (visual–visual or tactile–tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual–tactile integration and thus indicate a functional homology between human and monkey perirhinal cortices.
Minimum entropy restoration using FPGAs and high-level techniques
One of the greatest perceived barriers to the widespread use of FPGAs in image processing is the difficulty, for application specialists, of developing algorithms on reconfigurable hardware. Minimum entropy deconvolution (MED) techniques have been shown to be effective in the restoration of star-field images. This paper reports on an attempt to implement a MED algorithm using simulated annealing, first on a microprocessor, then on an FPGA. The FPGA implementation uses DIME-C, a C-to-gates compiler, coupled with a low-level core library to simplify the design task. Analysis of the C code and of the output from the DIME-C compiler guided the code optimisation. The paper reports on the design effort that this entailed and the resultant performance improvements.
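The simulated-annealing loop at the heart of such a restoration has a simple generic structure: accept any improving move, and accept worsening moves with a temperature-dependent probability so the search can escape local minima. The sketch below is a plain illustration of that structure applied to a toy objective; all names, parameters and the objective itself are assumptions, not the DIME-C implementation described in the paper:

```python
import math
import random

def simulated_annealing(cost, propose, x0, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing minimiser.

    Accepts any improvement outright; accepts a worse candidate with
    probability exp(-delta / T), then cools T geometrically, so the
    search is exploratory early and greedy late.
    """
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = propose(x, rng)
        cy = cost(y)
        delta = cy - c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy objective: a quadratic bowl with sinusoidal local minima.
f = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_min, c_min = simulated_annealing(f, step, x0=0.0)
```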
Blood lactate clearance after maximal exercise depends on active recovery intensity
AIM: High-intensity exercise is time-limited by the onset of fatigue, marked by accumulation of blood lactate. This is accentuated in maximal, all-out exercise, which rapidly accumulates high blood lactate. The optimal active recovery intensity for clearing lactate after such maximal, all-out exercise remains unknown. We therefore studied the intensity-dependence of lactate clearance during active recovery after maximal exercise.
METHODS: We constructed a standardized maximal, all-out treadmill exercise protocol that predictably led to voluntary exhaustion and blood lactate concentrations >10 mM. Subjects then ran a series of all-out bouts that increased blood lactate concentration to 11.5±0.2 mM, followed by recovery exercise at intensities ranging from 0% (passive) to 100% of the lactate threshold.
RESULTS: Repeated measurements showed faster lactate clearance during active versus passive recovery (P<0.01), and active recovery at 60-100% of the lactate threshold was more efficient for lactate clearance than lower-intensity recovery (P<0.05). Active recovery at 80% of the lactate threshold produced the highest rate of, and shortest time constant for, lactate clearance (P<0.05), whereas the response at the other intensities was graded (100%=60%>40%>passive recovery, P<0.05).
CONCLUSION: Active recovery after maximal all-out exercise clears accumulated blood lactate faster than passive recovery in an intensity-dependent manner, with maximum clearance occurring at active recovery of 80% of the lactate threshold.
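The ‘time constant’ reported in the results presupposes an exponential model of clearance. A minimal sketch under that assumption (the resting concentration and the parameter values below are illustrative, not the study's data):

```python
import math

def lactate_curve(t, c0, c_rest, tau):
    """Mono-exponential model of blood lactate during recovery:
    concentration decays from the post-exercise peak c0 towards the
    resting level c_rest with time constant tau (minutes). A shorter
    tau means faster clearance; the half-time is tau * ln(2)."""
    return c_rest + (c0 - c_rest) * math.exp(-t / tau)

# With the same peak (11.5 mM) and an assumed 1.0 mM resting level,
# a shorter time constant leaves less lactate at any recovery time:
assert lactate_curve(10.0, 11.5, 1.0, 5.0) < lactate_curve(10.0, 11.5, 1.0, 15.0)
```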
Protocols for TTO valuations of health states worse than dead: A literature review and framework for systematic analysis
