Wavelet-based encoding for HD applications
In the past decades, most research on image and video compression has focused on highly bandwidth-constrained environments. However, for high-resolution, high-quality image and video compression, as in the case of High Definition Television (HDTV) or Digital Cinema (DC), the primary constraints relate to quality and flexibility. This paper presents a comparison between scalable wavelet-based video codecs and the state of the art in single-point encoding, and it investigates the compression efficiency obtainable when exploiting temporal correlation with respect to pure intra coding.
Real-Time Rough Extraction of Foreground Objects in MPEG1,2 Compressed Video
This paper describes a new approach for extracting foreground objects from MPEG-1/2 video streams within the framework of the "rough indexing paradigm", i.e., starting from rough data obtained by only partially decoding the compressed stream. The approach uses both P-frame motion information and I-frame colour information to identify and extract foreground objects. What distinguishes it from state-of-the-art methods is, first, a robust estimation of camera motion, used for the localisation of real objects and the filtering of spurious (parasite) zones; second, a spatio-temporal filtering of the roughly segmented objects at DC resolution, performed using motion trajectories and a Gaussian-like shape characteristic function. This paradigm yields a content description in real time while maintaining a good level of detail.
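For illustration only (this is not the authors' exact algorithm), the rough-indexing idea can be sketched as follows: estimate a dominant translational camera motion from the P-frame macroblock motion vectors with a robust statistic, then mark macroblocks whose vectors deviate from it as rough foreground candidates. The function name, the median estimator, and the threshold below are assumptions.

```python
# Illustrative sketch: robust camera-motion estimate from P-frame motion
# vectors, then flag macroblocks that move differently as foreground.
from statistics import median

def rough_foreground_mask(motion_vectors, threshold=2.0):
    """motion_vectors: dict mapping (block_row, block_col) -> (dx, dy)."""
    dxs = [v[0] for v in motion_vectors.values()]
    dys = [v[1] for v in motion_vectors.values()]
    cam_dx, cam_dy = median(dxs), median(dys)   # robust camera-motion estimate
    mask = {}
    for pos, (dx, dy) in motion_vectors.items():
        # Blocks whose motion deviates from the camera motion are candidates.
        residual = ((dx - cam_dx) ** 2 + (dy - cam_dy) ** 2) ** 0.5
        mask[pos] = residual > threshold
    return (cam_dx, cam_dy), mask

# Example: a mostly panning scene with one independently moving block.
mvs = {(0, 0): (3, 0), (0, 1): (3, 0), (1, 0): (3, 1), (1, 1): (-5, 4)}
camera, mask = rough_foreground_mask(mvs)
print(camera, mask)
```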
Performance evaluation of wavelet-based HD video coding
This paper is intended as a complement to document n3954 and presents a preliminary evaluation of the coding efficiency of a scalable wavelet-based encoder. Two HD video sequences have been encoded according to test conditions derived from those used to evaluate the coding efficiency of JSVM and VidWav [4]. Assuming that exploiting temporal redundancy in video coding improves compression efficiency, the aim of this work is to further investigate the advantages and disadvantages of applying motion-compensated temporal filtering, in terms of compression gain, with respect to pure intra coding.
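As background to this comparison, the sketch below shows the simplest temporal lifting step (a Haar filter without motion compensation) on which motion-compensated temporal filtering builds: two similar frames become a temporal average and a near-zero detail band, which is cheaper to encode than two independently intra-coded frames. This toy version is ours, not the JSVM/VidWav implementation, and it ignores motion compensation entirely.

```python
# Minimal sketch of motion-compensation-free Haar temporal lifting.
# Real MCTF warps frames along motion vectors before filtering.
import numpy as np

def haar_temporal_lift(frame_a: np.ndarray, frame_b: np.ndarray):
    high = frame_b - frame_a          # prediction step: temporal detail
    low = frame_a + 0.5 * high        # update step: temporal average
    return low, high

def haar_temporal_inverse(low: np.ndarray, high: np.ndarray):
    frame_a = low - 0.5 * high
    frame_b = high + frame_a
    return frame_a, frame_b

# Two similar frames -> the high band is nearly zero and cheap to encode.
a = np.random.rand(4, 4)
b = a + 0.01 * np.random.rand(4, 4)
low, high = haar_temporal_lift(a, b)
ra, rb = haar_temporal_inverse(low, high)
assert np.allclose(a, ra) and np.allclose(b, rb)   # perfect reconstruction
```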
How to locate services optimizing redundancy: A comparative analysis of K-Covering Facility Location models
Redundancy aspects related to covering facility location problems are of extreme importance for many applications, in particular those regarding critical services. For example, in the healthcare sector, facilities such as ambulances or first-aid centers must be located robustly against unpredictable events causing disruption or congestion. In this paper, we propose different modeling tools that explicitly address coverage redundancy for the underlying service. We also evaluate, both theoretically and experimentally, the properties and behavior of the models, and compare them from a computational and managerial point of view. More precisely, starting from three classical double-covering models from the literature (BACOP1, BACOP2, and DSM), we define three parametric families of models (namely, K-BACOP1, K-BACOP2, and K-DSM) which generalize the former to any Kth coverage level of interest. The study of these generalizations allows us to derive managerial insights on location decisions at the strategic level. The CPU performance and the quality of the returned solutions are assessed through ad-hoc KPIs collected over many representative instances with different sizes and topological characteristics, and also by dynamically simulating scenarios involving possible disruption of the located facilities. Finally, a real case study concerning ambulance service in Morocco is analyzed. The results show that, in general, K-BACOP1 performs very well, even if intrinsic feasibility issues limit its broad applicability. In contrast, K-DSM achieves the best coverage and equity performance for lower levels of redundancy, while K-BACOP2 seems the most robust choice when high redundancy is required, showing smoother and more predictable trends.
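To make the notion of a Kth coverage level concrete, the following brute-force sketch (ours, not the K-BACOP/K-DSM mixed-integer formulations evaluated in the paper) selects p facility sites so as to maximize the number of demand points covered by at least K open facilities within a coverage radius. The distance metric, instance data, and names are illustrative assumptions; real instances would use a MILP solver.

```python
# Illustrative brute-force K-coverage location sketch for tiny instances.
from itertools import combinations

def best_k_cover(demand_pts, sites, radius, p, K):
    # Coverage sets: which demand points each candidate site can serve.
    covers = {s: {d for d in demand_pts
                  if abs(d[0] - s[0]) + abs(d[1] - s[1]) <= radius}
              for s in sites}                      # Manhattan coverage sets
    best_value, best_sites = -1, None
    for chosen in combinations(sites, p):
        # Count demand points covered by at least K of the chosen sites.
        k_covered = sum(
            1 for d in demand_pts
            if sum(d in covers[s] for s in chosen) >= K)
        if k_covered > best_value:
            best_value, best_sites = k_covered, chosen
    return best_sites, best_value

demand = [(0, 0), (2, 1), (4, 4), (5, 0)]
candidate_sites = [(0, 1), (1, 0), (3, 3), (4, 1), (5, 5)]
print(best_k_cover(demand, candidate_sites, radius=2, p=3, K=2))
```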
The multi-stage dynamic stochastic decision process with unknown distribution of the random utilities
We consider a decision maker who performs a stochastic decision process over multiple stages, where the choice alternatives are characterized by random utilities with unknown probability distribution. The decisions are nested, i.e., the decision taken at each stage is affected by the decisions of the subsequent stages. The problem consists in maximizing the total expected utility of the overall multi-stage stochastic dynamic decision process. By means of results from extreme value theory, the probability distribution of the total maximum utility is derived and its expected value is found. This value is proportional to the logarithm of the accessibility of the decision maker to the overall set of alternatives in the different stages at the start of the decision process. It is also shown that the choice probabilities of selecting the alternatives follow a Nested Multinomial Logit model.
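The expected-value statement above matches the standard logsum formula of random utility theory; the display below restates that textbook result (with i.i.d. Gumbel disturbances of scale μ) and is not reproduced from the paper.

```latex
% Standard extreme-value result consistent with the abstract: with utilities
% U_j = V_j + eps_j and i.i.d. Gumbel errors of scale mu, the expected maximum
% utility is the "logsum", i.e. proportional to the log of the accessibility.
\[
  \mathbb{E}\Big[\max_{j \in C} (V_j + \varepsilon_j)\Big]
  \;=\; \frac{1}{\mu} \ln \sum_{j \in C} e^{\mu V_j} + \frac{\gamma}{\mu},
  \qquad
  P(i \mid C) \;=\; \frac{e^{\mu V_i}}{\sum_{j \in C} e^{\mu V_j}},
\]
```

Here γ is the Euler–Mascheroni constant; propagating the logsum of each stage into the utilities of the previous stage yields the Nested Multinomial Logit structure mentioned in the abstract.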
Smart Steaming: A New Flexible Paradigm for Synchromodal Logistics
Slow steaming, i.e., the practice of operating vessels at a significantly slower speed than their nominal one, has been widely studied and implemented to improve the sustainability of long-haul supply chains. However, to create an efficient symbiosis with the paradigm of synchromodality, an evolution of slow steaming, called smart steaming, is introduced. Smart steaming consists in defining a medium-speed execution of shipping movements and the real-time adjustment (acceleration and deceleration) of traveling speeds to pursue the overall efficiency and sustainability of the entire logistic system. For instance, congestion at handling facilities (intermodal hubs, ports, and rail stations) is often caused by the common wish to arrive as soon as possible. Smart steaming would therefore help avoid bottlenecks, allowing better synchronization and decreasing waiting times at ports and handling facilities. This work aims to discuss the strict relationship between smart steaming and synchromodality and to show the potential impact of moving from slow steaming to smart steaming in terms of sustainability and efficiency. Moreover, we propose an analysis considering the pros, cons, opportunities, and risks of managing operations under this new policy.
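As a purely illustrative example of the real-time speed adjustment described above (ours, not the paper's model), a vessel can recompute the speed needed to reach its assigned handling slot just in time instead of sailing at full speed and queuing at anchorage; all names and figures below are assumptions.

```python
# Toy smart-steaming speed adjustment: arrive at the assigned slot, not ASAP.
def smart_steaming_speed(distance_nm, hours_to_slot,
                         min_knots=10.0, max_knots=24.0):
    """Speed (knots) needed to reach the port exactly at the assigned slot,
    clamped to the vessel's feasible operating range."""
    if hours_to_slot <= 0:
        return max_knots          # already late: sail as fast as allowed
    required = distance_nm / hours_to_slot
    return min(max(required, min_knots), max_knots)

# A vessel 300 nm out whose berth slot is pushed back from 14 h to 20 h
# from now can slow from ~21 kn to 15 kn, saving fuel and avoiding waiting.
print(smart_steaming_speed(300, 14))   # ~21.4 kn
print(smart_steaming_speed(300, 20))   # 15.0 kn
```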
Synchromodal logistics: An overview of critical success factors, enabling technologies, and open research issues
As supply chain management is becoming demand driven, logistics service providers need to use real-time information efficiently and integrate new technologies into their business. Synchromodal logistics has emerged recently to improve flexibility in supply chains, cooperation among stakeholders, and utilization of resources. We survey the existing scientific literature and real-life developments on synchromodality. We focus on the critical success factors of synchromodality and six categories of enabling technologies. We identify open research issues and propose the introduction of a new stakeholder, which takes on the role of orchestrator to coordinate and provide services through a technology-based platform.
