Bounds on the Capacity of the Relay Channel with Noncausal State Information at Source
We consider a three-terminal state-dependent relay channel with the channel
state available non-causally at only the source. Such a model may be of
interest for node cooperation in the framework of cognition, i.e.,
collaborative signal transmission involving cognitive and non-cognitive radios.
We study the capacity of this communication model. One principal problem in
this setup is caused by the relay's not knowing the channel state. In the
discrete memoryless (DM) case, we establish lower bounds on channel capacity.
For the Gaussian case, we derive lower and upper bounds on the channel
capacity. The upper bound is strictly better than the cut-set upper bound. We
show that one of the developed lower bounds comes close to the upper bound,
asymptotically, for certain ranges of rates.
Comment: 5 pages, submitted to the 2010 IEEE International Symposium on
Information Theory
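For orientation, one common way to write the Gaussian instance of such a state-dependent relay channel is sketched below; the notation is assumed here and the paper's exact parameterization may differ.

```latex
% Memoryless Gaussian relay channel with an additive state S known
% noncausally at the source only (assumed formulation, standard notation).
\begin{align}
  Y_R &= X + S + Z_R     && \text{(signal received at the relay)} \\
  Y_D &= X + X_R + S + Z_D && \text{(signal received at the destination)}
\end{align}
% X: source input, X_R: relay input, S ~ N(0, Q): channel state,
% Z_R, Z_D: independent Gaussian noises; the source knows the state
% sequence before transmission, while the relay and destination do not.
```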
On Link Estimation in Dense RPL Deployments
The Internet of Things vision foresees billions of
devices connecting the physical world to the digital world. Sensing
applications such as structural health monitoring, surveillance or
smart buildings employ multi-hop wireless networks with high
density to attain sufficient area coverage. Such applications need
networking stacks and routing protocols that can scale with
network size and density while remaining energy-efficient and
lightweight. To this end, the IETF RoLL working group has
designed the IPv6 Routing Protocol for Low-Power and Lossy
Networks (RPL). This paper discusses the problems of link quality
estimation and neighbor management policies when it comes
to handling high densities. We implement and evaluate different
neighbor management policies and link probing techniques in
Contiki’s RPL implementation. We report on our experience
with a 100-node testbed with an average node degree of 40. We show
the sensitivity of high-density routing to cache sizes
and routing metric initialization. Finally, we devise guidelines for the
design and implementation of density-scalable routing protocols.
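As a toy illustration of the kind of neighbor management policy discussed above, the sketch below keeps a bounded neighbor table and evicts the entry with the worst link estimate when the table is full. This is not Contiki's actual RPL neighbor table code; the data structure, ETX-based eviction rule, and capacity are illustrative assumptions.

```python
# Minimal sketch of a bounded neighbor table with ETX-based eviction.
# Not Contiki code: structure and policy are illustrative assumptions.

class NeighborTable:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.neighbors = {}  # node_id -> estimated ETX (lower is better)

    def update(self, node_id, etx):
        """Insert or refresh a neighbor; evict the worst entry if full."""
        if node_id in self.neighbors or len(self.neighbors) < self.capacity:
            self.neighbors[node_id] = etx
            return True
        # Table full: only admit the new neighbor if it beats the worst one.
        worst_id = max(self.neighbors, key=self.neighbors.get)
        if etx < self.neighbors[worst_id]:
            del self.neighbors[worst_id]
            self.neighbors[node_id] = etx
            return True
        return False  # new neighbor rejected

    def preferred_parent(self):
        """Return the neighbor with the best (lowest) ETX, if any."""
        return min(self.neighbors, key=self.neighbors.get) if self.neighbors else None


# In a dense deployment, many candidate parents compete for few table slots.
table = NeighborTable(capacity=4)
for node, etx in [(1, 2.0), (2, 1.5), (3, 3.2), (4, 1.1), (5, 2.5), (6, 1.0)]:
    table.update(node, etx)
print(table.preferred_parent())  # -> 6
```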
Generalized Inpainting Method for Hyperspectral Image Acquisition
A recently designed hyperspectral imaging device enables multiplexed
acquisition of an entire data volume in a single snapshot thanks to
monolithically-integrated spectral filters. Such an agile imaging technique
comes at the cost of a reduced spatial resolution and the need for a
demosaicing procedure on its interleaved data. In this work, we address both
issues and propose an approach inspired by recent developments in compressed
sensing and analysis sparse models. We formulate our superresolution and
demosaicing task as a 3-D generalized inpainting problem. Interestingly, the
target spatial resolution can be adjusted to mitigate the compression level
of our sensing. The reconstruction procedure uses a fast greedy method called
Pseudo-inverse IHT. We also show through simulations that a random arrangement of
the spectral filters on the sensor is preferable to a regular mosaic layout as it
improves the quality of the reconstruction. The efficiency of our technique is
demonstrated through numerical experiments on both synthetic and real data as
acquired by the snapshot imager.
Comment: Keywords: Hyperspectral, inpainting, iterative hard thresholding,
sparse models, CMOS, Fabry-Pérot
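As a rough sketch of the reconstruction idea, plain iterative hard thresholding alternates a gradient step on the data-fit term with hard thresholding onto the s largest coefficients. This is only a generic synthesis-sparse illustration; the paper's Pseudo-inverse IHT and its analysis-sparse, 3-D inpainting formulation differ in the details.

```python
# Generic iterative hard thresholding (IHT) for y = A x + noise, x assumed s-sparse.
# Illustrative only: the paper's Pseudo-inverse IHT operates on an analysis-sparse,
# 3-D inpainting formulation that is not reproduced here.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, s, step=0.5, n_iter=300):
    """Alternate a gradient step on ||y - A x||^2 with hard thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

# Tiny demo: recover a 5-sparse vector from 100 random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = iht(A, A @ x_true, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```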
Multiaccess Channels with State Known to One Encoder: Another Case of Degraded Message Sets
We consider a two-user state-dependent multiaccess channel in which only one
of the encoders is informed, non-causally, of the channel states. Two
independent messages are transmitted: a common message transmitted by both the
informed and uninformed encoders, and an individual message transmitted by only
the uninformed encoder. We derive inner and outer bounds on the capacity region
of this model in the discrete memoryless case as well as the Gaussian case.
Further, we show that the bounds for the Gaussian case are tight in some
special cases.
Comment: 5 pages, Proc. of IEEE International Symposium on Information Theory,
ISIT 2009, Seoul, Korea
Improved quality of experience of reconstructed H.264/AVC encoded video sequences through robust pixel domain error detection
Systems transmitting H.264/AVC encoded sequences over noisy wireless channels generally adopt the error detection capabilities of the transport protocol to identify and discard corrupted slices. All the macroblocks (MBs) within each corrupted slice are then concealed. This paper presents an algorithm that does not discard the corrupted slices but instead tries to detect those MBs which produce major visual artefacts, and conceals only these MBs. Results show that the proposed solution, based on a set of image-level features and two Support Vector Machines (SVMs), manages to detect 94.6% of those artefacts. Gains in Peak Signal-to-Noise Ratio (PSNR) of up to 5.74 dB have been obtained when compared to the standard H.264/AVC decoder.
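A minimal sketch of the classification step described above is given below; the image-level features, synthetic training data, and single-SVM arrangement are invented for illustration and do not reproduce the paper's actual feature set or its two-SVM design.

```python
# Illustrative sketch: train an SVM to flag macroblocks whose decoded pixels
# look visually corrupted, based on simple image-level features.
# Feature choices and training data here are assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mb_features(mb):
    """Toy image-level features for a 16x16 luma macroblock (values 0..255)."""
    gx = np.abs(np.diff(mb, axis=1)).mean()   # horizontal gradient energy
    gy = np.abs(np.diff(mb, axis=0)).mean()   # vertical gradient energy
    return np.array([mb.mean(), mb.std(), gx, gy])

# Synthetic training data: "clean" smooth blocks vs. "corrupted" noisy blocks.
rng = np.random.default_rng(0)
clean = [np.full((16, 16), rng.uniform(0, 255)) + rng.normal(0, 2, (16, 16))
         for _ in range(200)]
corrupt = [rng.uniform(0, 255, (16, 16)) for _ in range(200)]
X = np.array([mb_features(mb) for mb in clean + corrupt])
y = np.array([0] * 200 + [1] * 200)   # 1 = visible artefact, conceal this MB

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# At the decoder, only macroblocks flagged as artefacts would be concealed.
test_mb = rng.uniform(0, 255, (16, 16))
print(clf.predict([mb_features(test_mb)]))  # -> [1]
```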
Resilient video coding using difference expansion and histogram modification
Recent advances in multimedia technology have paved the way for the development of several applications, including digital TV broadcasting, mobile TV, mobile gaming and telemedicine. Nonetheless, real-time multimedia services still pose challenges, as reliable delivery of the content cannot be guaranteed. The video compression standards incorporate error resilient mechanisms to mitigate this effect. However, these methods assume a packet-loss scenario, where corrupted slices are dropped and concealed by the decoder. This paper presents the application of reversible watermarking techniques to facilitate the detection of corrupted macroblocks. A variable checksum is embedded within the coefficient levels and motion vectors, which is then used by the decoder to detect corrupted macroblocks, which are subsequently concealed. The proposed method employs difference expansion to protect the level values, while histogram modification is employed to protect the motion vectors. Unlike previously published work by the same author, this scheme does not need the transmission of side information to aid the recovery of the original level and motion vector values. Simulation results indicate that significant gains in performance can be achieved over the H.264/AVC standard.
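To illustrate the reversible embedding primitive mentioned above, the sketch below shows the classic difference expansion of a pair of integer values in its textbook form. How the paper maps this onto H.264 coefficient levels, how the checksum bits are derived, and how overflow is handled are not shown here.

```python
# Classic difference expansion on an integer pair (x, y): embeds one bit
# reversibly, so the decoder can recover the original values exactly.
# Overflow handling and the mapping onto H.264 coefficient levels are omitted.

def de_embed(x, y, bit):
    """Embed one bit into the pair (x, y) by expanding their difference."""
    l = (x + y) // 2          # integer average (preserved by the transform)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1  # arithmetic shift = floor division by 2
    return bit, l + (h + 1) // 2, l - h // 2

x2, y2 = de_embed(10, 7, 1)
print(de_extract(x2, y2))  # -> (1, 10, 7)
```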
A statistical bit error generator for emulation of complex forward error correction schemes
Forward error correction (FEC) schemes are generally used in wireless communication systems to maintain an acceptable quality of service. Various models have been proposed in the literature to predict the end-to-end quality of wireless video systems. However, most of these models utilize simplistic error generators which do not accurately represent any practical wireless channel. A more accurate way is to evaluate the quality of a video system using Monte Carlo techniques. However, these necessitate huge computation times, making such methods impractical. This paper proposes an alternative method that can be used to model complex communication systems with minimal computation time. The proposed three random variable method was used to model two FEC schemes adopted by the digital video broadcasting (DVB) standard. Simulation results confirm that this method closely matches the performance of the considered communication systems in both bit error rate (BER) and peak signal-to-noise ratio (PSNR).
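As a simple illustration of statistical bit error generation, the sketch below uses the well-known two-state Gilbert-Elliott burst model. Note that this is only a generic stand-in for the concept; it is not the three random variable method developed in the paper, whose details are not reproduced here.

```python
# Gilbert-Elliott two-state burst error generator: a Markov chain alternates
# between a "good" and a "bad" state with different bit error probabilities.
# Shown only as a generic example of statistical error emulation; the paper's
# three random variable method is a different model.
import numpy as np

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.3, ber_good=1e-5, ber_bad=0.2, seed=0):
    rng = np.random.default_rng(seed)
    errors = np.zeros(n_bits, dtype=np.uint8)
    bad = False
    for i in range(n_bits):
        ber = ber_bad if bad else ber_good
        errors[i] = rng.random() < ber
        # State transition for the next bit.
        if bad:
            bad = rng.random() >= p_bg
        else:
            bad = rng.random() < p_gb
    return errors

err = gilbert_elliott(100_000)
print("simulated BER:", err.mean())
```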
