152 research outputs found
Decision Stream: Cultivating Deep Decision Trees
Various modifications of decision trees have been used extensively in recent
years due to their high efficiency and interpretability. Splitting tree nodes
on the most relevant features is a key step of decision tree learning, and at
the same time its major shortcoming: recursive node partitioning leads to a
geometric reduction of the data quantity in the leaf nodes, which causes
excessive model complexity and overfitting. In this paper, we present a novel
architecture, a Decision Stream, aimed at overcoming this problem. Instead of
building a tree structure during the learning process, we propose merging
nodes from different branches based on their similarity, estimated with
two-sample test statistics, which generates a deep directed acyclic graph of
decision rules that can consist of hundreds of levels. To evaluate the
proposed solution, we test it on several common machine learning problems:
credit scoring, Twitter sentiment analysis, aircraft flight control, MNIST and
CIFAR image classification, and synthetic data classification and regression.
Our experimental results reveal that the proposed approach significantly
outperforms standard decision tree learning methods on both regression and
classification tasks, yielding a prediction error decrease of up to 35%.
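The node-merging idea above can be sketched in a few lines: two leaves are merged when a two-sample test cannot distinguish their target distributions. This is a minimal illustration only; the choice of the Kolmogorov-Smirnov test, the significance threshold, and the greedy merging order are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: merge candidate leaf nodes whose target-value
# distributions are statistically indistinguishable, as judged by a
# two-sample Kolmogorov-Smirnov test (an assumed choice of test).
import numpy as np
from scipy.stats import ks_2samp

def should_merge(samples_a, samples_b, alpha=0.05):
    """True if the KS test fails to reject that the two leaves'
    target samples come from the same distribution."""
    _, p_value = ks_2samp(samples_a, samples_b)
    return p_value > alpha

def merge_similar_leaves(leaves, alpha=0.05):
    """Greedily merge leaves with similar target distributions.
    Each leaf is a 1-D array of target values reaching that node."""
    merged = []
    for leaf in leaves:
        for i, group in enumerate(merged):
            if should_merge(group, leaf, alpha):
                merged[i] = np.concatenate([group, leaf])
                break
        else:
            merged.append(leaf)  # no similar group found: keep separate
    return merged
```

Merging statistically similar leaves keeps the per-node sample size from shrinking geometrically with depth, which is what allows the resulting rule graph to grow hundreds of levels deep.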
DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks
Despite a rapid rise in the quality of built-in smartphone cameras, their
physical limitations (small sensor size, compact lenses, and the lack of
specific hardware) prevent them from achieving the quality of DSLR cameras.
In this work we present an end-to-end deep learning approach that bridges
this gap by translating ordinary photos into DSLR-quality images. We propose
learning the translation function using a residual convolutional neural
network that improves both color rendition and image sharpness. Since the
standard mean squared error loss is not well suited for measuring perceptual
image quality, we introduce a composite perceptual error function that
combines content, color, and texture losses. The first two losses are defined
analytically, while the texture loss is learned in an adversarial fashion. We
also present DPED, a large-scale dataset consisting of real photos captured
with three different phones and one high-end reflex camera. Our quantitative
and qualitative assessments show that the enhanced image quality is
comparable to that of DSLR-taken photos, while the methodology generalizes to
any type of digital camera.
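The composite perceptual error described above can be sketched as a weighted sum of per-term losses. The weights, the Gaussian blur used for the color term, and the use of pixel-space MSE as a content stand-in are illustrative assumptions: in the paper the content loss is computed on deep features and the texture loss comes from an adversarially trained discriminator.

```python
# Minimal numpy sketch of a composite perceptual error of the kind the
# abstract describes: weighted content + color + texture terms.
import numpy as np
from scipy.ndimage import gaussian_filter

def content_loss(pred, target):
    """Pixel-space MSE as a simple stand-in for a feature-space content loss."""
    return float(np.mean((pred - target) ** 2))

def color_loss(pred, target, sigma=3.0):
    """Compare heavily blurred images so that only color and brightness,
    not fine detail, contribute to the error."""
    blur_p = gaussian_filter(pred, sigma=(sigma, sigma, 0))
    blur_t = gaussian_filter(target, sigma=(sigma, sigma, 0))
    return float(np.mean((blur_p - blur_t) ** 2))

def composite_loss(pred, target, texture_term=0.0,
                   w_content=1.0, w_color=0.1, w_texture=0.4):
    """Weighted combination; texture_term would be supplied by a
    discriminator in an adversarial setup."""
    return (w_content * content_loss(pred, target)
            + w_color * color_loss(pred, target)
            + w_texture * texture_term)
```

Separating the terms this way lets each component penalize a different failure mode: content for structure, color for global tone, texture for realism of fine detail.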
Virtually Enriched NYU Depth V2 Dataset for Monocular Depth Estimation: Do We Need Artificial Augmentation?
We present ANYU, a new virtually augmented version of the NYU depth v2
dataset, designed for monocular depth estimation. In contrast to the well-known
approach where full 3D scenes of a virtual world are utilized to generate
artificial datasets, ANYU was created by incorporating RGB-D representations of
virtual reality objects into the original NYU depth v2 images. We specifically
did not match each generated virtual object with an appropriate texture and a
suitable location within the real-world image. Instead, the assignment of
texture, location, lighting, and other rendering parameters was randomized to
maximize the diversity of the training data, and to show that randomness
itself can improve the generalization ability of a dataset. By conducting extensive
experiments with our virtually modified dataset and validating on the original
NYU depth v2 and iBims-1 benchmarks, we show that ANYU improves the monocular
depth estimation performance and generalization of deep neural networks with
considerably different architectures, especially for the current
state-of-the-art VPD model. To the best of our knowledge, this is the first
work that augments a real-world dataset with randomly generated virtual 3D
objects for monocular depth estimation. We make our ANYU dataset publicly
available in two training configurations with 10% and 100% additional
synthetically enriched RGB-D pairs of training images, respectively, for
efficient training and empirical exploration of virtual augmentation at
https://github.com/ABrain-One/ANY
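The RGB-D compositing the abstract describes can be sketched as pasting a rendered virtual object (RGB patch, depth patch, and mask) into a real frame at a randomized location. The function name, the z-buffer-style occlusion rule, and the uniform random placement are assumptions for illustration, not the dataset's exact generation pipeline.

```python
# Illustrative sketch: insert a virtual object's RGB-D patch into a
# real RGB-D frame at a random position, keeping correct occlusion.
import numpy as np

def paste_virtual_object(rgb, depth, obj_rgb, obj_depth, obj_mask, rng):
    """The object overwrites a pixel only where its mask is set and
    it is closer to the camera than the existing scene geometry."""
    H, W = depth.shape
    h, w = obj_depth.shape
    top = rng.integers(0, H - h + 1)    # randomized placement
    left = rng.integers(0, W - w + 1)
    rgb_out, depth_out = rgb.copy(), depth.copy()
    scene_d = depth_out[top:top + h, left:left + w]
    visible = obj_mask & (obj_depth < scene_d)  # z-buffer test
    rgb_out[top:top + h, left:left + w][visible] = obj_rgb[visible]
    depth_out[top:top + h, left:left + w][visible] = obj_depth[visible]
    return rgb_out, depth_out
```

Because both the RGB image and the depth map are updated consistently, each composited frame remains a valid training pair for monocular depth estimation.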
Multiple coupling of silanes with imido complexes of Mo
The bis(imido) complexes (tBuN)2Mo(PMe3)(L) (L = PMe3, C2H4) react with up to three equivalents of
the silane PhSiH3 to give the imido-bridged disilyl silyl Mo(VI) complex (tBuN){μ-tBuN(SiHPh)2}Mo(H)(SiH2Ph)-
(PMe3)2 (3), which was studied by NMR, IR, and X-ray diffraction. NMR data supported by DFT calculations show that
complex 3 is an unusual example of a Mo(VI) silyl hydride without significant Si⋯H interaction. Mechanistic
NMR studies revealed that silane addition proceeds in a stepwise manner via a series of Si–H⋯M
agostic and silanimine complexes whose structures were further elucidated by DFT calculations.
- …
