Weakly-supervised Caricature Face Parsing through Domain Adaptation
A caricature is an artistic rendering of a person's portrait in which certain
striking characteristics are abstracted or exaggerated to create a humorous or
sarcastic effect. For numerous caricature-related applications such as
attribute recognition and caricature editing, face parsing is an essential
pre-processing step that provides a complete facial structure understanding.
However, current state-of-the-art face parsing methods require large amounts of
pixel-level labeled data, and collecting such annotations for caricatures is
tedious and labor-intensive. For real photos, in contrast, numerous labeled face
parsing datasets already exist. We therefore formulate caricature face parsing
as a domain adaptation problem, where real photos serve as the source domain and
caricatures as the target domain. Specifically, we first leverage a
spatial-transformer-based
network to enable shape domain shifts. A feed-forward style transfer network is
then utilized to capture texture-level domain gaps. With these two steps, we
synthesize face caricatures from real photos, and thus we can use parsing
ground truths of the original photos to learn the parsing model. Experimental
results on the synthetic and real caricatures demonstrate the effectiveness of
the proposed domain adaptation algorithm. Code is available at:
https://github.com/ZJULearning/CariFaceParsing
Comment: Accepted in ICIP 2019; code and model are available at
https://github.com/ZJULearning/CariFaceParsing
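A minimal PyTorch sketch of the two-step synthesis pipeline the abstract describes, assuming hypothetical module names, feature sizes, and a simple affine warp as a stand-in for the shape-adaptation step; it illustrates the idea only and is not the authors' released implementation.

```python
# Minimal sketch (not the authors' released code) of the two-step synthesis:
# a spatial-transformer warp stands in for shape adaptation, a small
# feed-forward network stands in for texture-level style transfer, and the
# parsing labels are warped with the same grid as the photo.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShapeWarp(nn.Module):
    """Predicts an affine warp (a simplified stand-in for shape exaggeration)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6),
        )
        # Start from the identity transform so training begins with no warp.
        self.encoder[-1].weight.data.zero_()
        self.encoder[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, photo, labels):
        theta = self.encoder(photo).view(-1, 2, 3)
        grid = F.affine_grid(theta, photo.size(), align_corners=False)
        warped_photo = F.grid_sample(photo, grid, align_corners=False)
        # Warping labels with the same grid keeps the ground truth aligned.
        warped_labels = F.grid_sample(labels, grid, mode="nearest",
                                      align_corners=False)
        return warped_photo, warped_labels


class TextureTransfer(nn.Module):
    """Tiny feed-forward stand-in for the style-transfer (texture) step."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    photo = torch.rand(1, 3, 128, 128)       # real photo (source domain)
    labels = torch.rand(1, 11, 128, 128)     # its parsing ground truth
    warped_photo, warped_gt = ShapeWarp()(photo, labels)
    synthetic_caricature = TextureTransfer()(warped_photo)
    # (synthetic_caricature, warped_gt) pairs would then train the parser.
    print(synthetic_caricature.shape, warped_gt.shape)
```

The point the sketch mirrors is that the labels are warped with the same grid as the photo, so the original parsing ground truth stays valid for the synthesized caricature.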
Scene Parsing with Global Context Embedding
We present a scene parsing method that utilizes global context information
based on both parametric and non-parametric models. Compared to previous
methods that exploit only local relationships between objects, we train a
context network based on scene similarities to generate feature representations
for global contexts. In addition, these learned features are utilized to
generate global and spatial priors for explicit class inference. We then
design modules to embed the feature representations and the priors into the
segmentation network as additional global context cues. We show that the
proposed method can eliminate false positives that are not compatible with the
global context representations. Experiments on both the MIT ADE20K and PASCAL
Context datasets show that the proposed method performs favorably against
existing methods.
Comment: Accepted in ICCV'17. Code available at
https://github.com/hfslyc/GCPNe
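A minimal sketch of the general idea of embedding a global context cue into a segmentation head, written in PyTorch; the context encoder, feature sizes, and the concatenate-after-broadcast fusion are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the paper's network) of fusing an image-level context
# descriptor with per-pixel features before classification.
import torch
import torch.nn as nn


class GlobalContextSegHead(nn.Module):
    def __init__(self, num_classes=20, feat_ch=64, ctx_dim=32):
        super().__init__()
        # Local per-pixel features (stand-in for a segmentation backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Global context branch: image-level descriptor from pooled features.
        self.context = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, ctx_dim), nn.ReLU(),
        )
        # Classifier sees local features concatenated with broadcast context.
        self.classifier = nn.Conv2d(feat_ch + ctx_dim, num_classes, 1)

    def forward(self, image):
        local = self.backbone(image)                      # (N, C, H, W)
        ctx = self.context(local)                         # (N, ctx_dim)
        ctx_map = ctx[:, :, None, None].expand(
            -1, -1, local.size(2), local.size(3))         # broadcast to H x W
        fused = torch.cat([local, ctx_map], dim=1)
        return self.classifier(fused)                     # per-pixel logits


if __name__ == "__main__":
    logits = GlobalContextSegHead()(torch.rand(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 20, 64, 64])
```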
Large-scale event extraction from literature with multi-level gene normalization
Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
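A hypothetical sketch of programmatic access to the API base URL named in the abstract. Only the base URL comes from the text; the resource name and query parameter below are illustrative placeholders, not documented endpoints, so the real query format should be taken from the API itself.

```python
# Hypothetical access sketch: the base URL is quoted from the abstract, while
# the "events" resource and "gene_id" parameter are assumed names used only
# to illustrate a JSON-over-HTTP query.
import requests

API_BASE = "http://www.evexdb.org/api/v001/"


def fetch(resource, **params):
    """GET a resource under the API base and return the decoded JSON body."""
    response = requests.get(API_BASE + resource, params=params, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Assumed resource/parameter names, for illustration only.
    data = fetch("events", gene_id="ENSG00000141510")
    print(type(data))
```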
One-pot synthesis of poly (3,4-ethylenedioxythiophene)-Pt nanoparticle composite and its application to electrochemical H2O2 sensor.
Poly(3,4-ethylenedioxythiophene)-Pt nanoparticle (PEDOT-PtNP) composite was synthesized in one-pot fashion using a photo-assisted chemical method, and its electrocatalytic properties toward hydrogen peroxide (H2O2) were investigated. Under UV irradiation, the rates of both the oxidative polymerization of the EDOT monomer and the reduction of Pt4+ ions were accelerated. The morphology of the PtNPs was also greatly influenced by UV irradiation: the particle size was reduced, which can be attributed to a faster nucleation rate. The immobilized PtNPs showed excellent electrocatalytic activity towards the electroreduction of hydrogen peroxide. The resultant amperometric sensor showed enhanced sensitivity for the detection of H2O2 compared to an electrode without PtNPs, i.e., one with only a PEDOT layer. Amperometric determination of H2O2 at -0.55 V gave a limit of detection of 1.6 μM (S/N = 3) and a sensitivity of 19.29 mA cm-2 M-1 up to 6 mM, with a response time (steady state, t95) of 30 to 40 s. Energy dispersive X-ray analysis, transmission electron microscopy, cyclic voltammetry (CV), and scanning electron microscopy were used to characterize the modified electrode. Sensing properties of the modified electrode were studied by both CV and amperometric analysis.
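A small worked example of what the reported sensitivity implies for the measured signal, assuming a simple linear calibration through zero; the H2O2 concentration and electrode area used below are assumed illustrative values, not figures from the paper.

```python
# Worked example (illustrative only): converting the reported sensitivity of
# 19.29 mA cm^-2 M^-1 into an expected current. The electrode area and H2O2
# concentration are assumed values, not taken from the paper.
SENSITIVITY_mA_per_cm2_per_M = 19.29   # reported sensitivity, linear up to 6 mM


def expected_current_uA(concentration_mM, electrode_area_cm2):
    """Current (in microamps) predicted by a linear calibration through zero."""
    current_density_mA_cm2 = SENSITIVITY_mA_per_cm2_per_M * (concentration_mM / 1000.0)
    return current_density_mA_cm2 * electrode_area_cm2 * 1000.0  # mA -> uA


if __name__ == "__main__":
    # Assumed: 1 mM H2O2 on a 0.07 cm^2 disc electrode -> about 1.35 uA.
    print(round(expected_current_uA(1.0, 0.07), 3), "uA")
```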
Inhibition effect of a custom peptide on lung tumors
Cecropin B is a natural antimicrobial peptide, and CB1a is a custom-engineered modification of it. In vitro, CB1a can kill lung cancer cells at concentrations that do not kill normal lung cells. Furthermore, in vitro, CB1a can prevent cancer cells from adhering together to form tumor-like spheroids. Mice were xenografted with human lung cancer cells, and CB1a significantly inhibited the growth of tumors in this in vivo model. Docetaxel is a drug currently in clinical use against lung cancers; it can have serious side effects because its toxicity is not sufficiently limited to cancer cells. In our studies in mice, CB1a is more toxic to cancer cells than docetaxel, but dramatically less toxic to healthy cells.
