Reordering Method and Hierarchies for Quantum and Classical Ordered Binary Decision Diagrams
We consider the quantum OBDD model, a restricted version of read-once quantum Branching Programs, with respect to "width" complexity. It is known that the maximal complexity gap between the deterministic and quantum models is exponential, but there are few examples of such functions. We present a method (called "reordering") which allows one to build, from a Boolean function with a gap between quantum and deterministic OBDD complexity for the natural order of variables, a new Boolean function with almost the same gap for any order. Using it, we construct a total function whose gap between deterministic OBDD complexity and quantum OBDD width is bigger than was previously known for an explicit function for OBDDs of width more than linear. Using this result we prove a width hierarchy for complexity classes of Boolean functions for quantum OBDDs. Additionally, we prove a width hierarchy for complexity classes of Boolean functions for bounded-error probabilistic OBDDs. Using the "reordering" method, we also extend a hierarchy for k-OBDDs of polynomial size. Moreover, we prove a similar hierarchy for bounded-error probabilistic k-OBDDs, and hierarchies for deterministic and probabilistic k-OBDDs of superpolynomial and subexponential size.
Comment: submitted to CSR 201
OBDD-Based Representation of Interval Graphs
A graph can be described by the characteristic function of its edge set, which maps a pair of binary-encoded nodes to 1 iff the nodes are adjacent. Using \emph{Ordered Binary Decision Diagrams} (OBDDs) to store this function can lead to a (hopefully) compact representation. Given the OBDD as an input, symbolic/implicit OBDD-based graph algorithms can solve optimization problems mainly by using functional operations, e.g. quantification or binary synthesis. While the OBDD representation size cannot be small in general, it can be provably small for special graph classes and then also lead to fast algorithms. In this paper, we show that the OBDD size of unit interval graphs is $O(|V| / \log |V|)$ and the OBDD size of interval graphs is $O(|V| \log |V|)$, which matches the lower bound $\Omega(|V| \log |V|)$. Furthermore, we present implicit algorithms on these graph classes using only $O(\log |V|)$ and $O(\log^2 |V|)$ functional operations and evaluate the algorithms empirically.
Comment: 29 pages, accepted for 39th International Workshop on Graph-Theoretic Concepts 201
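The implicit representation described in this abstract can be sketched directly. Below is a minimal illustration of our own (not the paper's code): the characteristic function of an edge set, evaluated on binary-encoded node IDs — exactly the kind of Boolean function an OBDD would store.

```python
# Illustration (our own, hypothetical): the characteristic function of an
# edge set over binary-encoded node IDs. An OBDD would represent this
# Boolean function over the 2 * ceil(log2 |V|) encoding bits.

def encode(node_id, bits):
    """Binary encoding of a node ID as a tuple of bits (MSB first)."""
    return tuple((node_id >> i) & 1 for i in reversed(range(bits)))

def make_chi_e(edges, num_nodes):
    """Return chi_E: two bit-tuples -> 1 iff the encoded nodes are adjacent."""
    bits = max(1, (num_nodes - 1).bit_length())
    adjacent = {(min(u, v), max(u, v)) for u, v in edges}

    def chi_e(x_bits, y_bits):
        u = int("".join(map(str, x_bits)), 2)
        v = int("".join(map(str, y_bits)), 2)
        return 1 if (min(u, v), max(u, v)) in adjacent else 0

    return chi_e, bits

# A small unit interval graph on 4 nodes: the path 0-1-2-3.
chi, bits = make_chi_e([(0, 1), (1, 2), (2, 3)], 4)
assert chi(encode(1, bits), encode(2, bits)) == 1
assert chi(encode(0, bits), encode(3, bits)) == 0
```

An OBDD for `chi_e` reads the encoding bits in a fixed order; the paper's bounds concern how large that diagram must be for (unit) interval graphs.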
On the Error Resilience of Ordered Binary Decision Diagrams
Ordered Binary Decision Diagrams (OBDDs) are a data structure that is used in
an increasing number of fields of Computer Science (e.g., logic synthesis,
program verification, data mining, bioinformatics, and data protection) for
representing and manipulating discrete structures and Boolean functions. The
purpose of this paper is to study the error resilience of OBDDs and to design a
resilient version of this data structure, i.e., a self-repairing OBDD. In
particular, we describe some strategies that make reduced ordered OBDDs
resilient to errors in the indexes associated with the input variables,
or in the pointers (i.e., OBDD edges) of the nodes. These strategies exploit
the inherent redundancy of the data structure, as well as the redundancy
introduced by its efficient implementations. The solutions we propose allow the
exact restoring of the original OBDD and are suitable to be applied to
classical software packages for the manipulation of OBDDs currently in use.
Another result of the paper is the definition of a new canonical OBDD model, called {\em Index-resilient Reduced OBDD}, which guarantees that a node with a faulty index has a reconstruction cost $O(k)$, where $k$ is the number of nodes with corrupted index.
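The "inherent redundancy" that such strategies exploit comes from reduction itself: a reduced OBDD merges isomorphic subgraphs and skips redundant tests via a unique table, so structure is shared rather than duplicated. A minimal sketch of that mechanism (our own illustration, not the paper's repair strategies):

```python
# Sketch (our own assumptions, not the paper's construction): a reduced OBDD
# built through a unique table. Reduction merges isomorphic nodes and skips
# redundant tests; this sharing is the structural redundancy that resilience
# strategies can check indexes and pointers against.

class OBDD:
    TRUE, FALSE = "T", "F"

    def __init__(self):
        self.unique = {}   # (index, low, high) -> node id
        self.nodes = {}    # node id -> (index, low, high)
        self.next_id = 0

    def mk(self, index, low, high):
        if low == high:                 # redundant test: skip the node
            return low
        key = (index, low, high)
        if key not in self.unique:      # merge isomorphic nodes
            self.unique[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def eval(self, node, assignment):
        """Follow edges according to the assignment until a terminal."""
        while node not in (self.TRUE, self.FALSE):
            index, low, high = self.nodes[node]
            node = high if assignment[index] else low
        return node == self.TRUE

# x1 XOR x2 under the variable order x1 < x2.
bdd = OBDD()
n2a = bdd.mk(2, bdd.FALSE, bdd.TRUE)   # reached when x1 = 0
n2b = bdd.mk(2, bdd.TRUE, bdd.FALSE)   # reached when x1 = 1
root = bdd.mk(1, n2a, n2b)
assert bdd.eval(root, {1: 0, 2: 1}) is True
assert bdd.eval(root, {1: 1, 2: 1}) is False
```

In this toy class a corrupted `index` or child pointer would break the invariant that each `(index, low, high)` triple appears once in `unique`; checking entries against that table is the flavor of redundancy the paper's strategies build on.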
On the Complexity of Optimization Problems based on Compiled NNF Representations
Optimization is a key task in a number of applications. When the set of
feasible solutions under consideration is of combinatorial nature and described
in an implicit way as a set of constraints, optimization is typically NP-hard.
Fortunately, in many problems, the set of feasible solutions does not change often and is independent of the user's request. In such cases, compiling the set of constraints describing the set of feasible solutions during an off-line phase makes sense, provided this compilation step makes it computationally easier to generate a non-dominated, feasible solution matching the user's requirements and preferences (which are only known at the on-line step). In
this article, we focus on propositional constraints. The subsets L of the NNF
language analyzed in Darwiche and Marquis' knowledge compilation map are
considered. A number of families F of representations of objective functions
over propositional variables, including linear pseudo-Boolean functions and
more sophisticated ones, are considered. For each language L and each family F,
the complexity of generating an optimal solution when the constraints are
compiled into L and optimality is to be considered w.r.t. a function from F is
identified.
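The off-line/on-line split can be made concrete with a toy example. Here is a minimal sketch under assumptions of our own (taking the compiled language to be an OBDD-like decision diagram and the objective a linear pseudo-Boolean function, both covered by the abstract's setting): once constraints are compiled, one bottom-up pass finds an optimal feasible solution in time linear in the size of the compiled form.

```python
# Sketch (our own assumptions): optimization over a compiled representation.
# Nodes are (variable, low_child, high_child); terminals are True / False.
# For simplicity this diagram is quasi-reduced: every variable is tested on
# every root-to-terminal path, so no path leaves variables unassigned.

# Off-line phase output: a diagram for the constraint x1 XOR x2,
# whose models are {x1=0, x2=1} and {x1=1, x2=0}.
N2A = ("x2", False, True)    # reached when x1 = 0
N2B = ("x2", True, False)    # reached when x1 = 1
ROOT = ("x1", N2A, N2B)

def min_cost(node, weights):
    """Minimum objective value over satisfying assignments; None if UNSAT.
    The objective is the sum of weights[v] over variables set to 1.
    (A real implementation would memoize per node to exploit DAG sharing.)"""
    if node is False:
        return None
    if node is True:
        return 0
    var, low, high = node
    options = []
    lo = min_cost(low, weights)       # var = 0: contributes nothing
    if lo is not None:
        options.append(lo)
    hi = min_cost(high, weights)      # var = 1: pay weights[var]
    if hi is not None:
        options.append(hi + weights[var])
    return min(options) if options else None

# On-line phase: the user's weights arrive; the models of XOR cost either
# weights["x1"] or weights["x2"], and the pass picks the cheaper one.
assert min_cost(ROOT, {"x1": 3, "x2": 1}) == 1
```

The abstract's point is precisely which pairs (compiled language L, objective family F) keep this on-line step tractable; OBDDs with linear objectives are one of the easy cases.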
Ordered bicontinuous double-diamond morphology in subsaturation nuclear matter
We propose to identify the new "intermediate" morphology in subsaturation
nuclear matter observed in a recent quantum molecular dynamics simulation with
the ordered bicontinuous double-diamond structure known in block copolymers. We
estimate its energy density by incorporating the normalized area-volume relation given in the literature into the nuclear liquid drop model. The resulting energy density is higher than those of the other five known morphologies.
Comment: 4 pages, 4 figures, published in Phys. Rev.
SDDs are Exponentially More Succinct than OBDDs
Introduced by Darwiche (2011), sentential decision diagrams (SDDs) are
essentially as tractable as ordered binary decision diagrams (OBDDs), but tend
to be more succinct in practice. This makes SDDs a prominent representation
language, with many applications in artificial intelligence and knowledge
compilation. We prove that SDDs are more succinct than OBDDs also in theory, by
constructing a family of Boolean functions where each member has polynomial SDD
size but exponential OBDD size. This exponential separation improves a
quasipolynomial separation recently established by Razgon (2013), and settles
an open problem in knowledge compilation.
