
    Investigating the Relation between Galaxy Properties and the Gaussianity of the Velocity Distribution of Groups and Clusters

    We investigate the dependence of stellar population properties of galaxies on group dynamical stage for a subsample of the Yang catalog. We classify groups according to their galaxy velocity distribution into Gaussian (G) and Non-Gaussian (NG). Using two totally independent approaches, we show that our measurement of Gaussianity is robust and reliable. Our sample covers Yang's groups in the redshift range 0.03 \leq z \leq 0.1 with masses \geq 10^{14} \rm M_{\odot}. The new method, Hellinger Distance (HD), used to determine whether a group has a Gaussian or Non-Gaussian velocity distribution, is very effective in distinguishing between the two families. NG groups present higher halo masses than G ones, confirming previous findings. Examining the skewness and kurtosis of the velocity distributions of G and NG groups, we find that faint galaxies in NG groups are mainly infalling into the groups for the first time. We show that, considering only faint galaxies in the outskirts, those in NG groups are older and more metal-rich than those in G groups. Also, examining the projected phase space of cluster galaxies, we see that bright and faint galactic systems in G groups are in dynamical equilibrium, which does not seem to be the case in NG groups. These findings suggest that NG systems have a higher infall rate, assembling more galaxies which experienced preprocessing before entering the group. Comment: 55 pages, 5 tables and 12 figures. Accepted for publication in the Astronomical Journal
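    As a rough illustration of the kind of test the abstract describes, the sketch below compares a group's binned line-of-sight velocities against a best-fit Gaussian via the Hellinger distance. The equal-width binning and the 0.05 threshold are illustrative assumptions of ours, not the authors' calibrated estimator.

```python
# Illustrative sketch (not the authors' pipeline): classify a group's
# line-of-sight velocity distribution as Gaussian (G) or Non-Gaussian (NG)
# by the Hellinger distance between the binned velocities and a fitted normal.
import math
import random

def hellinger(p, q):
    """Hellinger distance between two discrete distributions on the same bins."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def gaussianity(velocities, nbins=20, threshold=0.05):
    """Return (distance, 'G' or 'NG'); the threshold is illustrative only."""
    n = len(velocities)
    mu = sum(velocities) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in velocities) / n)
    lo, hi = min(velocities), max(velocities)
    width = (hi - lo) / nbins
    # Empirical histogram, normalised to a probability distribution.
    counts = [0] * nbins
    for v in velocities:
        counts[min(int((v - lo) / width), nbins - 1)] += 1
    p = [c / n for c in counts]
    # Gaussian probability mass per bin (midpoint approximation), renormalised.
    q = [width * math.exp(-((lo + (i + 0.5) * width - mu) ** 2) / (2 * sigma ** 2))
         / (sigma * math.sqrt(2 * math.pi)) for i in range(nbins)]
    total = sum(q)
    q = [x / total for x in q]
    d = hellinger(p, q)
    return d, ("G" if d < threshold else "NG")
```

    A unimodal velocity sample yields a small distance, while a bimodal (merging) system is far from its best-fit Gaussian and is flagged NG.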

    On the multiplicity of the hyperelliptic integrals

    Let I(t) = \oint_{\delta(t)} \omega be an Abelian integral, where H = y^2 - x^{n+1} + P(x) is a hyperelliptic polynomial of Morse type, \delta(t) a horizontal family of cycles in the curves \{H = t\}, and \omega a polynomial 1-form in the variables x and y. We provide an upper bound on the multiplicity of I(t), away from the critical values of H. Namely, ord I(t) \leq n - 1 + \frac{n(n-1)}{2} if \deg \omega < \deg H = n + 1. The reasoning goes as follows: we consider the analytic curve \gamma(t) parameterized by the integrals along \delta(t) of the n ``Petrov'' forms of H (polynomial 1-forms that freely generate the module of relative cohomology of H), and interpret the multiplicity of I(t) as the order of contact of \gamma(t) and a linear hyperplane of \textbf{C}^n. Using the Picard-Fuchs system satisfied by \gamma(t), we establish an algebraic identity involving the Wronskian determinant of the integrals of the original form \omega along a basis of the homology of the generic fiber of H. The latter Wronskian is analyzed through this identity, which yields the estimate on the multiplicity of I(t). Still, in some cases related to the geometry at infinity of the curves \{H = t\} \subseteq \textbf{C}^2, the Wronskian occurs to be identically zero. In this alternative we show how to adapt the argument to a system of smaller rank, and get a nontrivial Wronskian. For a form \omega of arbitrary degree, we are led to estimating the order of contact between \gamma(t) and a suitable algebraic hypersurface in \textbf{C}^{n+1}. We observe that ord I(t) grows like an affine function with respect to \deg \omega. Comment: 18 pages
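    The stated bound is easy to tabulate. The snippet below is only a numeric illustration of how the bound on ord I(t) grows (quadratically) with n; the helper name is ours, not the paper's.

```python
# Numeric illustration of the stated bound: for deg(omega) < deg(H) = n + 1,
# the multiplicity of I(t) away from the critical values of H satisfies
#   ord I(t) <= n - 1 + n(n - 1)/2.
def multiplicity_bound(n):
    """Upper bound on ord I(t) for H = y^2 - x^(n+1) + P(x)."""
    return n - 1 + n * (n - 1) // 2

for n in range(2, 6):
    print(n, multiplicity_bound(n))
```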

    Synthesizing and tuning chemical reaction networks with specified behaviours

    We consider how to generate chemical reaction networks (CRNs) from functional specifications. We propose a two-stage approach that combines synthesis by satisfiability modulo theories and Markov chain Monte Carlo based optimisation. First, we identify candidate CRNs that have the possibility to produce correct computations for a given finite set of inputs. We then optimise the reaction rates of each CRN using a combination of stochastic search techniques applied to the chemical master equation, simultaneously improving the probability of correct behaviour and ruling out spurious solutions. In addition, we use techniques from continuous-time Markov chain theory to study the expected termination time for each CRN. We illustrate our approach by identifying CRNs for majority decision-making and division computation, which includes the identification of both known and unknown networks. Comment: 17 pages, 6 figures; appeared in the proceedings of the 21st conference on DNA Computing and Molecular Programming, 201
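    The majority computation mentioned above can be illustrated with a well-known approximate-majority CRN (not necessarily one of the networks the paper synthesizes). The sketch simulates the embedded jump chain of the CTMC, choosing reactions in proportion to their propensities; species names and unit rates are illustrative assumptions.

```python
# Minimal stochastic simulation of an approximate-majority CRN:
#   X + Y -> B + B,  X + B -> X + X,  Y + B -> Y + Y
# The initial majority species (X or Y) wins consensus with high probability.
import random

REACTIONS = [
    (("X", "Y"), ("B", "B")),
    (("X", "B"), ("X", "X")),
    (("Y", "B"), ("Y", "Y")),
]

def propensity(state, reactants, rate=1.0):
    a, b = reactants
    if a == b:
        return rate * state[a] * (state[a] - 1)
    return rate * state[a] * state[b]

def simulate_majority(x0, y0, seed=0, max_steps=100_000):
    rng = random.Random(seed)
    state = {"X": x0, "Y": y0, "B": 0}
    for _ in range(max_steps):
        props = [propensity(state, r) for r, _ in REACTIONS]
        total = sum(props)
        if total == 0:              # no reaction enabled: consensus reached
            break
        # Pick a reaction proportionally to its propensity (jump chain only;
        # exponential firing times are omitted for brevity).
        pick = rng.uniform(0, total)
        acc = 0.0
        for (reactants, products), p in zip(REACTIONS, props):
            acc += p
            if pick <= acc:
                for s in reactants:
                    state[s] -= 1
                for s in products:
                    state[s] += 1
                break
    return state
```

    Every reaction here is 2-to-2, so the molecule count is conserved, and a terminal state has at most one of X, Y remaining.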

    Checking Interaction-Based Declassification Policies for Android Using Symbolic Execution

    Mobile apps can access a wide variety of secure information, such as contacts and location. However, current mobile platforms include only coarse access control mechanisms to protect such data. In this paper, we introduce interaction-based declassification policies, in which the user's interactions with the app constrain the release of sensitive information. Our policies are defined extensionally, so as to be independent of the app's implementation, based on sequences of security-relevant events that occur in app runs. Policies use LTL formulae to precisely specify which secret inputs, read at which times, may be released. We formalize a semantic security condition, interaction-based noninterference, to define our policies precisely. Finally, we describe a prototype tool that uses symbolic execution to check interaction-based declassification policies for Android, and we show that it enforces policies correctly on a set of apps. Comment: This research was supported in part by NSF grants CNS-1064997 and 1421373, AFOSR grants FA9550-12-1-0334 and FA9550-14-1-0334, a partnership between UMIACS and the Laboratory for Telecommunication Sciences, and the National Security Agency
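    A toy version of such a policy can be checked on a concrete event trace, as below. The event names are illustrative, and the paper's tool checks LTL policies via symbolic execution over all runs rather than monitoring one trace; this sketch only conveys the flavour of "release constrained by prior user interaction".

```python
# Illustrative trace check for a simple interaction-based declassification
# policy: a secret read may be released only if the user clicked 'share'
# after that read (each fresh read needs fresh authorization).
def violates_policy(trace):
    """trace: list of events among 'read', 'click_share', 'release'."""
    authorized = False
    for event in trace:
        if event == "read":
            authorized = False      # a newly read secret is not yet releasable
        elif event == "click_share":
            authorized = True       # user interaction declassifies the value
        elif event == "release" and not authorized:
            return True             # release without user authorization
    return False
```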

    Machine-Checked Proofs For Realizability Checking Algorithms

    Virtual integration techniques focus on building architectural models of systems that can be analyzed early in the design cycle to try to lower cost, reduce risk, and improve quality of complex embedded systems. Given appropriate architectural descriptions, assume/guarantee contracts, and compositional reasoning rules, these techniques can be used to prove important safety properties about the architecture prior to system construction. For these proofs to be meaningful, each leaf-level component contract must be realizable; i.e., it must be possible to construct a component such that, for any input allowed by the contract assumptions, there is some output value the component can produce that satisfies the contract guarantees. We have recently proposed (in [1]) a contract-based realizability checking algorithm for assume/guarantee contracts over infinite theories supported by SMT solvers, such as linear integer/real arithmetic and uninterpreted functions. In that work, we used an SMT solver and an algorithm similar to k-induction to establish the realizability of a contract, and justified our approach via a hand proof. Given the central importance of realizability to our virtual integration approach, we wanted additional confidence that our approach was sound. This paper describes a complete formalization of the approach in the Coq proof and specification language. During formalization, we found several small mistakes and missing assumptions in our reasoning. Although these did not compromise the correctness of the algorithm used in the checking tools, they point to the value of machine-checked formalization. In addition, we believe this is the first machine-checked formalization of a realizability algorithm. Comment: 14 pages, 1 figure
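    The realizability question itself can be made concrete with a toy explicit-state analogue over tiny finite domains (the paper's algorithm works over infinite SMT theories with k-induction; this sketch and all names in it are ours): from every reachable state, every input allowed by the assumption must admit some output satisfying the guarantee.

```python
# Toy explicit-state realizability check for an assume/guarantee contract.
# assume(i)          -> bool: is input i allowed?
# guarantee(s, i, o) -> bool: is output o acceptable in state s on input i?
# step(s, i, o)      -> next state reached when the component emits o.
def realizable(inputs, outputs, init, assume, guarantee, step):
    reachable, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for i in (x for x in inputs if assume(x)):
            valid = [o for o in outputs if guarantee(s, i, o)]
            if not valid:
                return False        # some allowed input has no valid response
            for o in valid:
                t = step(s, i, o)
                if t not in reachable:
                    reachable.add(t)
                    frontier.append(t)
    return True
```

    A contradictory guarantee (e.g. one forcing the output to equal two different values) is caught as unrealizable even though each run up to that point looked fine.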

    A Formalization of the Theorem of Existence of First-Order Most General Unifiers

    This work presents a formalization of the theorem of existence of most general unifiers in first-order signatures in the higher-order proof assistant PVS. The distinguishing feature of this formalization is that it remains close to the textbook proofs that are based on proving the correctness of the well-known Robinson first-order unification algorithm. The formalization was applied inside a PVS development for term rewriting systems that provides a complete formalization of the Knuth-Bendix Critical Pair theorem, among other relevant theorems of the theory of rewriting. In addition, the formalization methodology has proved of practical use for verifying the correctness of unification algorithms in the style of the original Robinson unification algorithm. Comment: In Proceedings LSFA 2011, arXiv:1203.542
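    For readers unfamiliar with the algorithm being formalized, here is a compact Robinson-style unification sketch in ordinary code (the paper's formalization is in PVS; the term encoding below is our own: variables are strings, applications are tuples `(symbol, arg1, ..., argn)`, constants are 1-tuples).

```python
# Robinson-style first-order unification returning a most general unifier.
def substitute(subst, term):
    """Apply a (triangular) substitution to a term, resolving chains."""
    if isinstance(term, str):
        return substitute(subst, subst[term]) if term in subst else term
    return (term[0],) + tuple(substitute(subst, a) for a in term[1:])

def occurs(var, term, subst):
    """Occurs check: does var appear in term under the current substitution?"""
    term = substitute(subst, term)
    if isinstance(term, str):
        return term == var
    return any(occurs(var, a, subst) for a in term[1:])

def unify(t1, t2, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = substitute(subst, a), substitute(subst, b)
        if a == b:
            continue
        if isinstance(a, str):
            if occurs(a, b, subst):
                return None          # occurs-check failure: no finite unifier
            subst[a] = b
        elif isinstance(b, str):
            stack.append((b, a))     # flip so the variable case handles it
        elif a[0] != b[0] or len(a) != len(b):
            return None              # clash of function symbols or arities
        else:
            stack.extend(zip(a[1:], b[1:]))
    return subst
```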

    Proving Safety with Trace Automata and Bounded Model Checking

    Loop under-approximation is a technique that enriches C programs with additional branches representing the effect of a (limited) range of loop iterations. While this technique can speed up the detection of bugs significantly, it introduces redundant execution traces which may complicate the verification of the program. This holds particularly true for verification tools based on Bounded Model Checking, which incorporate simplistic heuristics to determine whether all feasible iterations of a loop have been considered. We present a technique that uses \emph{trace automata} to eliminate redundant executions after performing loop acceleration. The method reduces the diameter of the program under analysis, which is in certain cases sufficient to allow a safety proof using Bounded Model Checking. Our transformation is precise---it does not introduce false positives, nor does it mask any errors. We have implemented the analysis as a source-to-source transformation, and present experimental results showing the applicability of the technique.
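    The role of the diameter can be seen in a toy explicit-state bounded check (real BMC engines are SAT/SMT-based; this sketch and its names are ours): if exploration up to the bound exhausts the reachable states without hitting an error, the bound has covered the diameter and safety is proved rather than merely bounded.

```python
# Illustrative bounded exploration: explore executions breadth-first up to
# depth k. If the frontier empties before k is exhausted, the bound covered
# the program's diameter and the absence of errors is a full safety proof.
def bounded_check(init, next_states, is_error, k):
    seen = set(init)
    frontier = set(init)
    for _ in range(k):
        if any(is_error(s) for s in frontier):
            return "counterexample"
        frontier = {t for s in frontier for t in next_states(s)} - seen
        if not frontier:
            return "safe"            # state space exhausted within the bound
        seen |= frontier
    if any(is_error(s) for s in frontier):
        return "counterexample"
    return "unknown"                 # bound too small to conclude safety
```

    Shrinking the diameter (as the trace-automata transformation does) turns "unknown" verdicts at a fixed bound into "safe" ones.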

    Automatic Abstraction in SMT-Based Unbounded Software Model Checking

    Software model checkers based on under-approximations and SMT solvers are very successful at verifying safety (i.e. reachability) properties. They combine two key ideas -- (a) "concreteness": a counterexample in an under-approximation is a counterexample in the original program as well, and (b) "generalization": a proof of safety of an under-approximation, produced by an SMT solver, is generalizable to a proof of safety of the original program. In this paper, we present a combination of "automatic abstraction" with the under-approximation-driven framework. We explore two iterative approaches for obtaining and refining abstractions -- "proof-based" and "counterexample-based" -- and show how they can be combined into a unified algorithm. To the best of our knowledge, this is the first application of Proof-Based Abstraction, primarily used to verify hardware, to software verification. We have implemented a prototype of the framework using Z3, and evaluate it on many benchmarks from the Software Verification Competition. We show experimentally that our combination is quite effective on hard instances. Comment: Extended version of a paper in the proceedings of CAV 201