24 research outputs found

    SBL for Multiple Parameterized Dictionaries

    No full text
    This repository contains the Python code used in [1]. Included are (i) the code for the proposed sparse Bayesian learning (SBL)-based algorithm, (ii) the code for the newtonized orthogonal matching pursuit (NOMP) algorithm [2] used for comparison, and (iii) additional files, such as the generalized optimal sub-pattern assignment (GOSPA) metric [3]. The repository contains three demo examples (example_*.py). The script example_crossing.py reproduces Figure 2 from [1].
    Repository structure:
    |- pySBL/pySBL.py            Code for the proposed algorithm
    |- pyOMP/pyOMP.py            Implementation of the NOMP method for comparison
    |- dictionary_functions.py   Implementation of the (parameterized) dictionary functions
    |- gopsa.py                  Implementation of the GOSPA metric used to evaluate the results
    |- example_radar.py          Demo example with a single radar comparing SBL and NOMP
    |- example_multiradar.py     Demo example applying SBL to multiple radars
    \- example_crossing.py       Demo example of two crossing targets (Fig. 2)
    The code was tested using Python 3.13 and NumPy 2.2.3. The full specification of the Miniconda environment is found in python_env.txt.
    References:
    [1] J. Moederl, A. M. Westerkam, A. Venus, and E. Leitinger, "A Block-Sparse Bayesian Learning Algorithm with Dictionary Parameter Estimation for Multi-Sensor Data Fusion," submitted to the IEEE 28th International Conference on Information Fusion, Rio de Janeiro, Brazil, Jul. 7-11, 2025.
    [2] B. Mamandipoor, D. Ramasamy, and U. Madhow, "Newtonized orthogonal matching pursuit: Frequency estimation over the continuum," IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5066-5081, Oct. 2016.
    [3] A. S. Rahmathullah, A. F. Garcia-Fernandez, and L. Svensson, "Generalized optimal sub-pattern assignment metric," in Proc. 20th Int. Conf. Inf. Fusion, Xi'an, China, Jul. 10-13, 2017.
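    As a toy illustration of the SBL principle behind the repository (not the repository's actual algorithm), the sketch below runs the classical EM-style SBL iteration for a plain sparse recovery problem y = Ax + n with a fixed dictionary; the block-sparse structure, multi-sensor fusion, and dictionary parameter estimation of the proposed method are omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: y = A x + n with a K-sparse x (dimensions illustrative only)
M, N, K = 40, 100, 3                       # measurements, atoms, active atoms
A = rng.normal(size=(M, N)) / np.sqrt(M)   # random dictionary, ~unit-norm columns
x_true = np.zeros(N)
support = rng.choice(N, K, replace=False)
x_true[support] = rng.normal(scale=3.0, size=K)
sigma2 = 1e-3                              # noise variance (assumed known here)
y = A @ x_true + np.sqrt(sigma2) * rng.normal(size=M)

gamma = np.ones(N)                         # prior variances of the weights
for _ in range(100):
    # E-step: Gaussian posterior of x given the current hyperparameters
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # M-step: EM update of the prior variances; floor avoids division by zero
    gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)

x_hat = mu   # posterior mean estimate; inactive entries are driven toward zero
```

    The hyperparameters gamma of inactive atoms shrink toward zero over the iterations, which is what produces the sparse estimate.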

    ComPara: A Corpus Linguistics Dataset of Computation in Architecture

    No full text
    A corpus linguistics dataset built to study the language of computational architecture, i.e., architecture that focuses on technological developments. The corpus includes (1) the volume titles, article titles, and keywords associated with the introduction article of the journal Architectural Design (AD), to capture the language of the theoretical discourse around computation in architecture, and (2) the titles and abstracts of winning and honorable-mention entries of the eVolo Skyscraper Competition, to capture the words used in conceptual project titles and their descriptions. The dataset contains around 100,000 words and can serve as a basis for quantitative, qualitative, or mixed-method analysis of the language used in AD and the eVolo Skyscraper Competition between 2005 and 2019. As AD is recognized as one of the journals focusing on the 'digital turns' in architecture, and eVolo is arguably the most prestigious architectural competition focused on technological advances in architecture, ComPara can be considered representative of the language of computational architecture between 2005 and 2019. It includes .txt and .csv files as well as .svg word clouds.
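    A quantitative analysis of the corpus's .txt files could start from simple word frequencies, as in the following minimal sketch; the tokenization rule and the sample sentence are illustrative, and the actual ComPara file paths must be supplied by the user.

```python
import re
from collections import Counter


def word_frequencies(texts):
    """Lower-case each text, tokenize on runs of letters, count occurrences."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts


# Illustrative input; in practice, read the corpus .txt files instead
freqs = word_frequencies(["Parametric design enables parametric thinking."])
# freqs.most_common(1) -> [('parametric', 2)]
```

    Frequency tables like this are a common first step before keyword or collocation analysis of such a corpus.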

    Approximate Bayesian Computation method for calibrating the Propagation Graph model using Summaries

    No full text
    This code learns the parameters of the polarimetric propagation graph model from summary statistics called temporal moments, using approximate Bayesian computation (ABC).
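    The general ABC rejection scheme can be illustrated with a toy stand-in for the propagation graph model: here an exponential delay model replaces the real simulator, and the mean delay stands in for a temporal moment. All names, the prior range, and the acceptance threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)


def simulate(theta, n=200, rng=rng):
    # Toy forward model: exponentially distributed delays with mean theta
    return rng.exponential(scale=theta, size=n)


def summary(delays):
    # First temporal moment (mean delay) as the summary statistic
    return delays.mean()


obs = simulate(2.0)          # "observed" data, generated with true theta = 2.0
s_obs = summary(obs)

accepted = []
for _ in range(5000):
    theta = rng.uniform(0.5, 5.0)                        # draw from the prior
    if abs(summary(simulate(theta)) - s_obs) < 0.1:      # rejection step
        accepted.append(theta)

posterior_mean = np.mean(accepted)   # approximate posterior mean of theta
```

    Replacing the toy simulator with the propagation graph model and the single moment with a vector of temporal moments gives the scheme described above.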

    Deep Learning method for calibrating the polarimetric Propagation graph model

    No full text
    This code learns the parameters of the polarimetric propagation graph model from summary statistics called temporal moments, using a deep neural network.
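    As a rough illustration of the regression idea (not the repository's actual network), the sketch below trains a tiny single-hidden-layer NumPy network to map synthetic summary vectors to parameters; the architecture, layer sizes, and the toy data generator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 "temporal moments" -> 2 model parameters
n_samples, n_moments, n_params, n_hidden = 500, 3, 2, 16

# Synthetic (moments, parameters) pairs standing in for simulator output
theta = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
mix = rng.normal(size=(n_params, n_moments))
moments = theta @ mix + 0.01 * rng.normal(size=(n_samples, n_moments))

# One hidden layer with tanh activation, trained by full-batch gradient descent
W1 = rng.normal(scale=0.5, size=(n_moments, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_params))
b2 = np.zeros(n_params)

lr, losses = 0.05, []
for _ in range(3000):
    h = np.tanh(moments @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2                    # predicted parameters
    err = pred - theta
    losses.append((err ** 2).mean())      # mean-squared-error loss
    # Backpropagation of the MSE gradient
    g_pred = 2 * err / err.size
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1, gb1 = moments.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

    Once trained on simulated pairs, such a network amortizes the inverse mapping: measured temporal moments go in, parameter estimates come out.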

    TWikiL - Twitter Wikipedia Link Dataset

    No full text
    The Twitter Wikipedia Link (TWikiL) dataset contains all Tweets posted on Twitter that contain a Wikipedia URL. The data was collected via Twitter's academic research access and spans 15 years of Tweets, from March 2006 to January 2021. TWikiL comes in two versions: TWikiL_raw is a list of Tweet IDs in CSV format; TWikiL_curated is an SQLite database containing only links to Wikipedia articles. The curated version has been augmented with the language edition that the URL in each Tweet links to, the Wikidata identifier, and a Wikipedia topic category. TWikiL_raw contains 44,945,098 Tweet IDs; TWikiL_curated contains 35,252,782 URL/Wikidata concepts across 34,543,612 unique Tweets, with 474,577 Tweets linking to multiple Wikipedia articles.
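    Since the curated version is an SQLite database, it can be queried directly from Python's standard library. The table and column names below (tweets, tweet_id, language_edition, wikidata_id, topic) are assumptions for illustration; consult the actual TWikiL schema before querying, and the in-memory rows here merely mimic its shape.

```python
import sqlite3

# In-memory stand-in for the curated database (schema assumed, not TWikiL's)
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tweets (tweet_id TEXT, language_edition TEXT, "
    "wikidata_id TEXT, topic TEXT)"
)
conn.executemany(
    "INSERT INTO tweets VALUES (?, ?, ?, ?)",
    [("1", "en", "Q42", "Culture"),
     ("2", "de", "Q64", "Geography"),
     ("3", "en", "Q1", "STEM")],
)

# Example analysis: count Tweets per Wikipedia language edition
rows = conn.execute(
    "SELECT language_edition, COUNT(*) FROM tweets "
    "GROUP BY language_edition ORDER BY COUNT(*) DESC"
).fetchall()
# rows -> [('en', 2), ('de', 1)]
```

    The same pattern (open the database file, group by the augmented columns) supports per-language, per-topic, or per-concept aggregation over the full dataset.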

    Annotated point clouds of buildings: a segmented dataset of single-family houses

    No full text
    The dataset contains 2,904 geometries of single-family houses in the form of annotated point clouds and was developed to train 3D generative adversarial networks (GANs). The geometries are segmented into 3 classes: wall, roof, and floor. The points of the point clouds are saved in .pts files, while their labels are saved in .seg files. The dataset was created in a semi-automated way consisting of two stages: (a) creation of module geometries representing building components (done in Rhino3D) and (b) conversion of the geometries into point clouds with the Cockroach plug-in. 25 wall modules and 35 roof modules were created, and each wall module was combined with each roof module. Data augmentation methods were applied to maximize the size of the dataset: the modules were scaled in 3 ranges and rotated by 90 degrees for a wider feature space. The dataset can be used to train 3D GANs with architecturally relevant data. A connected publication describing a use case of this dataset is to follow.
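    Loading one annotated house means pairing a .pts file of coordinates with its matching .seg file of per-point labels. The sketch below assumes a plain "x y z" line format for .pts and one integer label per line for .seg, and the class-id mapping is a guess; adjust both against the actual files. The tiny synthetic example at the end only demonstrates the call.

```python
import os
import tempfile

import numpy as np

# Label mapping assumed for illustration; verify against the dataset docs
CLASSES = {1: "wall", 2: "roof", 3: "floor"}


def load_house(pts_path, seg_path):
    """Load point coordinates and matching per-point class labels."""
    points = np.loadtxt(pts_path)               # (n_points, 3) x y z rows
    labels = np.loadtxt(seg_path, dtype=int)    # (n_points,) class ids
    if len(points) != len(labels):
        raise ValueError("point/label count mismatch")
    return points, labels


# Tiny synthetic example: three points, two labeled wall and one roof
tmp = tempfile.mkdtemp()
pts_file = os.path.join(tmp, "house.pts")
seg_file = os.path.join(tmp, "house.seg")
with open(pts_file, "w") as f:
    f.write("0 0 0\n1 0 0\n1 1 2\n")
with open(seg_file, "w") as f:
    f.write("1\n1\n2\n")
points, labels = load_house(pts_file, seg_file)
```

    From here, the (points, labels) pairs can be batched into whatever tensor layout the chosen 3D GAN implementation expects.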

    The Influence of Data Storytelling on the Ability to Recall Information — Auxiliary materials

    No full text
    This repository contains auxiliary materials for the CHIIR 2022 paper "The Influence of Data Storytelling on the Ability to Recall Information" by Dominyk Zdanovic, Tanja Julie Lembcke, and Toine Bogers (corresponding author), published in: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval (CHIIR '22), March 14-18, 2022, Regensburg, Germany. The paper presents the results of an experimental comparison of the influence of data storytelling versus traditional data visualizations on the ability to recall information contained in the visualizations. There are two types of auxiliary materials: (1) the questions posed to participants in the two conditions, both post-task and post-experiment, along with the correct answers, and (2) PDF versions of the six visualizations used in the experiment.