    New regime of droplet generation in a T-shape microfluidic junction

    No full text
    We present an experimental study of a new regime of monodisperse micro-droplet generation that we have named the balloon regime. A dispersion of oil in water was studied in a T-junction microfluidic system. Several microfluidic devices with different cross-sections of the continuous-phase and dispersed-phase micro-channels were tested. This new regime appears only at low dispersed-phase velocity. The micro-droplet size is mainly set by the geometry of the T-junction micro-channels, in particular their width and depth, and is independent of the continuous- and dispersed-phase velocities. In our experiments, the velocities of the continuous and dispersed phases, vc and vd respectively, were varied over a wide range: vc from 0.5 to 500 mm/s and vd from 0.01 to 30 mm/s. We show that the continuous phase controls only the micro-droplet density, while the dispersed phase linearly changes the frequency of micro-droplet generation. Another particularity of the present regime, which differentiates it from all other known regimes, is that the micro-droplet retains its circular shape throughout its formation at the T-junction and undergoes no deformation due to drag forces. We propose a mechanism to explain the formation of micro-droplets in this new regime.

    KerasCV and KerasNLP: Vision and Language Power-Ups

    Full text link
    We present the Keras domain packages KerasCV and KerasNLP, extensions of the Keras API for Computer Vision and Natural Language Processing workflows, capable of running on JAX, TensorFlow, or PyTorch. These domain packages are designed to enable fast experimentation, with a focus on ease of use and performance. We adopt a modular, layered design: at the library's lowest level of abstraction, we provide building blocks for creating models and data preprocessing pipelines, and at the library's highest level of abstraction, we provide pretrained "task" models for popular architectures such as Stable Diffusion, YOLOv8, GPT2, BERT, Mistral, CLIP, Gemma, and T5. Task models have built-in preprocessing and pretrained weights, and can be fine-tuned on raw inputs. To enable efficient training, we support XLA compilation for all models, and run all preprocessing via a compiled graph of TensorFlow operations using the tf.data API. The libraries are fully open-source (Apache 2.0 license) and available on GitHub. Comment: Submitted to Journal of Machine Learning Open Source Software.
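    The abstract's high-level "task" API, where preprocessing and pretrained weights are bundled so the model consumes raw inputs directly, can be sketched as follows. This is a hedged illustration assuming a KerasNLP install with access to the `gpt2_base_en` preset; exact preset names depend on the library version.

    ```python
    import keras_nlp

    # High-level "task" model: tokenization, preprocessing, and pretrained
    # weights are built in, so raw strings can be passed directly.
    gpt2_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")

    # Generate a continuation from a raw-text prompt; no manual
    # tokenization step is required on the caller's side.
    output = gpt2_lm.generate("The Keras ecosystem includes", max_length=30)
    print(output)
    ```

    The same model object can then be fine-tuned on raw text via the standard `fit()` call, which is the "fine-tuned on raw inputs" property the abstract describes.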

    Automatic analysis of facilitated taste-liking

    Get PDF
    This paper focuses on: (i) automatic recognition of taste-liking from facial videos, by comparatively training and evaluating models with engineered features and state-of-the-art deep learning architectures; and (ii) analysis of the classification results along the aspects of facilitator type and the gender, ethnicity, and personality of the participants. To this end, a new beverage-tasting dataset acquired under different conditions (human vs. robot facilitator and priming vs. non-priming facilitation) is utilised. The experimental results show that: (i) the deep spatiotemporal architectures provide better classification results than the engineered-feature models; (ii) the classification results for all three classes of liking, neutral, and disliking reach F1 scores in the range of 71%-91%; (iii) the personality-aware network that fuses participants' personality information with facial reaction features provides improved classification performance; and (iv) classification results vary across participant gender, but not across facilitator type or participant ethnicity.

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.

    Functional outcomes in symptomatic versus asymptomatic patients undergoing incisional hernia repair: Replacing one problem with another? A prospective cohort study in 1312 patients

    Get PDF
    Background: Incisional hernias can be associated with pain or discomfort. Surgical repair, especially with mesh reinforcement, may likewise induce pain. The primary objective was to assess the incidence of pain after hernia repair in patients with and without pre-operative pain or discomfort. The secondary objectives were to determine the preferred mesh type, mesh location, and surgical technique for minimizing postoperative pain or discomfort. Materials and methods: A registry-based prospective cohort study was performed, including patients undergoing incisional hernia repair between September 2011 and May 2019. Patients with a minimum follow-up of 3–6 months were included. The incidence of hernia-related pain and discomfort was recorded perioperatively. Results: A total of 1312 patients were included. Pre-operatively, 1091 (83%) patients reported pain or discomfort. After hernia repair, 961 (73%) patients did not report pain or discomfort (mean follow-up = 11.1 months). Of the pre-operatively asymptomatic patients (n = 221), 44 (20%; moderate or severe pain: n = 14, 32%) reported pain or discomfort after a mean follow-up of 10.5 months. Of those patients initially reporting pain or discomfort (n = 1091), 307 (28%; moderate or severe pain: n = 80, 26%) still reported pain or discomfort after a mean follow-up of 11.3 months postoperatively. Conclusion: In symptomatic incisional hernia patients, hernia-related complaints may be resolved in the majority of cases undergoing surgical repair. In asymptomatic incisional hernia patients, pain or discomfort may be induced in a considerable number of patients due to surgical repair, and one should be aware of this postoperative complication.

    Deep learning with Python

    No full text
    DESCRIPTION: Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects.
    KEY FEATURES: Practical code examples; in-depth introduction to Keras; teaches the difference between deep learning and AI.
    ABOUT THE TECHNOLOGY: Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more.
    AUTHOR BIO: Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural networks since 2012. Francois is currently doing deep learning research at Google. He blogs about deep learning at blog.keras.io.
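    In the spirit of the book's practical code examples, a minimal Keras workflow (define, compile, fit, predict) might look like the following sketch. The toy XOR dataset and layer sizes are illustrative choices, not from the book itself.

    ```python
    import numpy as np
    from tensorflow import keras

    # Toy binary-classification data: the XOR mapping.
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
    y = np.array([0, 1, 1, 0], dtype="float32")

    # A small fully connected network built with the Sequential API.
    model = keras.Sequential([
        keras.Input(shape=(2,)),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x, y, epochs=100, verbose=0)

    # Predictions are probabilities in [0, 1], one per input row.
    preds = model.predict(x, verbose=0)
    print(preds.shape)
    ```

    The define/compile/fit/predict sequence shown here is the core loop the book builds on for progressively larger models.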

    Synthesis of new 1,2,4-triazoles; study of their phloem systemicity and their phytosanitary properties

    No full text
    SIGLE record: CNRS T Bordereau / INIST-CNRS (Institut de l'Information Scientifique et Technique), France