IVADO Digital October – 2020


The 2020 edition is now over; stay tuned for news about 2021! In the meantime, we invite you to watch the 2020 presentations on our dedicated YouTube page.

Language

This event will be presented in English and French.

Description

This first-ever edition of IVADO Digital October will showcase student-led research in our IVADO community. All month long, digital intelligence will be front and centre, starting with a distinguished panel of experts on October 1 and followed by presentations of multidisciplinary projects by our scholarship recipients.

Target audience

Our IVADO community.

Call for student entries

Would you like to spotlight your research results as part of IVADO Digital October? If you meet the eligibility criteria, fill out this form by September 15, 2020.

Goals

  • Learn more about the various research projects conducted by our IVADO scholarship recipients;
  • Get a preview of the new IVADO student support and guidance structure for the Fall 2020 semester;
  • Provide networking opportunities for students with various research labs.

Organizers

The IVADO team, in collaboration with the IVADO intersectoral student committee.


Program

Keynote conference (October 1, from 4PM to 6PM)

These activities will take place online.

Quebec at the forefront: Machine learning and operations research, a winning combination (presented in English)

How did Québec become one of the world leaders in digital intelligence?
This discussion will highlight recent distinctions and advances in the fields of machine learning and operations research, and examine the advantages of combining the two disciplines. A historical perspective, but also a very concrete one!

Panelists:

  • Yoshua Bengio
    Professor, Université de Montréal; Scientific Director, Mila and IVADO;
    Holder of the Canada Research Chair in Statistical Learning Algorithms
  • Emma Frejinger
    Professor, Université de Montréal; member of the IVADO Scientific Committee;
    Holder of the CN Chair in Optimization of Railway Operations
  • Andrea Lodi
    Professor, Polytechnique Montréal; co–Scientific Director, IVADO;
    Holder of the Canada Excellence Research Chair in Data Science for Real-Time Decision-Making
  • Moderated by Nathalie Sanon, Head of the IVADO Training Program

Student project presentations, October 8, 4PM-6PM: Smart Cities and Societies

These activities will take place online.

Theme: Smart Cities and Societies

  • 4:00-4:10 PM: Welcome and IVADO presentation
  • 4:10-4:25 PM: Presentation by Léa Ricard

Topic: Optimizing public transit planning to improve service quality

Abstract: In order to increase ridership and attract new users, public transportation agencies put increasing emphasis on improving reliability. Service unreliability negatively affects passengers’ perception of public transport and influences their travel behaviour. Studies have shown that a majority of passengers place more value on reducing travel time variability than on reducing travel time itself (Bates et al., 2001). This study explores approaches to increase reliability during the planning phase, more precisely during vehicle scheduling. We propose a reliable extension of the Multi-Depot Vehicle Scheduling Problem with stochastic travel times (R-MDVSP). The reliability of a schedule is assessed according to the discrete probability mass function of its trips’ departure times, for which we have developed a method to calculate the exact convolution. In a test with real data collected by buses in Montréal, we address the trade-off between the two objectives, namely minimizing planned costs and reducing the risk of unreliability. We show that the R-MDVSP provides more reliable schedules for a negligible increase in planned costs.
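As a rough illustration of the convolution step (a toy sketch, not the authors’ code: the delay distributions below are made up, and the R-MDVSP model itself need not assume independent delays), convolving the discrete probability mass functions of two consecutive delays gives the exact distribution of their sum:

```python
import numpy as np

# Hypothetical P(delay = k minutes) for two consecutive trips of a bus block
delay_trip_1 = np.array([0.5, 0.3, 0.2])         # delays 0, 1, 2 min
delay_trip_2 = np.array([0.6, 0.25, 0.1, 0.05])  # delays 0, 1, 2, 3 min

# Assuming independence, the accumulated delay is the exact convolution of
# the two PMFs: P(Z = k) = sum_i P(X = i) * P(Y = k - i).
total_delay = np.convolve(delay_trip_1, delay_trip_2)

for k, p in enumerate(total_delay):
    print(f"P(total delay = {k} min) = {p:.4f}")
print("sums to", total_delay.sum())  # a valid PMF sums to 1
```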

  • 4:25-4:40 PM: Presentation by Joshua Stipancic

Topic: Modelling Safety with Usage-Based Insurance GNSS Data

Résumé : Conventional car insurance premiums are based on rating variables related to client (e.g. age, sex, claims history) and vehicle (e.g. mileage, type). In usage-based insurance (UBI) programs, premiums are further calibrated using driver-level events (braking, accelerating, speeding, etc.) observed by continuously tracking drivers using GNSS data. Considering that most crashes (and therefore, insurance claims) are attributed to either driver or environment, including contextual data from clients’ routes would further improve discount calibration. The primary purpose of this research is to work towards the inclusion contextual safety information in the UBI program of Intact Insurance. A first study quantified the relationships between several contextual surrogate safety measures (SSMs) and historical crashes, and modelled crashes at the link-level for three Canadian cities (Quebec City, Montreal, and Ottawa). Observed correlations between the SSMs (related to congestion and speed) and historical crash data mirrored previous results and intuition. Results from the spatial Bayesian crash model demonstrated the statistical significance of the proposed SSMs and, importantly for large-scale implementation, the model was observed to be largely consistent between the considered cities. Recognizing that the reliance on an external source of historical crash data is a potential roadblock in this process, a new study examines how claims data (which are readily available to the insurance provider) can supplement or replace crash data in this process. The goal of future work is to distinguish the effects of safety measured at the driver-level (using individual events) and safety measured at the link-level (using contextual measures) on perceived risk.
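For readers unfamiliar with link-level crash modelling, a minimal non-spatial stand-in is a Poisson regression of crash counts on SSMs with a traffic-volume offset. The sketch below uses synthetic data and omits the spatial and Bayesian components of the actual study:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic link-level data standing in for the real SSMs (congestion- and
# speed-related measures) and historical crash counts; all values are made up.
n_links = 500
congestion = rng.uniform(0, 1, n_links)     # hypothetical congestion SSM
speed_var = rng.uniform(0, 1, n_links)      # hypothetical speed-variation SSM
exposure = rng.uniform(100, 5000, n_links)  # traffic volume, used as offset

true_rate = np.exp(-7 + 1.2 * congestion + 0.8 * speed_var) * exposure
crashes = rng.poisson(true_rate)

# Non-spatial Poisson regression; the actual study uses a spatial Bayesian
# model, which additionally shares information between neighbouring links.
X = sm.add_constant(np.column_stack([congestion, speed_var]))
model = sm.GLM(crashes, X, family=sm.families.Poisson(),
               offset=np.log(exposure)).fit()
print("coefficients (const, congestion, speed_var):", model.params)
```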

  • 16h40-16h55 : Présentation de Gabrielle Verreault

Topic: Digital citizenship: Empowering users through digital literacy

Abstract: The protection and management of user data has become a major socio-economic issue. Many actors, whether governmental, private or academic, want their share of user data to enrich their databases and improve their services, their efficiency or their development potential. While most uses of these data are harmless in nature, there is a risk that abusive use could jeopardize users’ privacy. Many nations are hard at work creating legislative frameworks to protect their citizens from data aggregators and the disastrous consequences attached to their practices. However, legislating too severely to protect users from data misuse complicates the work of designers, and can even slow down certain areas of production and innovation, notably research in the technoscientific sector. By bringing the concept of digital literacy to the population level, notably through the development of shared skills in using, understanding and creating digital content (the foundation of digital citizenship), we could promote user empowerment so that users can navigate the digital world safely and act as the primary protectors of their own privacy. The need to legislate on data protection would thus be reduced, and users could enjoy an improved user experience while living in, and benefiting from, a society whose technoscientific sector develops sustainably.

  • 4:55-5:10 PM: Presentation by Dmytro Humeniuk

Topic: Wireless sensor network optimization strategies

Abstract: IoT (Internet of Things) has become an important industry today. IoT is powered by wireless sensor networks (WSNs), which are mostly of the mesh type (e.g. Zigbee, Z-Wave, Wi-Fi mesh). It is therefore important to ensure the reliability and optimal functioning of WSNs. Research in this domain commonly falls into one of three categories: optimal WSN deployment, WSN testing and simulation, and WSN runtime monitoring for anomaly detection. We surmise, however, that all three steps are necessary in the network design and maintenance process. First, the nodes should be deployed optimally to ensure connectivity and to avoid the loss of sent commands. This problem is known to be NP-hard; therefore, polynomial-time approximation algorithms or genetic algorithms with suitable optimization criteria should be used for node placement. Second, the network should be tested to uncover potentially harmful execution scenarios. Testing should proceed in two directions: ensuring network quality requirements and cyber-physical system (CPS) requirements. Network quality testing can address minimal packet delays, connection quality and node load balancing. Turning to CPSs: in WSNs, apart from sensing functions, the devices often perform control functions, forming a CPS. These differ from a typical CPS in that network factors, such as delay or the probability of a lost command, must be taken into account. A promising way to test such a CPS is with model-based approaches and input-generation techniques based on genetic algorithms. Finally, to ensure correct functioning, the network should be monitored at runtime. By monitoring, we mean a daemon that constantly inspects network execution log files and uncovers failures. Here we surmise that the most promising approaches use automatic rule generation and Bayesian networks. Our project is dedicated to developing a framework for WSN deployment that implements all three steps.
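As a toy illustration of the node-placement step (a hypothetical coverage-only criterion; the real problem also involves connectivity and command-loss constraints), a minimal genetic algorithm might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GA: place relay nodes so that randomly scattered sensors are covered.
N_SENSORS, N_NODES, RADIUS, POP, GENS = 40, 5, 0.3, 30, 200
sensors = rng.uniform(0, 1, (N_SENSORS, 2))

def fitness(nodes):
    # Fraction of sensors within RADIUS of at least one relay node
    d = np.linalg.norm(sensors[:, None, :] - nodes[None, :, :], axis=2)
    return (d.min(axis=1) <= RADIUS).mean()

pop = rng.uniform(0, 1, (POP, N_NODES, 2))  # each individual = a node layout
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]          # selection
    children = elite[rng.integers(0, len(elite), POP // 2)].copy()
    children += rng.normal(0, 0.05, children.shape)      # mutation
    pop = np.concatenate([elite, children.clip(0, 1)])

best = max(pop, key=fitness)
print("coverage of best layout:", fitness(best))
```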

  • 5:10-5:25 PM: Presentation by Sébastien Henwood

Topic: Toward low-energy AI

Abstract: Deep learning is driving the artificial intelligence revolution, but the technology remains energy-hungry, which raises challenges for decentralized, offline and, above all, mobile uses. The “Coded Neural Networks” project aims to reduce, at the source, the energy allocated to the hardware performing the computations required by deep networks. The operating environment then becomes uncertain, but that does not mean the deep network stops working. The goal is to preserve as much of the network’s performance as possible while reducing the allocated energy to a minimum.
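A crude way to picture the trade-off (purely illustrative, not the Coded Neural Networks method): train a small model, then measure how its accuracy degrades as noise is injected into its weights, standing in for hardware operated at reduced energy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linearly separable data and a logistic regression trained by
# plain gradient descent; everything here is a made-up stand-in.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

w = np.zeros(20)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def accuracy(weights):
    return (((X @ weights) > 0) == y).mean()

# Perturb the weights with noise proportional to their magnitude, a rough
# proxy for unreliable low-energy compute, and observe the accuracy drop.
for sigma in [0.0, 0.1, 0.5, 1.0]:
    noisy = [accuracy(w + rng.normal(0, sigma * np.abs(w))) for _ in range(20)]
    print(f"weight noise sigma={sigma:.1f}: mean accuracy {np.mean(noisy):.3f}")
```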

  • 5:25-5:40 PM: Presentation by Élodie Deschaintres

Topic: Developing a data fusion method combining a five-year travel survey with passive data streams: toward longitudinal monitoring of mobility

Abstract: Two main types of data are generally distinguished in the transportation field: emerging data, which have appeared recently thanks to new operational data collection methods, and traditional surveys, long used by planners. This coexistence raises questions about the role of each of these information sources, and their potential complementarity creates many methodological challenges. In this context, this project aims to combine Montreal’s regional travel survey (known as the Origine-Destination survey) with passive data streams on the ridership of different transportation modes. The Montreal Origine-Destination survey is a household survey conducted every five years in the Greater Montreal Area. Although it provides a detailed portrait of Montreal mobility over a typical fall day, it suffers from its cross-sectional nature and from the under-representation of alternative modes. This project will therefore seek to compensate for these shortcomings by exploiting various large operational databases, such as transactional data and counts. In particular, a data fusion procedure based on a time-series decomposition method will be proposed. Modal shares, computed from the Origine-Destination survey for an average weekday, will thus be annualized and projected by applying the trends and seasonal patterns observed in the passive data. Ultimately, this project will help fill the gap between two five-year surveys, enabling longitudinal monitoring of travel behaviour and of the interactions between various transportation modes.
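As a minimal sketch of the decomposition idea (hypothetical daily counts and a made-up modal share; the actual fusion procedure is more elaborate), an STL decomposition separates a passive stream into trend and seasonality that can then scale a survey-day estimate:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(3)

# Hypothetical daily transit ridership from a passive stream (e.g. smart card
# transactions), with a slow trend and a weekly cycle; values are made up.
days = pd.date_range("2019-01-01", periods=730, freq="D")
counts = (10000 + 5 * np.arange(730)                       # trend
          + 1500 * np.sin(2 * np.pi * np.arange(730) / 7)  # weekly cycle
          + rng.normal(0, 300, 730))                       # noise
series = pd.Series(counts, index=days)

# Decompose the passive stream into trend + seasonality + residual.
decomposition = STL(series, period=7).fit()

# Project a single survey-day modal share over the whole period by applying
# the trend and seasonal patterns observed in the passive stream (the 24%
# share below is hypothetical).
survey_day_share = 0.24
scaling = (decomposition.trend + decomposition.seasonal) / series.mean()
projected_share = survey_day_share * scaling
print(projected_share.describe())
```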

  • 5:40-6:00 PM: Buffer and closing

Student project presentations, October 15, 4PM-6PM: Health and digital intelligence

These activities will take place online.

Theme: Health and digital intelligence

  • 4:00-4:10 PM: Welcome and IVADO presentation
  • 4:10-4:25 PM: Presentation by Laura Gagliano

Topic: Seizure forecasting to improve management of refractory epilepsy

Abstract: Epilepsy, one of the most prevalent and stigmatized neurological conditions, affects 1% of Canadians and is characterized by recurrent seizures resulting from excessive neuronal discharges. The first line of treatment to control seizures consists of chronic antiepileptic drug therapy; however, over a third of patients remain unresponsive and only 5% are candidates for brain surgery. The most disabling aspect of epilepsy is the unpredictable nature of seizures, which creates a constant source of worry and danger for patients. Recent research from our group and others has demonstrated the feasibility of seizure forecasting based on intracranial electroencephalography (iEEG) recordings. However, state-of-the-art prediction techniques are far too computationally complex to be implemented in a feasible real-time clinical forecasting device. This presentation will give an overview of my research project, which aims to use continuous iEEG recordings from the University of Montreal Hospital Center’s Epilepsy Monitoring Unit to explore new avenues in advanced signal processing, notably higher-order spectral analysis, combined with artificial neural networks for the development of a real-time seizure forecasting algorithm which, if achieved, would offer a life-changing solution to patients with uncontrolled epilepsy.
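For readers curious about higher-order spectral analysis, the sketch below estimates a bispectrum, one such feature, on a synthetic signal with quadratic phase coupling; it is illustrative only and unrelated to the actual iEEG pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic signal: 10 Hz and 25 Hz components plus a 35 Hz term, the kind
# of coupling that ordinary (second-order) spectra cannot isolate.
fs, n_seg, seg_len = 256, 32, 256
t = np.arange(n_seg * seg_len) / fs
x = (np.sin(2*np.pi*10*t) + np.sin(2*np.pi*25*t)
     + 0.5*np.sin(2*np.pi*35*t) + rng.normal(0, 1, t.size))

segments = x.reshape(n_seg, seg_len)
spectra = np.fft.rfft(segments * np.hanning(seg_len), axis=1)

# Bispectrum estimate B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], averaged over
# segments; a strong |B| indicates phase coupling between f1, f2 and f1+f2.
n_f = seg_len // 4  # keep f1 + f2 within the computed band
bispec = np.zeros((n_f, n_f), dtype=complex)
for f1 in range(n_f):
    for f2 in range(n_f):
        bispec[f1, f2] = np.mean(
            spectra[:, f1] * spectra[:, f2] * np.conj(spectra[:, f1 + f2]))

peak = np.unravel_index(np.abs(bispec).argmax(), bispec.shape)
freq_res = fs / seg_len
print("strongest coupling near", peak[0]*freq_res, "Hz and", peak[1]*freq_res, "Hz")
```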

  • 4:25-4:40 PM: Presentation by Brice Rauby

Topic: Temporal subsampling in deep learning for ultrasound localization microscopy

Abstract: Ultrasound localization microscopy makes it possible to image blood flow in organs in a fast, non-invasive and non-ionizing way. Based on the detection of microbubbles injected into the bloodstream, super-resolved angiograms of organs can be obtained. The conventional method seeks to detect isolated microbubbles and therefore requires them to be injected in low concentrations. To image the entire vascular network, one must then wait until even the smallest vessels have been traversed by microbubbles. For 2D imaging, neural networks can detect microbubble trajectories at higher concentrations and thus halve the acquisition time. However, before this work can be applied in 3D, the temporal dimension of the volumes used to train the network must be reduced: adding a new spatial dimension is incompatible with the time scale currently used because of prohibitive memory costs. In this project, the network’s temporal dimension was therefore reduced while preserving image quality, and the inference method was also optimized to be faster and thus applicable in 3D. The results demonstrate the feasibility of the neural network approach for 3D ultrasound localization microscopy.

  • 4:40-4:55 PM: Presentation by Marie-Eve Picard

Topic: The face of pain: Identifying a brain signature of the facial expression of pain using machine learning approaches

Abstract: The facial expression of pain allows us to communicate our state of pain, a possible need for help, and the presence of potential danger in our environment. Pain-related facial movements are associated with both the sensory dimension of pain (i.e., pain intensity) and its affective dimension (i.e., pain unpleasantness). Previous studies using the general linear model (GLM) have identified various brain regions involved in facial expressiveness in response to noxious stimuli, including facial motor areas, pain-processing areas and prefrontal areas. However, the statistical limitations of univariate analyses of neuroimaging data make it impossible to precisely assess the spatial distribution of brain activity. Using multivariate analyses, this project will identify precise patterns for predicting the facial expression of pain from brain activity acquired through functional magnetic resonance imaging (fMRI). The facial expression (assessed with the Facial Action Coding System) of 34 participants exposed to painful but non-damaging phasic thermal stimuli (Pain condition) and to non-painful thermal stimuli (Control condition) was recorded while they were in an fMRI scanner. Multivariate analysis of these data using supervised classifiers will deepen our understanding of the fundamental mechanisms of pain through the identification of a brain signature of the facial expression of pain.
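A schematic version of such a decoding analysis (synthetic activation patterns, not the study’s data or pipeline) can be written with a cross-validated linear classifier:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-ins for trial-wise voxel activation patterns: predict,
# from a whole pattern, whether a trial produced a facial pain expression.
n_trials, n_voxels = 200, 500
y = rng.integers(0, 2, n_trials)            # 1 = facial expression present
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :20] += 0.5                       # weak signal in a few voxels

# Cross-validated decoding accuracy over the full multivariate pattern,
# the kind of evidence unavailable to voxel-wise univariate GLM analyses.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```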

  • 4:55-5:10 PM: Presentation by Caroline Labelle

Topic: MCMC to study chemical compound efficacy when developing new drugs

Abstract: Chemical compounds are tested in various assays from which Efficacy Metrics (EM) can be estimated. Compounds are selected with the aim of identifying at least one that is sufficiently potent and efficient to go into preclinical testing. Selection is based on an EM meeting a specific threshold or on comparison with other compounds.

Current analysis methods provide only point estimates of EM and hardly consider the inevitable experimental noise, thus failing to quantify the uncertainty of the EM on which conclusions are based. We propose to extend our previously introduced rigorous statistical methods (EM inference) to a panel of compounds. Given an efficacy criterion, we aim to identify the compounds with the highest probability of meeting that criterion.

We use a hierarchical Bayesian model to infer EM from dose-response assays. Given the empirical value distributions for an EM of interest, our novel ranking method returns the probability that each compound within a set achieves a given rank. We are able to identify all compounds of an experimental dose-response set with at least a 1% chance of being among the best for a given EM. To further the analysis, we generate DAGs in which a path between two compounds identifies which one is statistically better.

This novel methodology is being developed and applied to the identification of new compounds able to inhibit the cellular growth of leukaemia cells and to the large-scale analysis of dose-response datasets.
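A toy version of the ranking idea (synthetic posterior samples standing in for the output of the hierarchical Bayesian inference) shows how rank probabilities fall out of Monte Carlo draws:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical posterior samples of an efficacy metric for four compounds.
compounds = ["A", "B", "C", "D"]
posterior = np.column_stack([
    rng.normal(0.50, 0.05, 5000),   # compound A
    rng.normal(0.55, 0.10, 5000),   # compound B (potent but uncertain)
    rng.normal(0.45, 0.02, 5000),   # compound C
    rng.normal(0.52, 0.08, 5000),   # compound D
])

# For each posterior draw, rank compounds (rank 0 = best, highest EM);
# averaging over draws estimates P(compound achieves rank r).
ranks = (-posterior).argsort(axis=1).argsort(axis=1)
for i, name in enumerate(compounds):
    probs = [(ranks[:, i] == r).mean() for r in range(len(compounds))]
    print(name, " ".join(f"P(rank {r})={p:.2f}" for r, p in enumerate(probs)))

# Compounds with at least a 1% posterior probability of being the best:
best_prob = (ranks == 0).mean(axis=0)
print("candidates:", [c for c, p in zip(compounds, best_prob) if p >= 0.01])
```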

  • 5:10-5:25 PM: Presentation by Larry Dong

Topic: Evaluating the use of generalized dynamic weighted ordinary least squares for individualized HIV treatment strategies

Abstract: Dynamic treatment regimes (DTRs) are a statistical paradigm in precision medicine that aims to optimize patient outcomes by individualizing treatment. At its simplest, a DTR may require only a single decision to be made; this special case is called an individualized treatment rule (ITR) and is often used to maximize short-term rewards. Generalized dynamic weighted ordinary least squares (G-dWOLS), a DTR estimation method that offers theoretical advantages such as double robustness of the parameter estimators in the decision rules, has recently been extended to accommodate categorical treatments. In this work, G-dWOLS is applied to longitudinal data to estimate an optimal ITR, which is demonstrated in simulations. This novel method is then applied to a population affected by HIV, whereby an ITR for the administration of interleukin-7 (IL-7) is devised to maximize the time during which the CD4 load stays above a healthy threshold (550 cells/μL) while preventing unnecessary injections.
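For intuition, here is a minimal binary-treatment dWOLS-style sketch (G-dWOLS extends this to categorical treatments; the data, blip function and weight choice below are illustrative assumptions, not the study’s analysis):

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic confounded data: treatment A depends on covariate x, and the
# treatment effect ("blip") is psi0 + psi1 * x with psi = (1.0, 0.8).
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))
y = x + a * (1.0 + 0.8 * x) + rng.normal(0, 1, n)

# Balancing weights w = |A - P(A=1|X)|, one valid dWOLS weight choice that
# confers double robustness in the binary-treatment setting.
p_hat = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
w = np.abs(a - p_hat)

# Weighted OLS with a treatment-by-covariate interaction; the coefficients
# on a and a*x estimate the decision-rule (blip) parameters.
X = np.column_stack([np.ones(n), x, a, a * x])
fit = sm.WLS(y, X, weights=w).fit()
print("blip estimates (true 1.0, 0.8):", fit.params[2:])

# Estimated optimal ITR: treat when the predicted blip is positive.
treat = (fit.params[2] + fit.params[3] * x) > 0
print("fraction recommended treatment:", treat.mean())
```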

  • 5:25-5:40 PM: Presentation by Ammar Alsheghri

Topic: A deep learning approach to generating patient-specific teeth

Abstract: Dental offices tackle thousands of dental reconstructions every year. Each crown requires a professional to manually design and input the properties of the tooth to be reconstructed. This time-consuming process is consequently difficult to reproduce and leads to variability in quality. This project will use deep learning to provide automatic initial digital tooth models personalized to an input dental arch in which some teeth are missing. To accomplish this, a dataset of 5,000 digitized arches will be used to train convolutional neural networks (CNNs). To apply CNNs, the 3D arches will be encoded as graphs, and new convolution operators taking graphs as input will be developed.

CNNs for graphs will be used to segment and label teeth. One hundred arches will initially be annotated to identify teeth, gingiva, and the teeth-gingiva boundary. From that set, 80% will be used for training and validation, and the remaining 20% will serve as a test set to evaluate the performance of the developed segmentation architecture. Next, a CNN will be trained to generate a missing tooth given the 31 remaining teeth. Once this initial training is completed, the network will be further trained using systematic tooth removal. The regenerated teeth will be validated against the original teeth that were digitally removed.

The proposed architecture will be designed to continuously learn such that teeth generated by the system can be modified by a dental professional making a restoration. The resulting modification will then be used to retrain the network and increase its effectiveness.

  • 5:40-5:55 PM: Presentation by Marco Bonizzato

Topic: Versatile Bayesian optimization of functional neurostimulation

Abstract: We are developing neuroprosthetic systems for motor control based on intracortical microstimulation (ICMS). Here, we tackled the problem of controlling a multi-dimensional neurostimulation intervention.

In practice, in ICMS as in other functional electrical stimulation applications, stimulus parameters are often arbitrary, and the strategies for optimizing these stimulations are poorly developed. We propose that learning algorithms could be used to quickly optimize the stimulation parameters of an implant according to the specific effects evoked by its electrodes, and to rapidly adapt these parameters to changes that may occur in the nervous system, for example after injury. This would maximize the effectiveness and durability of motor neuroprostheses.

We implanted rats and non-human primates with multi-electrode cortical arrays. Under ketamine anesthesia and in resting awake subjects, ICMS through different electrodes generates a variety of motor outputs. We used a Bayesian learning algorithm based on Gaussian processes to explore the space of stimulation parameters. The algorithm found optimal cortical locations for stimulus delivery in less than half a minute and during active usage of the neuroprosthesis, largely outperforming a human operator. After spinal cord injury, the learning routine optimized stimulation parameters to maximize foot clearance and alleviate leg dragging.
We extended our framework to other neurostimulation treatments to maximize their therapeutic benefits. Furthermore, intelligent algorithmic approaches to “motor mapping” can provide new, optimized tools to support studies of the neural control of movement, skilled motor learning, and the effects of neurological damage and rehabilitation.
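A schematic version of such an optimization loop (synthetic response curve; the channel grid, kernel and acquisition rule are illustrative assumptions) uses a Gaussian-process surrogate with an upper-confidence-bound rule:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(8)

# Made-up "motor response" over a grid of stimulation channels, peaked at
# channel 21; each stimulation returns a noisy measurement.
n_channels = 32
true_response = np.exp(-0.5 * ((np.arange(n_channels) - 21) / 4.0) ** 2)

def stimulate(ch):
    return true_response[ch] + rng.normal(0, 0.05)

X_grid = np.arange(n_channels, dtype=float)[:, None]
tried = [int(rng.integers(n_channels))]
observed = [stimulate(tried[0])]

gp = GaussianProcessRegressor(kernel=Matern(length_scale=3.0), alpha=0.05**2)
for _ in range(15):
    gp.fit(np.array(tried, dtype=float)[:, None], np.array(observed))
    mu, sigma = gp.predict(X_grid, return_std=True)
    nxt = int(np.argmax(mu + 2.0 * sigma))   # upper confidence bound
    tried.append(nxt)
    observed.append(stimulate(nxt))

print("best channel found:", tried[int(np.argmax(observed))])
```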

  • 5:55-6:00 PM: Closing

Student project presentations, October 22, 4PM-6PM: Tomorrow's algorithms

These activities will take place online.

Theme: Tomorrow’s algorithms

  • 4:00-4:10 PM: Welcome and IVADO presentation
  • 4:10-4:25 PM: Gaétan Raynaud

Topic: An extension of physics-informed neural networks to solve partial differential equations with a modal approach

Abstract: Physics-informed neural networks (PINNs) are a deep-learning-based method for solving partial differential equations (PDEs). Introduced in early 2019, the method has attracted growing interest, as it has shown great potential in cases where data is scarce and equations are incomplete.

Fluid mechanics is a natural fit, since classical simulations can be computationally expensive, and tedious when boundary conditions or physical parameters are not completely known. Experimental data, on the other hand, are limited to a small area and generally suffer from noise. PINNs help bridge the gap by approximating the physical solution with a neural network that takes the space-time coordinates as input. Training aims to minimize the distance to sampled data as well as the residuals of the PDE, obtained with automatic differentiation. This allows given parameters of the PDE to be optimized at the same time as the approximated solution.

We propose an extension of this method for oscillatory phenomena such as the free vibration of plates and laminar vortex shedding in flow over a cylinder. A new architecture is introduced to reduce the complexity of PINN models and provide better interpretability. Finally, some prospective applications will be presented.
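For readers new to PINNs, a minimal sketch (an illustrative 1-D heat equation, not the modal architecture proposed in the talk) shows the two-part loss: data mismatch plus the PDE residual from automatic differentiation:

```python
import torch

# PINN sketch for u_t = nu * u_xx: the network u(t, x) is trained on sparse
# "measurements" plus the PDE residual at random collocation points.
torch.manual_seed(0)
nu = 0.1

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def pde_residual(t, x):
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - nu * u_xx

# Sparse samples of the exact solution u = exp(-nu t) sin(x)
t_d = torch.rand(50, 1)
x_d = torch.rand(50, 1) * 3.14159
u_d = torch.exp(-nu * t_d) * torch.sin(x_d)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    t_c = torch.rand(200, 1, requires_grad=True)          # collocation points
    x_c = torch.rand(200, 1, requires_grad=True) * 3.14159
    loss = (torch.mean((net(torch.cat([t_d, x_d], 1)) - u_d) ** 2)
            + torch.mean(pde_residual(t_c, x_c) ** 2))
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```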

  • 4:25-4:40 PM: Sékou-Oumar Kaba

Topic: Deep learning for materials discovery.

Abstract: There is no longer any doubt that machine learning algorithms are superior to human cognition for a variety of problems. Could they also do better than us on some challenging questions in physics and chemistry? It seems they could, and this could be leveraged to accelerate the process of materials discovery.

A first line of work revolves around the fact that AI methods have shown their usefulness in making accurate predictions on the properties of an object based on training examples. Common applications are image and speech recognition. An algorithm could thus be designed to assess and screen potential materials before even having to build them in the laboratory.

Another idea is to use the generative algorithms that have been built in fields like language modeling, image processing, and arts. By using similar techniques, we could come up with new materials to answer specific needs.

These exciting research avenues will be the subject of this presentation. We will also present an ongoing research project at Mila, which aims to design an algorithm able to predict the properties of materials at the smallest scales from the characteristics of their constituent atoms. Such methods are also interesting from a fundamental point of view, as they could help us better understand the physics of ordered atomic systems.

  • 4:40-4:55 PM: Simon Verret

Topic: Inverse problems in quantum materials: analytic continuation of response functions with neural networks.

Abstract: Inverse problems are special problems in which we seek the causes, or inputs, that produce certain results once processed through a physical system or a set of complicated equations. Sometimes computing the outcome from the cause is easy, but computing the cause given the outcome is close to impossible; these are ill-defined inverse problems.

In condensed matter physics, there is one very special case of such an ill-defined problem: the analytic continuation of response functions. Although the details are very technical (we want to recover the frequency dependence of response functions, such as the optical conductivity of materials, given an imaginary-time representation of these functions), in essence it consists of inverting an integral equation.

With such a task, we can generate an infinite number of examples to train machine learning models, but this produces models that are heavily biased towards the generated data. In this project, we discuss how to produce data as complicated as possible to prevent this bias, and how this affects the performance of deep learning models on simpler, more complex, and realistic datasets. We also demonstrate that one can exploit some properties of the problem to constrain the data generation process without loss of generality. Our goal is to find the right data to train realistic and unbiased models for the analytic continuation of response functions in condensed matter physics.
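To make the ill-posedness concrete, one can discretize a standard kernel linking a spectral function A(ω) to imaginary-time data G(τ) (the fermionic kernel below is one common choice; details vary by response function) and inspect its condition number:

```python
import numpy as np

# Forward problem: G(tau) = integral of K(tau, omega) * A(omega) d(omega).
# Computing G from A is easy; recovering A from noisy G is the hard inverse.
beta = 10.0                                 # inverse temperature
tau = np.linspace(0, beta, 64)[:, None]
omega = np.linspace(-8, 8, 128)[None, :]
d_omega = omega[0, 1] - omega[0, 0]

K = np.exp(-tau * omega) / (1 + np.exp(-beta * omega))  # fermionic kernel

# A smooth two-peak spectral function (made up) and its easy forward image
A = np.exp(-(omega - 2) ** 2) + 0.5 * np.exp(-(omega + 3) ** 2 / 2)
G = (K * A).sum(axis=1) * d_omega

# The enormous condition number is why naive inversion of G -> A fails on
# noisy data, and why learned (or regularized) inversions are needed.
print("condition number of K:", np.linalg.cond(K))
```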

  • 4:55-5:10 PM: Sébastien Lachapelle

Topic: Differentiable Causal Discovery from Interventional Data.

Abstract: In this talk, I will give a short introduction to causal discovery: the task of inferring a causal graph from data. I will then present a recent project in which we demonstrate how neural networks and constrained optimization can be used to learn causal structures from interventional data, i.e. data coming from controlled experiments. Our method relies on recent work that reformulates the discrete problem of learning a causal structure as a continuous constrained optimization problem. We give a theoretical justification for our approach and demonstrate that it is competitive with, and sometimes superior to, other state-of-the-art methods in various settings, including when the targets of the interventions are unknown. Moreover, we demonstrate that our approach can leverage powerful density estimators such as normalizing flows.
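The continuous reformulation referred to here hinges on a differentiable acyclicity measure; one widely used choice, h(W) = tr(exp(W∘W)) - d from the NOTEARS line of work (not necessarily the exact formulation of this project), is zero exactly when the weighted adjacency matrix is acyclic:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """h(W) = tr(exp(W * W)) - d; zero iff W encodes a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # elementwise square keeps entries >= 0

dag = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])           # edges 1 -> 2 -> 3, acyclic
cyclic = dag.copy()
cyclic[2, 0] = 1.0                       # adding 3 -> 1 closes a cycle

print("h(DAG)    =", acyclicity(dag))     # ~0
print("h(cyclic) =", acyclicity(cyclic))  # > 0, penalized during training
```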

  • 5:10-5:25 PM: Hamed Hojatian

Topic: Unsupervised deep learning for massive MIMO hybrid beamforming.

Abstract: Hybrid beamforming is a promising technique to reduce the complexity and cost of massive multiple-input multiple-output (MIMO) systems while providing high data rates. However, hybrid precoder design is a challenging task requiring channel state information (CSI) feedback and the solution of a complex optimization problem. We propose a novel RSSI-based unsupervised deep learning method to design hybrid beamforming in massive MIMO systems. Furthermore, we propose i) a method to design the synchronization signal (SS) in initial access (IA); and ii) a method to design the codebook for the analog precoder. We also evaluate system performance through a realistic channel model in various scenarios. We show that the proposed method not only greatly increases spectral efficiency, especially in frequency-division duplex (FDD) communication using partial CSI feedback, but also achieves a near-optimal sum-rate and outperforms other state-of-the-art full-CSI solutions.

  • 5:25-5:40 PM: Bouterfif Salah Eddine

Topic: Exact branch-and-bound algorithm with machine learning for the open-shop scheduling problem.

Abstract: The Open Shop Scheduling Problem (OSSP) has been widely studied in the field of Operations Research (OR) for several decades. The work carried out in this area consists mostly of heuristic methods tailored to specific problems. Although there has been a lot of research on the subject, most exact methods are still limited to solving relatively small instances. In addition, very few researchers have taken recent advances in machine learning (ML) into consideration when improving exact methods for these problems. Our objective is to explore the latest exact OR approaches for solving the OSSP and enrich them with ML techniques in order to combine the advantages of both. We propose an ML framework for an exact branch-and-bound algorithm based on the disjunctive graph model to solve the OSSP with makespan minimization. Our approach consists of using the data resulting from solving a set of randomly generated problem instances to train a neural network (NN) that predicts the new arcs of a “good” disjunctive graph in the branch-and-bound tree. The NN predictions are used to derive a new branching strategy. Computational experiments are then conducted on several sets of well-known benchmark instances to compare the performance of the ML-enriched algorithm to that of the raw exact method.

  • 5:40-6:00 PM: Buffer and closing

Student project presentations, October 29, 4PM-6PM: From Earth to Planets

These activities will take place online.

Theme: From Earth to Planets

  • 4:00-4:10 PM: Welcome and IVADO presentation
  • 4:10-4:25 PM: Myriam Prasow-Émond

Topic: Exoplanets around black holes?

Abstract: X-ray binaries, systems composed of a donor star and a compact object (black hole, neutron star or white dwarf) accreting material from the donor star, are unique laboratories for studying a variety of astronomical phenomena under extreme conditions. Recent studies indicate that sub-stellar companions such as exoplanets and brown dwarfs can exist in a variety of environments, and it was recently argued that X-ray binaries could host planetary systems. However, in high-mass X-ray binaries, where the donor star is a massive star (at least twice the mass of the Sun), the system is generally far too bright to detect sub-stellar companions. Using the NIRC2 instrument at the W. M. Keck Observatory on Mauna Kea in Hawai’i, together with high-contrast and angular differential imaging techniques, we recently obtained the first images of nine nearby high-mass X-ray binaries: RX J1744.7-2713, IGR J18483-0311, SAX J1818.6-1703, 1H2202+501, IGR J17544-2619, 4U1700-37, 4U2206+543, RX J2030.5+4751 and γ Cassiopeiae. In this talk, we present promising preliminary results: we found evidence of one or more sub-stellar companions around most of these extreme systems. A statistical study will soon be conducted to better understand, among other things, the formation of exoplanets and brown dwarfs, the frequency of sub-stellar companions in X-ray binaries, and their impact on the system. This exciting research opens a brand-new sub-domain of astrophysics and could lead to fascinating results.

  • 4:25-4:40 PM: Carter Rhea

Topic: Applying machine learning to astrophysical spectra.

Abstract: Machine learning is rapidly becoming another tool in the astronomer’s statistical toolbox. Recent papers have demonstrated machine learning’s ability to recover degraded data, estimate important photometric parameters, and decode spectra. In this talk, I will focus on the use of convolutional neural networks to decode optical emission-line spectra. SITELLE, the imaging Fourier transform spectrometer at the Canada-France-Hawaii Telescope, creates exquisite data cubes with a spectral resolution reaching R~10000 and over 4 million spatial pixels. While standard fitting techniques exist to extract kinematic parameters such as the velocity and broadening of the emission lines, the number of spectra in each cube makes these techniques computationally expensive. Using a convolutional neural network, we have demonstrated a recoverability rate comparable to traditional methods; more importantly, the computation time was reduced from ~11 days to ~4 hours. Subsequent papers focus on extracting additional information, such as critical line-flux ratios, from the optical spectra.
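As a sketch of the kind of network involved (layer sizes and outputs are illustrative assumptions, not the published architecture), a 1-D CNN mapping a spectrum to velocity and broadening might look like:

```python
import torch
import torch.nn as nn

# Illustrative 1-D CNN: input is a spectrum of n_bins flux values, output is
# a pair of kinematic parameters (velocity, broadening).
class SpectrumCNN(nn.Module):
    def __init__(self, n_bins=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 16), 64), nn.ReLU(),
            nn.Linear(64, 2))  # outputs: velocity, broadening

    def forward(self, x):      # x: (batch, 1, n_bins)
        return self.head(self.features(x))

model = SpectrumCNN()
fake_spectra = torch.randn(8, 1, 512)   # a batch of synthetic spectra
print(model(fake_spectra).shape)        # torch.Size([8, 2])
```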

  • 4:40-4:55 PM: Gabrielle Verreault and Antoine B-Leblanc

Topic: From bios to DOS.

Abstract: More and more, terms, concepts and metaphors from ecology are being used to explain ecosystems other than those of natural environments. Sociology has been drawing on this comparison for a number of years; the urban ecology of the Chicago School, for example, gave rise to an innovative vision of criminology. The objective of this presentation is to lay out theoretical links between natural spaces (the environment) and digital spaces. A few notions will be introduced by way of background: ecosystem, biodiversity and complexity. This will allow us to present two phase changes in the current vision of the digital realm: from digital environment to digital ecosystem, and from the management of digital technologies to their adaptive stewardship. Building on these frequently used notions, we will present a series of concepts from the theoretical framework of ecology and evolution that seem relevant to examine in light of the new digital reality. This presentation, a preliminary to the publication of a longer argument, is intended as part of the broad dialogue aiming to refine our vision of digital development, as our physical and digital lives increasingly mirror one another, and to institute and exercise a community-based mode of governance of science and technology in society.

  • 4:55-5:10 PM: Francis Banville

Topic: Predicting interactions between species in an ecological network.

Abstract: Network ecology is an emerging field of study describing species interactions (e.g. predation, pollination, and parasitism) in a biological community. Ecological networks are at the core of the functioning and stability of our environment, and their emergent structure is the result of many ecological and evolutionary processes. Under the very scarce data regime characterizing network ecology, predicting the emergent structure of ecological networks is an important task that would make their large-scale analysis accessible. In this talk, I will show how to predict the total number of interactions in food webs using a beta-binomial model. This model outperforms previous attempts to predict the number of interactions in food webs while respecting biological constraints. I will then show how to use this model and the principle of maximum entropy to predict other aspects of food-web structure, with a strong emphasis on the degree distribution. I will conclude with potential applications of this method, including simulating the impacts of climate change and habitat loss on the structure of ecological networks.
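A minimal sketch of the counting idea (the parameterization and values below are illustrative assumptions, not the published model): with S species there are at most S² links and, for a connected web, at least S - 1, so the excess over the minimum can be modelled as beta-binomial:

```python
import numpy as np
from scipy.stats import betabinom

# A food web with S species: L = (S - 1) + B links, where B is beta-binomial
# over the S^2 - (S - 1) remaining candidate links.
S = 50
n_extra = S**2 - (S - 1)
mu, phi = 0.08, 20.0                  # hypothetical mean and concentration
a, b = mu * phi, (1 - mu) * phi

model = betabinom(n_extra, a, b)
draws = (S - 1) + model.rvs(size=10000, random_state=0)

print("expected number of links:", (S - 1) + model.mean())
print("90% interval:", np.percentile(draws, [5, 95]))
```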

  • 5:10-5:25 PM: Karine Ung

Topic: Data-driven approach for the development and validation of an Evidence-Based Training (EBT) program for airline pilots.

Abstract: Evidence-Based Training (EBT) arose from the concern that current pilot training programs were no longer meeting the needs of airline operations. Training by repeatedly being tested on the execution of maneuvers (the “tick box” approach) is not enough in this evolving and dynamic industry. Current curricula train pilots to manage situations they have previously practiced during training, when they should instead train pilot competencies, enabling pilots to adapt to situations they have never practiced or encountered before. The objective of this study is to develop an EBT pilot training program based on data analysis and to test it against the current training program. With the required testing resources from the partner(s), it will be possible to detect training gaps between curricula and to identify their safety-related advantages and disadvantages.

  • 5:25-5:40 PM: Jacinthe Pilette

Topic: Searching for new physics with the ATLAS detector using a variational autoencoder.

Abstract: My research project aims to use artificial intelligence techniques to search for new physics with the ATLAS detector at the Large Hadron Collider (LHC). The LHC is the largest physics experiment ever built. Among other things, it enabled the discovery of the Higgs boson in 2012, which led to the 2013 Nobel Prize in Physics. Since this discovery, the top priority in particle physics has been to find new physics beyond the Standard Model, the model describing all particles known to date. The LHC is an enormous producer of data: it generates collisions every 25 nanoseconds, creating hundreds of particles at a time. It is therefore the perfect testing ground for deep learning techniques.

I am conducting a general search for new physics in ATLAS data using a variational autoencoder. First, I am optimizing the algorithm by targeting the search for new physics in what are called particle jets. The algorithm is then tested on simulated jet data to see whether it can identify the presence of artificial new physics. Once this optimization is satisfactory, I will apply the algorithm to real data from the ATLAS detector in the hope of discovering a sign of new physics.
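As a schematic of the anomaly-detection idea (synthetic 10-dimensional "jet" features and a deliberately tiny network; not the ATLAS analysis), a VAE trained on background events assigns higher reconstruction error to events from a shifted distribution:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny VAE: encoder produces a latent mean and log-variance, a sampled latent
# is decoded back, and poorly reconstructed events are flagged as anomalous.
class VAE(nn.Module):
    def __init__(self, dim=10, latent=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent)
        self.logvar = nn.Linear(32, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
background = torch.randn(2000, 10)          # stand-in "Standard Model" events

for _ in range(500):
    recon, mu, logvar = vae(background)
    kl = -0.5 * torch.mean(1 + logvar - mu**2 - logvar.exp())
    loss = nn.functional.mse_loss(recon, background) + 1e-3 * kl
    opt.zero_grad(); loss.backward(); opt.step()

def score(x):                                # reconstruction-error anomaly score
    with torch.no_grad():
        recon, _, _ = vae(x)
        return ((recon - x) ** 2).mean(dim=1)

signal = torch.randn(200, 10) + 2.0          # mock "new physics" events
print("background score:", score(background).mean().item())
print("signal score:    ", score(signal).mean().item())
```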

  • 5:40-6:00 PM: Buffer and closing