Francesco is a Research Scientist in the Quantum Computational Sciences group at IBM Research in Zurich, where he works on the development and implementation of quantum algorithms for physics, chemistry, and machine learning applications. Francesco received a PhD in Physics and an MSc in Theoretical Physics from the University of Pavia, Italy.

Title: Quantum machine learning: technology and applications in the natural sciences and beyond Abstract: Over the last few decades, quantum information processing has emerged as a gateway towards new, powerful approaches to scientific computing. Quantum technologies are nowadays experiencing rapid development and could lead to effective solutions in different domains including physics, chemistry, and artificial intelligence. In this talk, I will review the state of the art and recent progress in the field, with a focus on quantum machine learning and its applications to problems in the domain of the natural sciences. More specifically, I will discuss the use of advanced parametrised quantum circuit models for the generation of molecular force fields and for the analysis of quantum data in high-energy physics. I will also consider the practical implementation of paradigmatic quantum machine learning algorithms on superconducting quantum processors and present dedicated error suppression strategies based on pulse-efficient gate transpilation.

Amira is a postdoctoral researcher at the University of Amsterdam. Her research focuses on the intersection of quantum computing and machine learning in order to solve problems deemed hard to compute classically. She is also a former Google PhD fellow, intern at Google Quantum AI, and predoctoral researcher at IBM Quantum.

Title: On quantum backpropagation, information reuse, and cheating measurement collapse Abstract: The success of modern deep learning hinges on the ability to train neural networks at scale. Through clever reuse of intermediate information, backpropagation facilitates training through gradient computation at a total cost roughly proportional to running the function, rather than incurring an additional factor proportional to the number of parameters - which can now be in the trillions. Naively, one expects that quantum measurement collapse entirely rules out the reuse of quantum information as in backpropagation. But recent developments in shadow tomography, which assumes access to multiple copies of a quantum state, have challenged that notion. Here, we investigate whether parameterized quantum models can train as efficiently as classical neural networks. We show that achieving backpropagation scaling is impossible without access to multiple copies of a state. With this added ability, we introduce an algorithm with foundations in shadow tomography that matches backpropagation scaling in quantum resources while reducing classical auxiliary computational costs to open problems in shadow tomography. These results highlight the nuance of reusing quantum information for practical purposes and clarify the unique difficulties in training large quantum models, which could alter the course of quantum machine learning.
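One way to see where the extra parameter-count factor comes from is the parameter-shift rule: on quantum hardware, each partial derivative requires two separate expectation-value estimates, with no reuse of intermediate quantities as in backpropagation. A minimal, classically simulated sketch (a single Ry rotation measured in Z; all function names here are illustrative, not from the talk):

```python
import math

def expval_z(theta):
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)];
    # <Z> = |amp0|^2 - |amp1|^2 = cos(theta)
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Each gradient entry costs two full circuit evaluations;
    # for p parameters that is 2p evaluations per gradient step.
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.3
g = parameter_shift_grad(expval_z, theta)
# analytic derivative of cos(theta) is -sin(theta); the shift rule is exact here
```

For this single-rotation circuit the shift rule recovers the analytic derivative exactly, which makes the cost accounting (evaluations per parameter) easy to check by hand.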

Michele Grossi is a senior fellow in quantum computing at CERN. He received his industrial PhD in High Energy Physics from the University of Pavia. Michele previously worked at IBM as a Quantum Technical Ambassador and as a Hybrid Cloud Solution Architect. In his current role he co-supervises Quantum Machine Learning projects at CERN. His focus is the development of QML pipelines for HEP problems and their usage in different fields. He is actively collaborating with different research institutions and companies.

Title: Figure of merit for quantum machine learning tasks: explicit and implicit models and losses

Antonio Mezzacapo is a principal research scientist and technical lead for applied quantum computing at the IBM T. J. Watson Research Center, working on quantum algorithms to address hard problems in different areas, such as the simulation of physics and chemistry, and optimization.

Title: Recent algorithms for the ground state problem on quantum computers Abstract: Finding extremal eigenstates of quantum systems is a fundamental challenge for the simulation of many-body problems and quantum machine learning. In this talk, I will discuss some recent developments and results for this problem.

Dr Amir Pourabdollah (PhD, MSc, MEng, BEng) is a senior lecturer in computer science and the program leader of the BSc Artificial Intelligence at Nottingham Trent University (NTU), UK, with research interests in Ambient Intelligence, Fuzzy Logic Systems, Cloud-based AI, and Quantum Computing. He leads the quantum intelligence research projects at NTU, focusing on quantum computer simulators' hardware, quantum annealers, and their applications in AI systems. Amir is an editor of Springer's Journal of Quantum Intelligence. He also works with Microsoft as a Program Advisor to develop educational materials and certifications in the areas of AI, Cloud Computing, and Azure Quantum. Prior to joining NTU in 2017, Amir was a research fellow in computational intelligence at the University of Nottingham, where he received his PhD in Computer Science (2009).

Title: Quantum Annealers for Computational Intelligence Abstract: The adiabatic model of quantum computing, also known as quantum annealing, is an alternative to the circuit model. It is particularly useful for solving certain classes of optimisation problems while avoiding the complexities of quantum circuit design. It has been shown that if an optimisation problem can be formulated as minimising a quadratic polynomial with binary variables, there is a high chance that a quantum annealer can reach an optimised solution much more efficiently than its classical counterpart. However, two main challenges exist in using this model in computational intelligence: firstly, identifying which AI algorithms can be converted to optimisation problems, and secondly, formulating the problem as binary quadratic optimisation (QBO). In practice, there is no straightforward way to address these challenges, so creative solutions must be developed on a case-by-case basis. In this talk, I will review exemplar solutions that cast an AI-domain problem as a QBO formulation and ultimately develop a quantum adiabatic algorithm to tackle it. In particular, I will focus on fuzzy logic systems as the AI-domain problem, and will review the challenges of modelling a rule-based fuzzy logic system in QBO and implementing it on quantum annealers.
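The "quadratic polynomial with binary variables" formulation mentioned above can be sketched in a few lines. The toy coefficients below are purely illustrative (not from the talk), and exhaustive search stands in for the annealer on this tiny instance:

```python
from itertools import product

# Toy QBO instance: minimise E(x) = sum_{(i,j)} Q[i,j] * x_i * x_j, x_i in {0,1}.
# Diagonal entries encode linear terms, off-diagonal entries pairwise couplings.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear (field) terms
    (0, 1): 2.0, (1, 2): 2.0,                  # pairwise penalties
}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2^3 bitstrings, standing in for the annealer
best = min(product((0, 1), repeat=3), key=energy)
# best is (1, 0, 1) with energy -2.0: the penalties forbid adjacent 1s
```

On real hardware the same dictionary-of-couplings form is what gets handed to the annealer; the hard, problem-specific work the talk describes is constructing Q so that its minima encode the AI problem's solutions.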

Prof. Giovanni Acampora is Full Professor in Artificial Intelligence and Quantum Computing at the University of Naples Federico II. Previously, he was Reader in Computational Intelligence (Sept. 2013 - June 2016) at the School of Science and Technology, Nottingham Trent University, Nottingham, U.K. From July 2012 to August 2013, he held a Hoofddocent (associate professor) tenure-track position in Process Intelligence with the School of Industrial Engineering, Information Systems, Eindhoven University of Technology, Eindhoven, the Netherlands. He is the Chair of IEEE-SA 1855WG, and he serves as Editor-in-Chief of Springer's Quantum Machine Intelligence, Associate Editor of Springer's Soft Computing, and Editorial Board Member of several international journals. In 2017, he acted as General Chair of the IEEE International Conference on Fuzzy Systems, the top leading conference in fuzzy logic. He is a member of the scientific board of the Interdepartmental Center for Advanced RObotics in Surgery (ICAROS). His main research interests include computational intelligence, fuzzy modeling, evolutionary computation, and ambient intelligence. Prof. Acampora was a recipient of three prestigious awards, the IEEE-SA Emerging Technology Award in 2016, the 2019 Canada-Italy Innovation Award for Emerging Technologies, and the IBM Quantum Experience Academic Research Program Award, as well as two Best Paper Awards, the first at the United Kingdom Workshop on Computational Intelligence, UKCI 2012 (Edinburgh, Scotland, UK), and the second at the 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2021).

Title: Quantum Evolutionary Algorithms Abstract: The world of computing is shifting towards new paradigms able to solve hard problems with better performance than classical computation. In this scenario, quantum computing is assuming a key role thanks to the recent technological advances achieved by several big companies in developing computational devices based on fundamental principles of quantum mechanics: superposition, entanglement, and interference. These computers will be able to yield performance never seen before in several application domains, and the area of bio-inspired optimization may be the one most affected by this revolution. Indeed, on the one hand, the intrinsic parallelism provided by quantum computers could support the design of efficient algorithms for evolutionary computation by enabling the definition of novel concepts such as quantum chromosomes, entangled crossover, and quantum elitism; on the other hand, evolutionary algorithms could be used to reduce the effect of decoherence in quantum computation and make it more reliable. This talk aims at introducing attendees to this new research area and paving the way towards the design of innovative computing infrastructures where both quantum computing and evolutionary computation take a key role in overcoming the performance of conventional approaches.

Dr. Autilia Vitiello received the M.S. degree cum laude in Computer Science at the University of Salerno in July 2009, defending a thesis titled Time Sensitive Fuzzy Agents: Formal Model and Implementation. She received her Ph.D. in Computer Science at the same university on April 15th, 2013, defending a thesis titled Memetic Algorithms for Ontology Alignment. Since 2018, she has been an Assistant Professor at the Department of Physics ''Ettore Pancini'' of the University of Naples Federico II. She is vice-chair of the IEEE CIS Standards Committee and chair of the Task Force on Datasets for Computational Intelligence Applications. She is part of the IEEE Standards Association 1855 Working Group for Fuzzy Markup Language standardization, where she also serves as Secretary. Moreover, she is chair of the P2976 Working Group. She is also a member of the editorial board of Springer's Quantum Machine Intelligence. Her main research area is Computational Intelligence, in particular Fuzzy Logic and Evolutionary Algorithms, and the integration of these research areas with quantum computing. Dr. Vitiello was a recipient of the Best Paper Award at the United Kingdom Workshop on Computational Intelligence, UKCI 2012 (Edinburgh, Scotland, UK), of the Best Paper Award at the 2021 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2021, and of the IBM Quantum Experience Academic Research Program Award.

Title: Quantum Measurement Error Mitigation through Computational Intelligence Abstract: Quantum computing is a new computation paradigm whose foundations lie in different disciplines such as computer science, physics, and engineering. Recently, quantum computing has entered the so-called Noisy Intermediate-Scale Quantum (NISQ) era, in which devices with a small number of qubits are potentially able to outperform classical computers on specific tasks. However, noise in quantum operators still limits the size of quantum circuits that can be run reliably. Consequently, there is a strong need for error mitigation approaches aimed at increasing the reliability of quantum computation and making this paradigm useful and productive in real-world applications. This talk aims at introducing attendees to this new research area and paving the way towards the exploitation of computational intelligence techniques such as evolutionary algorithms and fuzzy clustering for designing mitigation approaches capable of reducing quantum error significantly with respect to traditional methods.

Title: Trainability and generalization of quantum machine learning models Abstract: Quantum machine learning (QML) is a leading proposal for near-term quantum advantage. However, recent progress in understanding the training landscapes for QML models paints a concerning picture. Exponentially vanishing gradients - barren plateaus - occur for circuits that are deep, noisy, or generate much entanglement. On the flip side, some QML architectures are immune to barren plateaus. The dynamical Lie algebra has been connected to both barren plateaus and overparameterization, suggesting that we could use algebraic properties to engineer favorable training landscapes. Training is only half of the story for QML, as good generalization to testing data is needed as well. We have found surprisingly good generalization properties for QML models in two ways: (1) in-distribution generalization is guaranteed when the training data size is roughly equal to the number of model parameters, and (2) out-of-distribution generalization is guaranteed for locally scrambled ensembles, allowing for product-state to entangled-state generalization. In this talk, I will give an overview of our current understanding of both the trainability and generalization of QML models.

Title: Information scrambling for navigating the learning landscape of a quantum machine learning model Abstract: In this talk, I will focus on quantum machine learning, particularly the Restricted Boltzmann Machine (RBM), which has emerged as a promising alternative approach leveraging the power of quantum computers. The workhorse of our technique is a shallow neural network encoding the desired state of the system, with the amplitude computed by sampling the Gibbs-Boltzmann distribution using a quantum circuit and the phase information obtained classically from the nonlinear activation of a separate set of neurons. In addition to presenting successful applications to the electronic structure of two-dimensional materials, I will discuss and illustrate that the imaginary components of out-of-time-ordered correlators can be related to conventional measures of correlation like mutual information. Such an analysis offers important insights into the training dynamics by unraveling how quantum information is scrambled through such a network, introducing correlation among its constituent sub-systems. This approach not only demystifies the training of quantum machine learning models but can also explicate the capacity of the model.

Title: Combining Gradient Ascent and Feedback Control Abstract: Optimal control algorithms are essential for improving modern quantum devices. While model-based gradient techniques like GRAPE (gradient-ascent pulse engineering) are powerful tools for efficiently finding control pulses, they are not applicable to feedback scenarios, where the control must depend on measurement results. Conversely, modern model-free reinforcement learning techniques can easily deal with feedback, but they are not very efficient, since they do not make use of our knowledge of the underlying physics model. In this talk, I will present our new approach (termed feedback-GRAPE) that enables us to combine model-based techniques with quantum feedback. I will give examples of several tasks that can be efficiently solved using this new approach.

Title: Introduction to quantum (statistical) learning theory Abstract: Given the key position of machine learning in everyday life, it is crucial to (formally) understand what can be efficiently learned or not. Learning theory is thus the field that studies learning problems from a complexity theory point of view, where we define formal models of learning and prove (in)feasibility results. With the advent of quantum computing, quantum speedups on learning tasks are a sought-after application. As in the classical setting, we are also interested in having a clear picture of what can be efficiently learned or not, especially when quantum advantage in such a setting is achieved. This is the goal of quantum learning theory. In this tutorial, I will present the field of learning theory, describing the models and main results. In particular, I will focus on quantum statistical learning theory, a model where data is accessed through its statistics. No previous knowledge of learning theory is expected.

Title: Data mining the output of quantum simulators - from critical behavior to algorithmic complexity Abstract: Recent experiments with quantum simulators and noisy intermediate-scale quantum devices have demonstrated unparalleled capabilities for probing many-body wave functions directly at the single-quantum level via projective measurements. However, very little is known about how to interpret and analyse such huge datasets. In this talk, I will show how it is possible to characterise many-body quantum hardware via direct and assumption-free data mining. The core idea of this programme is that the output of quantum simulators and computers can be construed as a very high-dimensional manifold. Such a manifold can be characterised via basic topological concepts, in particular its intrinsic dimension. Exploiting state-of-the-art tools in non-parametric learning, I will discuss theoretical results for both classical and quantum many-body spin systems that illustrate how data structures undergo structural transitions whenever the underlying physical system does, and display universal (critical) behavior in both the classical and quantum mechanical cases. I will conclude with remarks on the applicability of our theoretical framework to synthetic quantum systems (quantum simulators and quantum computers), and emphasize its potential to provide a direct, scalable measure of the Kolmogorov complexity of output states.

Title: Exploiting machine learning for quantum dynamics and vice versa Abstract: This talk is about our group's recent theoretical research on the reciprocal link between machine learning and the dynamics of quantum systems. First, we will review how artificial neural networks can be used as variational trial wavefunctions to simulate closed and open quantum systems. Then, we will show how the dynamics of quantum hardware can be exploited to create kernel machines that perform advanced tasks.

Title: Machine learning quantum states and operations: from neural networks to optimization on manifolds Abstract: Machine learning techniques have found recent applications in quantum tomography. The underlying idea is to use an efficient ansatz to represent a quantum state or process and learn it from data. Neural network architectures from Restricted Boltzmann Machines to Recurrent Neural Networks have been proposed as ansatzes for quantum states. Such ansatzes can be trained using standard gradient-based optimization to directly estimate a quantum state's density matrix or allow efficient sampling of measurement outcomes. Similar ideas can be applied to learn a quantum process. In this talk, we will discuss several such machine learning methods for quantum state and process tomography. We will elucidate the necessary ingredients for applying machine learning to the tomography problem - from using physics-based constraints on the ansatzes to constraints on the training itself, such as gradient descent on a manifold. We will also compare machine learning to existing standard techniques such as maximum likelihood estimation, compressed sensing, or projection-based algorithms to show how ideas from machine learning can enhance the set of tools for quantum characterization.

Title: Towards an Artificial Muse for new Ideas in Quantum Physics Abstract: Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire new surprising ideas. I will talk about how AI can be used as an artificial muse in quantum physics, suggesting surprising and unconventional ideas and techniques that the human scientist can interpret, understand, and generalize.
[1] Krenn, Kottmann, Tischler, Aspuru-Guzik, Conceptual understanding through efficient automated design of quantum optical experiments. Physical Review X 11(3), 031044 (2021).
[2] Krenn, Pollice, Guo, Aldeghi, Cervera-Lierta, Friederich, Gomes, Hase, Jinich, Nigam, Yao, Aspuru-Guzik, On scientific understanding with artificial intelligence. arXiv:2204.01467 (2022).
[3] Krenn, Zeilinger, Predicting research trends with semantic and neural networks with an application in quantum physics. PNAS 117(4), 1910-1916 (2020).

Title: Attractor Neural Networks: storage capacity and learning Abstract: One way to understand quantum neural networks is to adapt classical cases into the quantum regime. Attractor neural networks are able to retrieve different configurations after they are applied several times, allowing each initial state to be associated with the closest stable configuration of the network. The quantum case is obtained by studying which completely positive trace-preserving (CPTP) maps admit the largest number of stationary states. I will show that, in this case, the attractor associated with an arbitrary input state is the one minimizing their relative entropy. We will discuss the storage capacity of these quantum neural networks and why they outperform their classical counterparts.

Title: Variational Quantum Imaginary Time Evolution: A discussion on error bounds, efficient approximations, and application examples Abstract: Variational quantum imaginary time evolution (VarQITE) allows us to simulate imaginary time dynamics of quantum systems with near-term compatible quantum circuits. These types of dynamics can be used to tackle a variety of important tasks such as searching for ground states, preparing Gibbs states, and optimizing black box binary optimization functions. In this talk, I will discuss several properties of this method and present examples executed with numerical simulations as well as actual quantum hardware. While variational approaches have the advantage of being compatible with short-depth quantum circuits, the underlying approximation errors are typically difficult to understand. I will explain how to find and efficiently evaluate a posteriori bounds for the approximation error of this variational technique.
Furthermore, knowing the computational cost of a method as well as the related scaling behavior is important to understand what resources are required for the execution of an algorithm. This talk will also present an efficient approximation of VarQITE that reduces the cost significantly. Finally, I will present examples of VarQITE with a special focus on a real-world optimization problem, i.e. feature selection.
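As a minimal illustration of the dynamics VarQITE approximates, one can evolve a toy two-level Hamiltonian in imaginary time exactly, with a first-order Euler step and renormalisation; the state then converges to the ground state. The Hamiltonian and step size below are illustrative choices, not from the talk, and this sketch omits the variational ansatz entirely:

```python
# Exact (non-variational) imaginary-time evolution on a 2x2 Hamiltonian:
# repeatedly apply (I - dtau * H) and renormalise. The component along the
# lowest-energy eigenvector grows fastest, so psi converges to the ground state.
H = [[1.0, 0.5],
     [0.5, -1.0]]      # toy Hamiltonian; exact ground energy is -sqrt(1.25)
psi = [1.0, 0.0]       # initial state with nonzero ground-state overlap
dtau = 0.05            # imaginary-time step

for _ in range(2000):
    new = [psi[0] - dtau * (H[0][0] * psi[0] + H[0][1] * psi[1]),
           psi[1] - dtau * (H[1][0] * psi[0] + H[1][1] * psi[1])]
    norm = (new[0] ** 2 + new[1] ** 2) ** 0.5
    psi = [new[0] / norm, new[1] / norm]

# Variational energy <psi|H|psi> after evolution
energy = sum(psi[i] * H[i][j] * psi[j] for i in range(2) for j in range(2))
```

VarQITE replaces the explicit statevector update with an update of circuit parameters chosen to follow these same dynamics, which is where the approximation error bounds discussed in the talk come in.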