Quantum computers have the potential to solve some complex problems much faster than their classical equivalents. Significant research effort is aimed at determining which algorithms offer an advantage for which use cases. One application where quantum computers could bring great advantages is solving (partial) differential equations, which have a broad range of applications, for instance in wave-propagation models. Few differential equations admit an analytical solution; for most, heuristic methods are required to approximate a solution. Well-known heuristic techniques include the finite element and finite difference methods, in which the considered space is partitioned and the systems of linear equations resulting from this partitioning must be solved. Quantum computing puts forward new methods to solve these systems of linear equations and hence these differential equations. First, the HHL algorithm gives an efficient way to solve a linear system of equations. The HHL algorithm comes with drawbacks, but in this specific use case some of these objections might be circumvented. Quantum computers furthermore offer the variational approach: an optimization-based path to solving differential equations. With this approach, the devices might even ‘learn’ the noise patterns emerging in present-day quantum computers and compensate for them. In this work we revisit quantum methods for solving partial differential equations and consider how well they work in practice. We also discuss possible caveats and bottlenecks of the approaches.
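As a concrete illustration of the linear systems these quantum methods target, the following minimal Python sketch (not taken from the paper) discretises a 1D Poisson equation with finite differences; the resulting system A u = f is exactly the kind of input an HHL-style or variational solver would receive. The problem size, grid, and right-hand side are illustrative choices.

```python
import numpy as np

# Discretise the 1D Poisson equation -u''(x) = f(x) on [0, 1] with
# homogeneous Dirichlet boundary conditions, using n interior grid points.
n = 8                      # illustrative size (a power of two, convenient for qubit encodings)
h = 1.0 / (n + 1)          # grid spacing
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)      # example right-hand side

# Standard second-order finite-difference matrix: tridiagonal with 2 on the
# diagonal and -1 on the off-diagonals, scaled by 1/h^2.
A = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Classical reference solution of A u = f; a quantum approach (HHL or a
# variational solver) would target this same linear system.
u = np.linalg.solve(A, f)
print(u)
```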
Radar and sonar information processing is a promising application area of quantum computing in the near future. Many use cases in this area are computationally heavy and might benefit greatly from a quantum approach. In this paper, an overview of use cases in this application area is given, and each is scored on quantum readiness, added value, and expected horizon. From this overview, acoustic localisation, generative models, compressive sensing, normalisation, and classification are selected as the most promising application areas.
Due to the distance limitation of quantum communication via ground-based fibre networks, space-based quantum key distribution (QKD) is a viable solution to extend such networks over continental and, ultimately, global distances. Compared to LEO, QKD from GEO would offer substantial advantages: large coverage, a continuous link to ground stations (limited by cloud cover), round-the-clock operation (limited by background light), and no tracking required. However, it comes with large link losses given the space-to-ground distance. TNO, together with Eutelsat and CGI-NL, performed a detailed study on the feasibility of QKD from GEO, including a high-level system design of the payload and ground segment. We conclude that QKD from GEO is technically feasible, and a favourable solution if the satellite needs to act as an untrusted node (that is, no security assumptions are required for the space segment). However, the optimal solution, generating higher value for money, is a hybrid system implementing the BBM92 QKD protocol in both an untrusted and a trusted mode. To reach a minimum required secure bit rate of ~1 bit/s in untrusted mode, two ~0.5 m diameter telescopes in the space segment are required with <0.25 µrad pointing accuracy and a <1 GHz entangled-photon source, in combination with ~2 m diameter telescopes on the ground. Details of our assumptions and results and drawings of the high-level system design will be presented, as well as a development roadmap describing the required technology improvements and building blocks for the overall hybrid approach, which is applicable to non-GEO applications as well.
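To give a feel for the large link losses mentioned above, the following back-of-the-envelope Python sketch estimates the diffraction-limited loss of a single GEO downlink using the telescope apertures quoted in the abstract. The 810 nm wavelength and the simple Airy-divergence model are our own illustrative assumptions, not values from the study; pointing error, atmospheric effects, and detector efficiency are ignored.

```python
import numpy as np

# Order-of-magnitude free-space diffraction loss for a GEO downlink.
wavelength = 810e-9        # m, typical entangled-photon wavelength (assumed)
d_tx = 0.5                 # m, space-segment telescope diameter (from abstract)
d_rx = 2.0                 # m, ground telescope diameter (from abstract)
link = 36_000e3            # m, approximate GEO-to-ground distance

theta = 2.44 * wavelength / d_tx          # full divergence angle (first Airy null)
beam_diameter = theta * link              # beam footprint at the ground station
fraction = (d_rx / beam_diameter) ** 2    # crude captured-power fraction
loss_db = -10 * np.log10(fraction)

print(f"~{loss_db:.0f} dB diffraction loss per downlink "
      f"(doubled for the two simultaneous downlinks of untrusted-mode BBM92)")
```

With these assumptions the estimate lands in the tens of dB per downlink, which illustrates why untrusted-mode operation from GEO pushes telescope and source requirements so hard.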
Running general quantum algorithms on quantum computers is hard, especially at the early stage of development of quantum computers that we are at today. Many resources are required to transform a general problem so it can run on a quantum computer, for instance to satisfy the topology constraints of the quantum hardware. Furthermore, quantum computers need to operate at temperatures close to absolute zero, so resources are also required to keep the quantum hardware at that level. Therefore, simulating small instances of a quantum algorithm is often preferred over running it on actual quantum hardware. This is both cheaper and gives debugging capabilities that are unavailable on actual quantum hardware, such as evaluation of the full quantum state, both at intermediate points in the algorithm and at the end. By simulating small instances of quantum algorithms, a quantum algorithm can be checked for errors and debugged before implementing and running it on actual quantum hardware for larger instances. There are multiple initiatives to create quantum simulators, and while they look alike, there are differences among them. In this work we compare seven often-used quantum simulators offered by various parties by implementing the Shor code, an error-correction technique. The Shor code can detect and correct all single-qubit errors in a quantum circuit; for most multi-qubit errors, correct detection and correction is not possible. We compare the seven quantum simulators on different aspects, such as how easy it is to implement the Shor code, what their capabilities are regarding translation to actual quantum hardware, and what the possibilities are for simulating noise. We also discuss aspects such as topology restrictions and the programming interface.
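As an indication of what such an implementation looks like, below is a minimal sketch of the Shor-code encoding circuit in Qiskit (chosen here purely for illustration; the seven simulators compared in the paper are not named in this abstract). The injected error and the full-state inspection at the end illustrate the debugging capability discussed above.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Encoding circuit of the 9-qubit Shor code: qubit 0 holds the logical
# state, the other eight start in |0>.
qc = QuantumCircuit(9)

# Outer layer: three-qubit phase-flip code across block leaders 0, 3, 6.
qc.cx(0, 3)
qc.cx(0, 6)
qc.h([0, 3, 6])

# Inner layer: three-qubit bit-flip code within each block.
for leader in (0, 3, 6):
    qc.cx(leader, leader + 1)
    qc.cx(leader, leader + 2)

# Inject a single-qubit error to be detected and corrected downstream,
# e.g. a bit flip on qubit 4; the Shor code can correct any such error.
qc.x(4)

# On a simulator the full quantum state is available for debugging, which
# is the advantage over real hardware highlighted above.
state = Statevector(qc)
print(state.dim)  # 2**9 amplitudes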
Algorithms for the detection and tracking of (moving) objects can be combined into a system that automatically extracts relevant events from a large amount of video data. Such a system (data pipeline) can be particularly useful in video surveillance applications, notably to support analysts in retrieving information from hours of video while working under strict time constraints. Such data pipelines entail all sorts of uncertainties, however, which can lead to erroneous detections being presented to the analyst. In this paper we present a novel method to attribute a confidence of correct detection to the output of a computer vision data pipeline. The method relies on a data-driven approach: a machine learning-based classifier is built to separate correct from erroneous detections. It is trained on features extracted from the pipeline; the features relate both to raw data properties, such as image quality, and to video content properties, such as detection characteristics. The results are validated using two full-motion-video datasets from airborne platforms: the first of the same type as the training set (same context), the second of a different type (new context). We conclude that the output of this classifier can be used to build a confidence of correct detection, separating the true positives from the false positives. This confidence can furthermore be used to prioritize the detections in order of reliability. The study concludes by identifying the additional work needed to improve the robustness of the method.
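The core mechanism, a classifier whose score doubles as a confidence for ranking detections, can be sketched in a few lines of Python. The random-forest model and the synthetic features below are our own illustrative stand-ins, not the pipeline, feature set, or classifier used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch of the confidence idea: train a classifier on per-detection
# features (e.g. image-quality and detection characteristics) with labels
# marking correct (1) vs erroneous (0) detections. All data here is random
# stand-in data for illustration only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))   # stand-in for pipeline features
y_train = rng.integers(0, 2, 500)     # stand-in for correct/erroneous labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# For new detections, the predicted probability of class 1 serves as a
# confidence of correct detection, usable to rank detections by reliability.
X_new = rng.normal(size=(10, 4))
confidence = clf.predict_proba(X_new)[:, 1]
ranked = np.argsort(confidence)[::-1]   # most reliable detections first
print(confidence[ranked])
```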