AI Designs Quantum Physics Experiments Beyond What Any Human Has Conceived

The Purge

Scientific American ^ | 7/2/2021 | Anil Ananthaswamy

Originally built to speed up calculations, a machine-learning system is now making shocking progress at the frontiers of experimental quantum physics



Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn, Anton Zeilinger of the University of Vienna and their colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, working with colleagues in Toronto, has refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.

“It is amazing work,” says theoretical quantum physicist Renato Renner of the Institute for Theoretical Physics at the Swiss Federal Institute of Technology Zurich, who reviewed a 2020 study about THESEUS but was not directly involved in these efforts.

Krenn stumbled on this entire research program somewhat by accident when he and his colleagues were trying to figure out how to experimentally create quantum states of photons entangled in a very particular manner: When two photons interact, they become entangled, and both can only be mathematically described using a single shared quantum state. If you measure the state of one photon, the measurement instantly fixes the state of the other even if the two are kilometers apart (hence Einstein’s derisive comments on entanglement being “spooky”).
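For concreteness, a textbook example of such a shared state (standard notation, not a formula taken from the article) is the two-photon superposition below, in which neither photon has a definite value of its own until one is measured:

```latex
% A maximally entangled two-photon state: finding photon A in state 0
% instantly fixes photon B to state 0, and likewise for state 1.
|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle_A|0\rangle_B + |1\rangle_A|1\rangle_B\right)
```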

In 1989 three physicists—Daniel Greenberger, the late Michael Horne and Zeilinger—described an entangled state that came to be known as “GHZ” (after their initials). It involved four photons, each of which could be in a quantum superposition of, say, two states, 0 and 1 (a quantum state called a qubit). In their paper, the GHZ state involved entangling four qubits such that the entire system was in a two-dimensional quantum superposition of states 0000 and 1111. If you measured one of the photons and found it in state 0, the superposition would collapse, and the other photons would also be in state 0. The same went for state 1. In the late 1990s Zeilinger and his colleagues experimentally observed GHZ states using three qubits for the first time.
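Written out in standard notation (consistent with the description above, though not copied from the GHZ paper), the four-qubit state is:

```latex
% Four-qubit GHZ state: a two-dimensional superposition of 0000 and 1111.
|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\left(|0000\rangle + |1111\rangle\right)
```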

Krenn and his colleagues were aiming for GHZ states of higher dimensions. They wanted to work with three photons, where each photon had a dimensionality of three, meaning it could be in a superposition of three states: 0, 1 and 2. This quantum state is called a qutrit. The entanglement the team was after was a three-dimensional GHZ state that was a superposition of states 000, 111 and 222. Such states are important ingredients for secure quantum communications and faster quantum computing. In late 2013 the researchers spent weeks designing experiments on blackboards and doing the calculations to see if their setups could generate the required quantum states. But each time they failed. “I thought, ‘This is absolutely insane. Why can’t we come up with a setup?’” Krenn says.
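In the same notation, the state the team was chasing is the three-photon, three-dimensional superposition:

```latex
% Three-dimensional GHZ state of three qutrits.
|\mathrm{GHZ}_{3,3}\rangle = \frac{1}{\sqrt{3}}\left(|000\rangle + |111\rangle + |222\rangle\right)
```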

To speed up the process, Krenn first wrote a computer program that took an experimental setup and calculated the output. Then he upgraded the program to allow it to incorporate in its calculations the same building blocks that experimenters use to create and manipulate photons on an optical bench: lasers, nonlinear crystals, beam splitters, phase shifters, holograms, and the like. The program searched through a large space of configurations by randomly mixing and matching the building blocks, performed the calculations and spat out the result. MELVIN was born. “Within a few hours, the program found a solution that we scientists—three experimentalists and one theorist—could not come up with for months,” Krenn says. “That was a crazy day. I could not believe that it happened.”

Then he gave MELVIN more smarts. Anytime it found a setup that did something useful, MELVIN added that setup to its toolbox. “The algorithm remembers that and tries to reuse it for more complex solutions,” Krenn says.
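The article does not show MELVIN's actual code, but the two ideas it describes, random mixing of building blocks plus a growing toolbox of remembered sub-setups, can be sketched roughly as follows. Every name here is a hypothetical placeholder; in particular, score() fakes the real quantum-optics simulation with a random number:

```python
import random

# Hypothetical sketch of a MELVIN-style search (not Krenn's actual code).
BASE_TOOLBOX = ["crystal", "beam_splitter", "phase_shifter", "hologram", "mirror"]

def score(setup):
    # Placeholder: simulate the optical setup and return how closely its
    # output matches the target entangled state (1.0 = perfect match).
    return random.random()

def search(target=0.9999, max_tries=100_000):
    # Toolbox entries are tuples of elements, so a remembered sub-setup
    # can be dropped into a candidate exactly like a single building block.
    toolbox = [(element,) for element in BASE_TOOLBOX]
    for _ in range(max_tries):
        # Randomly mix and match building blocks into a candidate experiment.
        pieces = random.choices(toolbox, k=random.randint(3, 12))
        setup = sum(pieces, ())
        s = score(setup)
        if s >= target:
            return setup              # a setup that produces the target state
        if s > 0.99:
            # Remember useful partial solutions and reuse them in later
            # candidates, as the article says the upgraded MELVIN does.
            toolbox.append(setup)
    return None

print(search())
```

The point of the sketch is only the feedback loop: anything that scores well goes back into the toolbox, so later candidates can build on it.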

It was this more evolved MELVIN that left Krenn scratching his head in a Viennese café. He had set it running with an experimental toolbox that contained two crystals, each capable of generating a pair of photons entangled in three dimensions. Krenn’s naive expectation was that MELVIN would find configurations that combined these pairs of photons to create entangled states of at most nine dimensions. But “it actually found one solution, an extremely rare case, that has much higher entanglement than the rest of the states,” Krenn says.

Eventually, he figured out that MELVIN had used a technique that multiple teams had developed nearly three decades ago. In 1991 one method was designed by Xin Yu Zou, Li Jun Wang and Leonard Mandel, all then at the University of Rochester. And in 1994 Zeilinger, then at the University of Innsbruck in Austria, and his colleagues came up with another. Conceptually, these experiments attempted something similar, but the configuration that Zeilinger and his colleagues devised is simpler to understand. It starts with one crystal that generates a pair of photons (A and B). The paths of these photons go right through another crystal, which can also generate two photons (C and D). The paths of photon A from the first crystal and of photon C from the second overlap exactly and lead to the same detector. If that detector clicks, it is impossible to tell whether the photon originated from the first or the second crystal. The same goes for photons B and D.

A phase shifter is a device that effectively increases the path a photon travels as some fraction of its wavelength. If you were to introduce a phase shifter in one of the paths between the crystals and kept changing the amount of phase shift, you could cause constructive and destructive interference at the detectors. For example, each of the crystals could be generating, say, 1,000 pairs of photons per second. With constructive interference, the detectors would register 4,000 pairs of photons per second. And with destructive interference, they would detect none: the system as a whole would not create any photons even though individual crystals would be generating 1,000 pairs a second. “That is actually quite crazy, when you think about it,” Krenn says.
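The arithmetic behind those numbers is plain amplitude addition: when the two photon-pair sources are indistinguishable, their amplitudes add before being squared. A toy check of the article's figures (not a simulation of the real experiment):

```python
import numpy as np

rate_per_crystal = 1000   # pairs per second from each crystal

# Detected pair rate ~ |1 + e^(i*phi)|^2 * single-crystal rate, where phi is
# the phase shift introduced between the two (indistinguishable) sources.
for phi, label in [(0.0, "constructive"), (np.pi, "destructive")]:
    rate = rate_per_crystal * abs(1 + np.exp(1j * phi)) ** 2
    print(f"{label}: {rate:.0f} pairs per second")

# constructive: 4000 pairs per second
# destructive: 0 pairs per second
```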

MELVIN’s funky solution involved such overlapping paths. What had flummoxed Krenn was that the algorithm had only two crystals in its toolbox. And instead of using those crystals at the beginning of the experimental setup, it had wedged them inside an interferometer (a device that splits the path of, say, a photon into two and then recombines them). After much effort, he realized that the setup MELVIN had found was equivalent to one involving more than two crystals, each generating pairs of photons, such that their paths to the detectors overlapped. The configuration could be used to generate high-dimensional entangled states.

Quantum physicist Nora Tischler, who was a Ph.D. student working with Zeilinger on an unrelated topic when MELVIN was being put through its paces, was paying attention to these developments. “It was kind of clear from the beginning [that such an] experiment wouldn’t exist if it hadn’t been discovered by an algorithm,” she says.

Besides generating complex entangled states, the setup using more than two crystals with overlapping paths can be employed to perform a generalized form of Zeilinger’s 1994 quantum interference experiments with two crystals. Aephraim Steinberg, an experimentalist at the University of Toronto, who is a colleague of Krenn’s but has not worked on these projects, is impressed by what the AI found. “This is a generalization that (to my knowledge) no human dreamed up in the intervening decades and might never have done,” he says. “It’s a gorgeous first example of the kind of new explorations these thinking machines can take us on.”

In one such generalized configuration with four crystals, each generating a pair of photons, and overlapping paths leading to four detectors, quantum interference can create situations where either all four detectors click (constructive interference) or none of them do so (destructive interference).

But until recently, carrying out such an experiment remained a distant dream. Then, in a March preprint paper, a team led by Lan-Tian Feng of the University of Science and Technology of China, in collaboration with Krenn, reported that they had fabricated the entire setup on a single photonic chip and performed the experiment. The researchers collected data for more than 16 hours: a feat made possible because of the photonic chip’s incredible optical stability, something that would have been impossible to achieve in a larger-scale tabletop experiment. For starters, the setup would require a square meter’s worth of optical elements precisely aligned on an optical bench, Steinberg says. Besides, “a single optical element jittering or drifting by a thousandth of the diameter of a human hair during those 16 hours could be enough to wash out the effect,” he says.

During their early attempts to simplify and generalize what MELVIN had found, Krenn and his colleagues realized that the solution resembled abstract mathematical forms called graphs, which contain vertices and edges and are used to depict pairwise relations between objects. For these quantum experiments, every path a photon takes is represented by a vertex. And a crystal, for example, is represented by an edge connecting two vertices. MELVIN first produced such a graph and then performed a mathematical operation on it. The operation, called “perfect matching,” involves generating an equivalent graph in which each vertex is connected to only one edge. This process makes calculating the final quantum state much easier, although it is still hard for humans to understand.
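As a rough illustration of the idea (schematic only, not Krenn's exact formalism), here is a small enumerator of perfect matchings, with photon paths as vertices and crystals as edges; a perfect matching connects each vertex to exactly one edge:

```python
def perfect_matchings(vertices, edges):
    """Yield every way to cover all vertices exactly once with the given edges."""
    vertices = set(vertices)
    if not vertices:
        yield []
        return
    v = min(vertices)                       # pick any uncovered vertex
    for (a, b) in edges:
        if v in (a, b) and {a, b} <= vertices:
            for rest in perfect_matchings(vertices - {a, b}, edges):
                yield [(a, b)] + rest

# Four photon paths (a..d); each crystal is an edge between two paths.
paths = {"a", "b", "c", "d"}
crystals = [("a", "b"), ("c", "d"), ("a", "c"), ("b", "d")]
for m in perfect_matchings(paths, crystals):
    print(m)
# [('a', 'b'), ('c', 'd')]
# [('a', 'c'), ('b', 'd')]
```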

That changed with MELVIN’s successor THESEUS, which generates much simpler graphs by winnowing the first complex graph representing a solution that it finds down to the bare minimum number of edges and vertices (such that any further deletion destroys the setup’s ability to generate the desired quantum states). Such graphs are simpler than MELVIN’s perfect matching graphs, so it is even easier to make sense of any AI-generated solution.
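A greedy version of that winnowing can be sketched in a few lines, assuming (as a placeholder) some still_works() check that re-runs the physics calculation on the reduced graph:

```python
def winnow(edges, still_works):
    """Greedily delete edges while the setup still produces the target state."""
    edges = list(edges)
    for e in list(edges):
        trial = [x for x in edges if x != e]
        if still_works(trial):
            edges = trial        # the edge was redundant; drop it for good
    return edges

# Toy usage: pretend the setup "works" as long as edges 1 and 3 both remain.
print(winnow([1, 2, 3, 4], lambda es: 1 in es and 3 in es))   # -> [1, 3]
```

If adding elements never breaks a working setup, a single pass like this ends with a graph from which no further edge can be deleted, which is the minimality property the article describes.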

Renner is particularly impressed by THESEUS’s human-interpretable outputs. “The solution is designed in such a way that the number of connections in the graph is minimized,” he says. “And that’s naturally a solution we can better understand than if you had a very complex graph.”

Eric Cavalcanti of Griffith University in Australia is both impressed by the work and circumspect about it. “These machine-learning techniques represent an interesting development. For a human scientist looking at the data and interpreting it, some of the solutions may look like ‘creative’ new solutions. But at this stage, these algorithms are still far from a level where it could be said that they are having truly new ideas or coming up with new concepts,” he says. “On the other hand, I do think that one day they will get there. So these are baby steps—but we have to start somewhere.”

Steinberg agrees. “For now, they are just amazing tools,” he says. “And like all the best tools, they’re already enabling us to do some things we probably wouldn’t have done without them.”

--------------------

I, for one, welcome our new AI overlords.
 
I wonder if AI can come up with a practical solution to faster than light travel.
 
I wonder if AI can come up with a practical solution to faster than light travel.

Where would we go?
We are descendants of explorers. This planet will one day die. It may not be for millions of years, or just a few hundred years, so our descendants have the right to survive, and spreading out to other planets will ensure their continued existence. However, if you are of the mind that we should just enjoy what we've got and any descendants should just stay on this rock and accept their fate, I don't agree with it.
 
This planet will one day die. It may not be for millions of years, or just a few hundred years,

That is untrue. Our planet will continue to orbit our sun, completely oblivious to the parasitical life that infests its skin for the next five and a half BILLION years.
 
Originally built to speed up calculations, a machine-learning system is now making shocking progress at the frontiers of experimental quantum physics .........
The authors don't say what kind of algorithm they used, but there is a hint at the end of the article:
For a human scientist looking at the data and interpreting it, some of the solutions may look like ‘creative’ new solutions. But at this stage, these algorithms are still far from a level where it could be said that they are having truly new ideas or coming up with new concepts,”

It seems that they are using an adaptation of a method called "genetic programming". It uses a "gene" pool of random "creatures" - configurations of parameters. Closeness to the end scientific goal is the criterion for "survival fitness". The computer starts splicing and trading genes (sets of parameters) in the pool and drops in occasional mutations. The fitness is tested for each new configuration. Those configurations that are far from the goal are eventually dropped and the configurations closest to the goal survive.

The tools in the article's toolbox are the genes. A configuration of tools is a chromosome. As time goes on, the pool becomes richer with configurations that are meaningful to the end goal.

Genetic programming has been very important in designing aircraft turbine blades. I don't know this for a fact, but my guess is that they used genetic programming to solve their experimental problem.
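For readers unfamiliar with the technique, here is a bare-bones version of the loop described above (a generic textbook genetic algorithm, not anything taken from Krenn's papers; the fitness function is an arbitrary stand-in for "closeness to the end scientific goal"):

```python
import random

GENES = ["crystal", "beam_splitter", "phase_shifter", "hologram", "mirror"]

def fitness(chromosome):
    # Arbitrary stand-in goal: reward a crystal followed by a beam splitter.
    return sum(1 for a, b in zip(chromosome, chromosome[1:])
               if (a, b) == ("crystal", "beam_splitter"))

def evolve(pop_size=50, length=10, generations=100, mutation_rate=0.05):
    pool = [[random.choice(GENES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)       # test survival fitness
        survivors = pool[: pop_size // 2]          # drop configurations far from the goal
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # splice and trade genes
            child = a[:cut] + b[cut:]
            for i in range(length):                # occasional mutations
                if random.random() < mutation_rate:
                    child[i] = random.choice(GENES)
            children.append(child)
        pool = survivors + children
    return max(pool, key=fitness)

print(evolve())
```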

 
One could say the AI proved and revealed Kohn's and David Bohm's theorems on cause and effect connectivity.
 
Sorry to disappoint, but anyone using terms like Artificial Intelligence or Machine Learning is peddling PR marketing hype and is not worth listening to.
Neither of these actually exists or will ever exist.

Artificial Intelligence is sold as if someday computers will be sentient, and that can never happen.
The reason why is that we would first have to know exactly what sentience is, have a reason to try to implement it, and then succeed at it.
None of those three will ever be true.

What Artificial Intelligence really is, is any program humans can figure out how to write that does something complicated and impresses people, as if the achievement were really the computer's instead of the human programmers'.

With Machine Learning, machines can't know anything, so they can't learn.
What Machine Learning is really about is just that humans hate looking at large data results, so they write programs to look for patterns in the data so they don't have to.
But the human programmer has to first come up with the idea of looking for a particular pattern in the data before the computer can run it and verify that the pattern exists.
So the claim that the computer came up with data-analysis results that were unanticipated has to just be a lie.
The computer program is not going to find any data patterns it was not specifically programmed to look for.
It would not know how.
 
Sorry to disappoint, but anyone using terms like Artificial Intelligence or Machine Learning is peddling PR marketing hype and is not worth listening to.
Neither of these actually exists or will ever exist.

Artificial Intelligence is sold as if someday computers will be sentient, and that can never happen.
The reason why is that we would first have to know exactly what sentience is, have a reason to try to implement it, and then succeed at it.
None of those three will ever be true.
Artificial Intelligence as it is today is not ever meant to exhibit sentience.
With Machine Learning, machines can't know anything, so they can't learn.
What Machine Learning is really about is just that humans hate looking at large data results, so they write programs to look for patterns in the data so they don't have to.
That is not true in all cases. One example is to train a neural network system to find the position and orientation of a semiconductor chip to high accuracy. It can happen in a dozen or so milliseconds. That goes beyond humans hating to look at large data sets.
What Artificial Intelligence really is, is any program humans can figure out how to write that does something complicated and impresses people, as if the achievement were really the computer's instead of the human programmers'.
Yes, that is often used by a marketing department, but there are many applications where a trained multilayer neural network has "synapse weights" that are beyond understanding.
But the human programmer has to first come up with the idea of looking for a particular pattern in the data before the computer can run it and verify that the pattern exists.
That is called supervised learning. That isn't the only application. There is also unsupervised learning, where patterns are not known and several data sets are analyzed to discover whether they have any patterns in common. Also, in post #7 I mentioned genetic algorithms, where there is a goal and the purpose of the algorithm is to find out how best to achieve that goal.

It is unfortunate that marketing hype gives AI a bad name among those who know a bit about AI.
 
We can improve our computer code writing. People are harder to fix. The bar to be better than a human at decision making is pretty low.

But you just pointed out the contradiction.
Humans are flawed, so then the code we write is going to always be even MORE flawed than we are.
So computers are always going to be worse at everything than we are.

The mechanisms that made humans what we are took hundreds of millions of years of evolution.
Which is essentially field trials.
The code humans write is always going to be garbage in comparison.
 
Artificial Intelligence as it is today is not ever meant to exhibit sentience.

That is not true in all cases. One example is to train a neural network system to find the position and orientation of a semiconductor chip to high accuracy. It can happen in a dozen or so milliseconds. That goes beyond humans hating to look at large data sets.

Yes, that is often used by a marketing department, but there are many applications where a trained multilayer neural network has "synapse weights" that are beyond understanding.

That is called supervised learning. That isn't the only application. There is also unsupervised learning, where patterns are not known and several data sets are analyzed to discover whether they have any patterns in common. Also, in post #7 I mentioned genetic algorithms, where there is a goal and the purpose of the algorithm is to find out how best to achieve that goal.

It is unfortunate that marketing hype gives AI a bad name among those who know a bit about AI.

Before a neural net can start to assign weights, human programmers first have to design and implement the decision tree.
So then again it is the humans setting it all up. The final data accumulation and the decisions based on it are done by the computer, but completely under human-established routines and parameters.
Humans have to do all the organization because we are the ones who define success.
Computers have no intrinsic values other than arbitrary mathematical operations.
 
So computers are always going to be worse at everything than we are.
Before a neural net can start to assign weights, human programmers first have to design and implement the decision tree.
So then again it is the humans setting it all up. The final data accumulation and the decisions based on it are done by the computer, but completely under human-established routines and parameters.
Humans have to do all the organization because we are the ones who define success.
Computers have no intrinsic values other than arbitrary mathematical operations.
In a chip-location program there is no decision tree. The network is programmed to handle any chip. No further programming is necessary. In the factory, a worker simply presses a button, and the same algorithm learns the pattern (changes the synaptic weights) for recognition of the new chip.
 
Humans are flawed, so then the code we write is going to always be even MORE flawed than we are

Perfect solution ... we get computers to write their own code ... nothing could possibly go wrong...

 
