Quantum Bayesian Networks

June 20, 2013

Approximate Quantum Compiling (and “Efficient Quantum Circuits for Diagonal Unitaries Without Ancillas” by Jonathan Welch, Daniel Greenbaum, Sarah Mostame, Alán Aspuru-Guzik)

Filed under: Uncategorized — rrtucci @ 7:18 pm

Check out

“Efficient Quantum Circuits for Diagonal Unitaries Without Ancillas”, by
Jonathan Welch, Daniel Greenbaum, Sarah Mostame, Alán Aspuru-Guzik
http://arxiv.org/abs/1306.3991

Despite having four authors, three from Harvard University and one from MIT Lincoln Laboratory, and despite having 21 references, this paper does not mention any of my work in this area, so let me mention it myself, because I don’t think anybody else will.

By quantum compiling a unitary matrix, I will mean decomposing that matrix into a sequence of “elementary gates” (i.e., one and two qubit gates). I’ve written several previous posts in this blog about quantum compiling. See here.

The subject of quantum compiling has been of interest to me since about 1998-1999, when I published on arXiv my two “Rudimentary Quantum Compiler” papers (see, for example, quant-ph/9902062) and a C++ computer program called Qubiter.

In these quantum compiling papers and software, I believe I was the first one to apply something known as the CSD (Cosine Sine Decomposition) to quantum computing, for the task of quantum compiling. The CSD is a very powerful and famous matrix decomposition from linear algebra. Previous papers like the famous one

“Elementary gates for quantum computation”,
by A. Barenco (Oxford), C. H. Bennett (IBM), R. Cleve (Calgary), D. P. DiVincenzo (IBM), N. Margolus (MIT), P. Shor (AT&T), T. Sleator (NYU), J. Smolin (UCLA), H. Weinfurter (Innsbruck)
http://arxiv.org/abs/quant-ph/9503016 (nine authors! A cast of thousands for a theoretical paper, a paper written by committee? But not bad at all.)

used a different type of matrix decomposition method that is “one-sided” instead of “double-sided” like CSD.
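
For readers who haven’t seen the CSD before, here is a minimal sketch of the “double-sided” splitting using SciPy’s cossin routine (the random 4x4 test matrix and the 2+2 block split are my own illustrative choices, not taken from any of the papers above):

```python
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

# Cosine-Sine Decomposition of a random 2-qubit (4x4) unitary:
#   U = L @ CS @ R, where L and R are block-diagonal (two 2x2 blocks each)
#   and CS is the cosine-sine middle factor.
U = unitary_group.rvs(4, random_state=42)

L, CS, R = cossin(U, p=2, q=2)        # p, q: dimensions of the upper-left block

assert np.allclose(L @ CS @ R, U)

# In a recursive quantum compiler a la Qubiter, the block-diagonal factors L
# and R become multiplexed (controlled) operations on the remaining qubits,
# and CS becomes a multiplexed Ry rotation; the recursion bottoms out in
# one- and two-qubit gates.
print(np.round(CS, 3))
```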

The Harvard-MIT Lincoln Lab paper claims that a 2003 paper by Bullock and Markov was the first to give an exact decomposition of diagonal unitary matrices into elementary gates. That’s false. The Bullock-Markov paper repeats the exact decomposition of diagonal unitary gates that was given quite explicitly 5 years earlier in my “Rudimentary Quantum Compiler” papers (see, for example, page 23 of quant-ph/9902062) and that had already been implemented, also 5 years earlier, in the publicly available software Qubiter.
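
To make the disputed decomposition concrete, here is a minimal numpy sketch (a toy example of mine, not code from any of the papers) of the standard exact decomposition of a two-qubit diagonal unitary into Rz rotations and CNOTs, with the rotation angles given by the Walsh-Hadamard transform of the phases; the convention Rz(t) = diag(e^{-it/2}, e^{+it/2}) is assumed:

```python
import numpy as np

# Two-qubit diagonal unitary D = diag(exp(i*phi_x)), x = (x1 x2) in binary,
# decomposed exactly into Rz rotations and CNOTs via the Walsh-Hadamard
# transform of the phases.
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, size=4)          # phases phi_x, x = 0..3
D = np.diag(np.exp(1j * phi))

# Walsh-Hadamard coefficients: phi(x) = sum_j c_j * (-1)^(j . x)
H = np.array([[1, 1], [1, -1]])
W = np.kron(H, H)                                # rows/cols indexed by (x1 x2)
c = W @ phi / 4.0                                # c_j for j = 00, 01, 10, 11

def rz(t):
    # Rz(t) = diag(exp(-i t/2), exp(+i t/2))  (assumed convention)
    return np.diag([np.exp(-1j * t / 2), np.exp(+1j * t / 2)])

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                   # control = qubit 1 (MSB),
                 [0, 1, 0, 0],                   # target  = qubit 2 (LSB)
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

circuit = (
    np.exp(1j * c[0]) * np.eye(4)                # j = 00: global phase
    @ np.kron(I2, rz(-2 * c[1]))                 # j = 01: Rz on qubit 2
    @ np.kron(rz(-2 * c[2]), I2)                 # j = 10: Rz on qubit 1
    @ CNOT @ np.kron(I2, rz(-2 * c[3])) @ CNOT   # j = 11: CNOT-conjugated Rz
)

assert np.allclose(circuit, D)
print("exact decomposition verified")
```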

Of course, the idea of exact quantum compiling was only ever meant to be used for bottleneck steps involving a small number of qubits in a quantum computer program. Exact compilation of a general unitary matrix acting on a large number n of qubits is intractable, since it requires at least (2^n)^2 gates. This (2^n)^2 complexity bound was pointed out in the 1995 Barenco et al. “cast of thousands” paper. The bound was probably known much earlier than that, and it is fairly obvious in retrospect: if it weren’t true, there would be no advantage to quantum computing over classical computing.
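
For what it is worth, the back-of-the-envelope argument behind that bound is just parameter counting (my paraphrase, not a quote from any of the papers):

```latex
% A generic n-qubit unitary simply has too many real parameters:
\dim_{\mathbb{R}} \mathrm{U}(2^n) \;=\; (2^n)^2 \;=\; 4^n ,
```

while each one- or two-qubit gate contributes only O(1) real parameters, so a circuit of g elementary gates can only reach an O(g)-dimensional family of unitaries; hitting a generic element of U(2^n) therefore requires g on the order of 4^n.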

The idea of finding approximations to an exact quantum compilation (by dropping gates that are “not doing much”) is almost as old as the mountains, or at least as old as quantum computing itself. It already shows up prominently in the famous 1994 paper by Don Coppersmith, where, besides inventing his efficient, exact decomposition of the discrete Fourier transform matrix into elementary gates, he also pointed out that if one only wishes to compile the discrete Fourier transform matrix approximately, to within an a priori stipulated error ε, then one can drop many gates from his exact compilation.
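
As a rough illustration of Coppersmith’s point, here is a small gate-counting sketch (the helper qft_gate_counts and the bandwidth parameter b are mine; b plays roughly the role of log2(n/ε)): the exact QFT circuit on n qubits uses n Hadamards plus n(n-1)/2 controlled-phase gates R_k = diag(1, e^{2πi/2^k}), and the approximate version simply drops every R_k with k > b.

```python
# Gate counts for the exact vs. banded (approximate) QFT on n qubits.
def qft_gate_counts(n, b=None):
    """Total elementary-gate count; b=None means the exact circuit."""
    hadamards = n
    if b is None:                          # exact QFT: all controlled phases
        cphases = n * (n - 1) // 2
    else:                                  # keep only R_k with k <= b
        cphases = sum(min(n - j - 1, b - 1) for j in range(n))
    return hadamards + cphases

for n in (8, 64, 256):
    print(n, qft_gate_counts(n), qft_gate_counts(n, b=10))
```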

I personally have written at least two papers on approximate quantum compiling:

  • “Quantum Compiling with Approximation of Multiplexors”, by Robert R. Tucci (arXiv abstract)

  • “Oracular Approximation of Quantum Multiplexors and Diagonal Unitary Matrices”, by Robert R. Tucci (arXiv abstract)

Note that the approximate compilation of diagonal unitary matrices is a large part of what I try to do in those papers (I also try to approximate some block-diagonal unitary matrices). Note also that truncating the Walsh transform series as a method of doing approximate quantum compiling is already used in quant-ph/0412072 (see, for example, Eq. (42) of that paper). Truncating the Walsh series is the main approximation method, besides Trotter-Suzuki (which they are certainly not the first to use), employed by the Harvard-MIT Lincoln Lab paper. Despite what they imply, the idea is not new. I repeat: truncating the Walsh transform series as a method of doing approximate quantum compiling is not a new idea.
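
Since that is the method in dispute, here is a minimal numpy sketch of Walsh-series truncation itself (the phase profile, cutoff and names are mine, purely for illustration): take the Walsh-Hadamard transform of the diagonal phases, zero out the small coefficients, and note that each dropped coefficient removes one Rz (plus its CNOT conjugation) from the exact circuit sketched earlier.

```python
import numpy as np

# Approximate compiling of an n-qubit diagonal unitary by truncating the
# Walsh(-Hadamard) series of its phase function.
n = 6
N = 2 ** n

x = np.arange(N) / N
phi = np.sin(2 * np.pi * x)                       # a smooth phase profile

# Walsh-Hadamard transform: phi(x) = sum_j c_j * (-1)^(j . x)
H = np.array([[1, 1], [1, -1]])
W = H
for _ in range(n - 1):
    W = np.kron(W, H)
c = W @ phi / N

cutoff = 1e-2
c_trunc = np.where(np.abs(c) >= cutoff, c, 0.0)   # drop small Walsh terms
phi_approx = W @ c_trunc                          # phases of the cheaper circuit

err = np.max(np.abs(np.exp(1j * phi) - np.exp(1j * phi_approx)))
kept = int(np.count_nonzero(c_trunc))
print(f"kept {kept} of {N} Walsh terms, max entrywise error {err:.2e}")
```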

By the way, one kind of approximate compilation of diagonal unitary matrices, the one described in http://arxiv.org/abs/0901.3851, is already implemented in publicly available software (see my “quantum circuit generator” programs, in particular QuanSuite, Quibbs and QOperAv, at my website www-ar-tiste.com). Diagonal unitary matrices arise when one does quantum phase estimation, which is also already implemented in my quantum circuit generator software.
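
And, for the phase-estimation remark, a tiny numpy check (a toy example of mine): when the unitary being estimated is diagonal, its controlled powers are diagonal too, so the controlled-U^(2^k) stage of the phase-estimation circuit is itself one big diagonal unitary of exactly the kind discussed above.

```python
import numpy as np

# If U is diagonal, controlled-U^(2^k) is still diagonal, so the
# controlled-powers stage of phase estimation is one big diagonal unitary.
phi = np.array([0.3, 1.1])                     # eigenphases of a 1-qubit U
U = np.diag(np.exp(1j * phi))

k = 3
CU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), np.linalg.matrix_power(U, 2 ** k)]])

assert np.allclose(CU, np.diag(np.diag(CU)))   # still diagonal
print(np.round(np.angle(np.diag(CU)), 3))
```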


June 8, 2013

Valiant’s PAC Learning = Frequentist Hypothesis Testing and Confidence Intervals

Filed under: Uncategorized — rrtucci @ 9:40 am

I was Googling last night to see if I’m crazy. Well, maybe I am, but specifically I wanted to check my perception of the frequentist tendencies of complexity theorists. And I found this funny (in an understated way) blog post by Larry Wasserman on his blog “Normal Deviate”. Check it out:

“Aaronson, COLT, Bayesians and Frequentists” (May 5, 2013)

Excerpts:

I am reading Scott Aaronson’s book “Quantum Computing Since Democritus”

Much of the material on complexity classes is tough going but you can skim over some of the details and still enjoy the book. (That’s what I am doing.) There are at least 495 different complexity classes: see the complexity zoo. I don’t know how anyone can keep track of this.

So my claim is that computational learning theory is just the application of frequentist confidence intervals to classification.
There is nothing bad about that. The people who first developed learning theory were probably not aware of existing statistical theory so they re-developed it themselves and they did it right.

June 3, 2013

The Truth About Complexity Theorists Finally Beginning to Emerge

Filed under: Uncategorized — rrtucci @ 3:46 pm

Recently, someone asked Scott Aaronson in Scott’s blog the question

Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?

to which he replied

I’d ascribe maybe a 40% posterior probability to Bayesianism being true. (Up from my prior of 20%, after reading The Signal and the Noise by Nate Silver.)

With 60% probability, I think quantifying our uncertainty using probabilities is great whenever possible, but is unambiguously meaningful only when an event is sampled according to a probabilistic process that we know something about theoretically (e.g., in quantum mechanics), or in the limited domain of “bettable events” (i.e., events that belong to a large-enough ensemble of similar events that one could form a decent betting market around them). In other cases—including many of the ones people care about the most—I think we’re really in a state of Knightian uncertainty, where at most we can meaningfully give upper and lower bounds on the probability of something happening. And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning. Indeed, this might both explain and justify the phenomenon of risk-aversion in economics, as well as well-known “paradoxes” of decision theory such as the Ellsberg paradox.

Again, though, that’s only with 60% probability, and is susceptible to revision as new information comes in.

Let me repeat, taking some choice phrases out of context to better bolster my argument:

“In other cases—including many of the ones people care about the most—”
“And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning.”

In other words, Scott is a virulent, rabid anti-Bayesian.

According to Wikipedia, “In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.” Now, why would an alleged scientist put so much stock in an example from economics, the dismal science? Gag me with a spoon! Especially somebody who works almost exclusively in quantum mechanics, which, by Scott’s own admission, is a probabilistic theory very amenable to Bayesian analysis. Oh, I see: he has concluded, rightly or wrongly, that Bayesian thinking doesn’t fit nicely within his narrow field of vision, a field of vision severely limited by the blinders of a complexity-theory party line enunciated long ago by some Prince Valiant guy in his Pac-Man Model.

Number of times Scott mentions “D-Wave”, “Bayesian networks” or “error correction” in his book (which might as well be titled “Quantum Computing Since Democritus, but Skipping D-Wave, Bayesian Networks and Error Correction”): zero, zero and 3 times, respectively. Does the guy have some gigantic blind spots or what? Cataracts and tunnel vision at 32. Very sad.

For all its numerous faults, at least D-Wave is doing some Bayesian modeling for AI under the direction of Hartmut Neven. Okay, Hartmut is not the world’s greatest authority on quantum mechanics (he referred to the Heisenberg uncertainty principle as the “Hindenburg uncertainty principle” in this video), but at least he is a true Bayesian, as are most practical people in engineering and science today, with the possible exception of some complexity theorists with anti-Bayesian crackpot ideas. (Scott should try convincing Israeli scientists to implement Iron Dome’s software without a Kalman filter, and see how long it takes before they ship him to a loony bin.)

Google, by its own admission, is a lover of Bayesian thinking, which includes Bayesian networks, truth, justice and the American way. D-Wave/Neven are Bayesian freedom fighters, just like Google. And Aaronson is speaking on behalf of all quantum complexity theorists when he utters this anti-Bayesian hate language. So do you think it is surprising that Google should prefer D-Wave/Neven to quantum complexity theorists for its quantum computing institute? Should Google hire any quantum complexity theorists at all for its QC institute? I think not! Let them eat cake and apply for a job at MIT under Aaronson (a temporary job with slave wages instead of a handsomely paid permanent job with free gourmet cafeteria food). I would say: no Google jobs for complexity theorists unless they apologize for their ugly past anti-Bayesian behavior.

Blacklists are considered a bad thing. Like Senator Joseph McCarthy’s blacklist of communist sympathizers. Or, in The Mikado, the Lord High Executioner’s little list of people “who would not be missed” if, “as some day it may happen”, they were executed. But would a blacklist of anti-Bayesians (not a list of poor, deluded, confused frequentists, but one of outright anti-Bayesian racists like Scott) be such a bad thing? And would it be such a bad thing for Google to compile such a list? I mean, strictly speaking, doing so would not be doing evil, would it?
