Quantum Bayesian Networks

March 30, 2014

The Inflationary Quantum Computer

Filed under: Uncategorized — rrtucci @ 9:53 am

Wow-wee! Check out the following article in The Guardian:

MIT Boffins Conjecture that the Inflationary Universe is a Quantum Computer (April 1, 2014)

an excerpt:

In the past, Seth Lloyd, MIT professor of Mechanical Engineering, has conjectured that the Universe is a quantum computer.

More recently, an experiment called BICEP2, conducted at the South Pole, has detected evidence of gravitational waves in the cosmic microwave background. Boffins worldwide believe this to be a smoking gun for a universe with a Big Bang beginning, followed by a brief period of extremely rapid expansion called Inflation, followed by our current, slow-as-molasses rate of expansion.

So now Seth Lloyd and his two collaborators, the identical twins A. Minion and B. Minion, are adding a new wrinkle to their theory. They now claim that the inflationary universe (or, alternatively, the inflationary Everett-Linde multiverse) is a quantum computer.

According to Seth Lloyd, his inflationary theory not only solves the 3 problems which the old, fuddy-duddy inflationary models solve (viz., the horizon problem, the flatness problem, and the magnetic-monopole scarcity problem), but it also solves the conundrum of why P!=NP in our present universe.

According to Lloyd, a field of Dark Pion particles exploded 10^(-36) seconds after the Big Bang. Pions are the quanta of the field of problems that are solvable in Polynomial time. Some scientists call them polynomions (you won’t find any pictures of them under Google Images because they are so difficult to visualize, neither waves nor particles). The Dark Pion explosion ended 10^(-32) seconds after the Big Bang. The duration of this explosion is called the Lloyd inflationary period. During the Lloyd inflationary period, many of the most sacred laws of physics were violated with impunity: information traveled faster than the speed of light, energy was not conserved, even P!=NP did not hold.

Let R be the radius of the bubble of incompressible quantum information and quantum entanglement that is our universe. Lloyd believes that during the Lloyd inflationary period, R grew at a rate far higher than the speed of light. This was possible because during that period P=NP, so the universe was able to perform calculations at a prolific, promiscuous rate, called the “Twitter Rate Upper Bound” (named after Oxford U. Prof. Robin Twitter).

Yakov B. Zeldovich once said that the Universe is the poor man’s accelerator. Lloyd likes to add that the Universe is the poor man’s inflationary quantum computer.

Alan Guth often says that cosmic inflation is the ultimate free lunch. Lloyd reflects that cosmic inflation is the ultimate free, open-source, life-giving-app of the cosmic quantum computer.

March 29, 2014

“You have to think like a high energy physicist”

Filed under: Uncategorized — rrtucci @ 4:51 am

The title of this blog post is a quote from John Martinis, taken from the following talk he gave at Google LA in October of last year. The talk has only recently (Feb 28) been put on the web. Check it out. It’s about 1 hr long. I highly recommend it. It’s crystal clear and covers quite a lot of territory. If, after you finish watching it, you want even more detail, I recommend Martinis’ website at UCSB, where he has links to all the doctoral theses of his past students. Doctoral theses are a treasure trove of information.

I predict that John Martinis’ QC designs will totally dominate quantum computer hardware R&D for the next 5-10 years.

D-Wave’s QC is fine and I wish them well. However, I specialize in writing QC software, and all the QC software I’ve ever written is for gate model QCs, not adiabatic QCs, so my heart is invested in the gate model. Perhaps history will prove that this is a good analogy:
D-Wave QC ~ analog computer,
Martinis’ QC ~ first vacuum tube computer, like the ENIAC.

There are other gate model QC designs out there besides Martinis’, but his design is already FAR more advanced than those of his competitors. Furthermore, his qubits are bigger (100 microns) and therefore easier to connect to with leads than those of competing QC designs (e.g., ion trap). I never liked the fact that with the ion trap design, you have to physically move the qubits from location to location. Smells too mechanical to me. Mechanical means are always slower than electrical means. And they also break down more quickly.

Here are some brief notes I took from Martinis’ Google talk. I just wrote down some numerical values. I didn’t write down anything about all his wonderful explanations and diagrams of the physics and math that describe his QC.

What would M need to factor a 2048-bit number?

2E8 physical qubits,
10 MWatt,
1 day runtime,
cost = $100-500 million,
length of machine = 1-10 meters

Fault Tolerant Error Correction With Surface Code

How efficient is surface code?
Let
r_n = (number of physical qubits)/(number of logical qubits)

r_p = p/p_max = (probability of error per physical gate)/(maximum allowed probability of error). p_max = 1E-2 for surface code. Other, less forgiving codes have p_max = 1E-3.

r_e = number of logical errors per unit of time

Then
r_p = 1E-1, r_e = 1/(age of universe) ===> r_n = 3000
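
For the curious, here is a rough back-of-the-envelope sketch of where a number like r_n = 3000 comes from. It is my own sketch, not Martinis’ calculation: I assume the commonly quoted surface-code scaling p_logical ~ 0.1*(p/p_max)^((d+1)/2), roughly 2*d^2 physical qubits per logical qubit at code distance d, and a 1 microsecond error-correction cycle, so it only lands in the same ballpark as the figure above.

import math

p_over_pmax = 1e-1        # r_p: (error per physical gate)/(max allowed error)
t_cycle     = 1e-6        # assumed error-correction cycle time, in seconds
t_universe  = 4.3e17      # age of the universe, in seconds

# Want at most ~1 logical error per age of the universe (the r_e above):
p_logical_target = t_cycle / t_universe

# Assumed scaling p_logical ~ 0.1 * (p/p_max)^((d+1)/2); solve for distance d:
d = 2 * math.log(p_logical_target / 0.1) / math.log(p_over_pmax) - 1
d = math.ceil(d)
if d % 2 == 0:
    d += 1                # code distance is conventionally odd

# Roughly 2*d^2 physical qubits per logical qubit:
r_n = 2 * d * d
print(d, r_n)             # prints 45 4050, same order of magnitude as ~3000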

Type of technology?
standard integrated circuit technology,
Al/Al-O/Al Josephson junctions on sapphire substrate
frequency = 5 GHz (= 240 mK, 2E-5 eV, 6E-2 meters)
temperature = 20 mK
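
Just to sanity-check the conversions in that parenthesis, here is a tiny Python snippet using standard constants (nothing here is specific to Martinis’ setup):

h   = 6.626e-34           # Planck constant, J*s
k_B = 1.381e-23           # Boltzmann constant, J/K
eV  = 1.602e-19           # 1 eV in joules
c   = 3.0e8               # speed of light, m/s

f = 5e9                   # the 5 GHz qubit frequency
E = h * f                 # photon energy in joules
print(E / eV)             # ~2E-5 eV
print(E / k_B)            # ~0.24 K, i.e. ~240 mK
print(c / f)              # ~6E-2 meters (free-space wavelength)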

His CNot quality?
40 ns, 99.45% fidelity

Size of his current research group?

about 50 researchers, 1 Pomeranian dog

Why surface code?
“Surface code really looks quite ideal for building integrated circuits” because
(1) only 2D nearest neighbor interactions are required
(2) it has the highest p_max (i.e., it is the most forgiving) of all known error correction codes

Martinis’ 5 year plan?
Over the next 5 years, demonstrate a 1000-physical-qubit machine with the control electronics at room temperature, outside of the chip and cryostat.

Must bring control lines to qubits. Control lines separated by 100 microns. Have to bring 100-1000 control lines to edges of wafer.

Near the 56:10 mark, M says: "You have to think like a high energy physicist" (LHC detectors have a huge number of wires).

Martinis’ 10 year plan?
After achieving a 1000-physical-qubit machine, M plans to put the control electronics right on the chip. Superconducting IC technology for doing this already exists, and has existed for many decades. Recent advances in it were made by D-Wave.

March 15, 2014

Craig Venter Dreams About Quantum Computers

Filed under: Uncategorized — rrtucci @ 8:47 pm

Recently (Oct. 21, 2013), Craig Venter was interviewed by Charlie Rose on Rose’s TV show. They discussed the contents of Venter’s new book, “Life at the Speed of Light”. When asked by Rose which of his many scientific achievements he was proudest of, Venter said (I’m saying it in my own words, not his) that he was proudest of having pioneered a new way of doing scientific research, one faster than the conventional approach normally used by academia and publicly funded research.

Venter certainly has a very good track record of doing just that: setting up large, highly effective, quick paced, privately and publicly funded, mini-Manhattan projects that bring together a vast array of very talented scientists and engineers.

Let me review some history in case you have forgotten it or never learned it. The privately funded company Celera, founded by Craig Venter when he couldn’t get funding from the NIH, lit a fire under the feet of the publicly funded Human Genome Project (HGP), causing HGP to finish mapping the human genome years ahead of schedule. Furthermore, HGP did so using the shotgun approach to mapping genomes, an approach which is today the de facto standard, but which HGP had been deprecating before Venter’s competition caused it to adopt it. Celera and HGP published the first-ever human genome maps nearly simultaneously (one day apart) in February 2001.

To me, this is a great example of how private industry can accelerate scientific progress by providing competition and/or additional funding to publicly funded research.

Check out the following article in which Venter, now 68 years old, admits to having wet dreams about using quantum computers for genetic research.

Science: Can we extend healthy life?
(by Clive Cookson, FT magazine, March 14, 2014)

excerpts:

Craig Venter, who became a scientific celebrity by sequencing the human genome in the 1990s and then moved on to microbial synthesis, is returning to human genomics on a grand scale.

He has set up a company in San Diego called Human Longevity Inc or HLI, which “will build the largest human [DNA] sequencing operation in the world”. As the name suggests, HLI aims to discover how we age in order to improve health as the process takes hold.

The DNA reading technology comes from Illumina, the genetic instrumentation maker, whose latest HiSeq X Ten machines can sequence tens of thousands of human genomes a year, each containing three billion letters of genetic code. Illumina has joined a group of wealthy private investors in putting up $70m to fund HLI for its first 18 months.

To make sense of these various components – human genomes, microbiomes, metabolomes, stem cell science and, of course, participants’ health records – will require data analysis on an epic scale. Venter believes his computers will be up to the task but he is not overconfident. “That’s not clear yet,” he admits. “A quantum computer would be ideal for this but we can’t wait for quantum computing to solve these problems.”

Other organisations in the public and private sectors in the US, Europe and China are embarking on similar projects to sequence 100,000 or more genomes and relate these to participants’ health records. But none has the depth and breadth of HLI, Venter maintains.

The most mysterious venture is Calico, which Google launched last September with an apparently similar mission to HLI, to extend healthy lifespan. Google has released few details about Calico and Venter still knows little about its activities.

I think it’s a sure thing that QCs will eventually be indispensable to genomics. I believe this can occur in the next 10 years if Venter-style tactics are used to accelerate QC development. I advise Craig Venter, his coworkers, and Illumina workers to watch this YouTube video. It’s a QC talk given by John Martinis at Google LA in October 2013. I sure hope Martinis and Venter become friends if they aren’t already.

I’ve mentioned Martinis many times before in this blog. I will say more about this superb Martinis Video in a future post.

March 14, 2014

P, NP, Sad Tale

Filed under: Uncategorized — rrtucci @ 3:47 am

Recently, Scott Aaronson posted on his blog two nice teaser posts about Computational Complexity Theory.

In his first post, Scott discusses two papers. One paper, by Lenny Susskind, proposes that the number of elementary operations in a quantum circuit be used as a clock to measure time intervals. The second paper, by Terry Tao, uses complexity theory to study the solutions of the Navier-Stokes equations.

In his second post, Scott tries to explain to the non-specialist why he believes that P\neq NP.

I know next to nothing about complexity theory, so you better take anything I say next with a grain of salt.

NP= problems that can be verified in poly time.

P= problems that can be solved in poly time. P\subset NP. P problems are the easiest-to-solve problems in NP.

P and NPcomplete are both subsets of NP. They are believed to be disjoint.

NPcomplete=NPhard \cap NP. NPcomplete problems are the hardest-to-solve problems in NP, but their solutions can be verified in poly time. The 3-SAT problem is an element of NPcomplete.

If P and NPcomplete intersect, then they become equal to each other and to the whole NP set. This doomsday scenario is called P=NP.
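
To make these definitions concrete, here is a toy Python sketch (my own example, not taken from Scott’s posts). Verifying a proposed truth assignment for a 3-SAT instance takes polynomial time, whereas the naive solver below just tries all 2^n assignments.

from itertools import product

# A 3-SAT instance: each clause is a list of literals, +i means variable i,
# -i means its negation. This one is (x1 or x2 or not x3) and (not x1 or x3 or x4).
clauses = [[1, 2, -3], [-1, 3, 4]]
n_vars = 4

def verify(assignment, clauses):
    # Polynomial-time check: every clause contains at least one true literal.
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def brute_force_solve(clauses, n_vars):
    # Exponential-time search over all 2^n truth assignments.
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if verify(assignment, clauses):
            return assignment
    return None

print(brute_force_solve(clauses, n_vars))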

In one of his posts, Scott compares the boundary between sets P and NPcomplete to an “electrified fence” for electrocuting frogs. Raoul Ohio observes in the comments that the P, NPcomplete separation reminds him of the phenomenon of “eigenvalue avoidance”, wherein a random matrix rarely has two degenerate eigenvalues. Raoul’s comment moved me to tears and inspired me to write the following very sad tale.

The Sad Tale Of The Two Sets That Once Kissed and Then Parted

Matrix M(t) had two eigenvalues x_1(t) and x_2(t) where t\in[0,1]. Let

Set_1 = \{ (t, x_1(t)) : t\in[0,1]\}
Set_2 = \{ (t, x_2(t)) : t\in[0,1]\}

Originally, everyone thought that there was no symmetry in the system that M(t) described, which meant that Set_1 and Set_2 did not intersect.

Then someone discovered an “accidental” symmetry that led to an “accidental degeneracy”, which led the two sets to kiss at a single point.

Then someone realized that the original model was too naive and that in real life there is a perturbation that breaks the accidental symmetry, so an eigenvalue “gap” develops, which led the two sets to stop kissing each other and part forever.

THE END
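
For the numerically inclined, here is a tiny 2x2 version of the tale (my own illustration, with made-up numbers). With the symmetry intact (eps = 0), the two eigenvalue curves kiss at t = 0; a small symmetry-breaking perturbation eps opens a gap of 2*eps and the curves avoid each other.

import numpy as np

def eigenvalues(t, eps):
    # eps = 0 preserves the "accidental" symmetry; eps != 0 breaks it.
    M = np.array([[  t, eps],
                  [eps,  -t]])
    return np.linalg.eigvalsh(M)      # returns x_1(t) <= x_2(t)

for eps in (0.0, 0.1):
    gap = np.diff(eigenvalues(0.0, eps))[0]
    print("eps =", eps, " gap at t = 0:", gap)
# eps = 0.0 -> gap 0.0 (degenerate: the two sets kiss)
# eps = 0.1 -> gap 0.2 (avoided crossing: the two sets part forever)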

[image: princess-np]

March 2, 2014

US Patent Office and Rip Van Winkle

Filed under: Uncategorized — rrtucci @ 7:49 pm

[image: Rip Van Winkle]
For my non-American readers, Rip Van Winkle is an American short story by Washington Irving in which a man wakes up after sleeping for 20 years.

Last Thursday (Feb. 27, 2014), I submitted 4 patent applications to the US Patent and Trademark Office (USPTO). I will soon (in the next two weeks) post all 4 of them (plus supporting software and its documentation) at my website. They cover what I have referred to in previous blog posts as Operation Lisbeth, or the goldfish with the dragon tattoo. They deal with the use of quantum computers to do artificial intelligence and big data. I won’t say any more about them in this blog post. I’ll do that in future blog posts over the next few weeks. Instead, I’d like to use this blog post to praise effusively the Patent Office for the enormous strides it has made in modernizing its online submission systems.

According to this article, the USPTO first launched its EFS (electronic filing system) in March 2006. Last Thursday, I had the pleasure of using it for the first time, and it worked flawlessly and painlessly for me. I love it.

I was able to do everything I needed to do for my 4 patent applications, file all necessary documents and pay all fees, completely electronically and online, without ever using any paper copies or snail-mail.

Basically, as long as you can turn a document into pdf or txt format, you can submit it (with some minor exceptions). I typed all my documents in LaTeX and turned them into pdf using the Windows application WinEdt. I drew all my figures using the application Inkscape, which allows you to save your drawings as pdf.

In the past, to submit an appendix containing computer source code, you had to mail the Patent Office a CD (Compact Disc) with the stuff. Now you can create a single txt file containing all your source code and send them that electronically. Vast improvement. (I used a free application called TXTcollector to create the required single text file from all my separate .java files.)
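
If you would rather not install anything, a few lines of Python do the same job. This is just my own sketch, not how TXTcollector works; the src folder and source.txt file names are placeholders for the example.

from pathlib import Path

# Concatenate every .java file under src/ into a single source.txt,
# with a header line marking where each file begins.
with open("source.txt", "w", encoding="utf-8") as out:
    for java_file in sorted(Path("src").rglob("*.java")):
        out.write("\n===== " + str(java_file) + " =====\n")
        out.write(java_file.read_text(encoding="utf-8"))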

In the past, for what is called the Information Disclosure Statement, you had to mail to the Patent Office a paper copy of each of your references. Now you can just send them electronically a pdf copy of your references. Much, much easier.

It’s easy nowadays to convince oneself that the US government is declining dangerously. So I find some solace in the fact that the Patent Office appears to have bucked that trend and improved significantly in the past 7 years or so. It is an institution like Rip Van Winkle, waking up after being deeply asleep and behind the times for many years.

I have only one minor quibble. They still don’t accept LaTeX submissions and generate the pdf themselves from the LaTeX, the way arXiv does. This means that they still retype the patent from its pdf version. If they accepted LaTeX submissions, they could do what most physics and engineering journals have done for the last 15 years: add a few reformatting commands to the LaTeX and publish that, without any need to ask a human to retype things, which is boring for the re-typist and introduces a lot of typos. Of course, adding this LaTeX capability to their EFS is still possible, and would be a natural next step on their path towards improvement.
