A writer for Wired asked for my comments on a new Chinese paper on the EmDrive, which seems to refuse to die even after all these years. My response was too long for him to include, so here it is in full:

The most important thing that your readers need to understand is that the EmDrive is a perpetual motion machine: it violates the most fundamental law of physics, namely, the conservation of energy–momentum. If it were true, then all of physics—from Galileo and Newton through to Einstein, from nuclear and particle physics through to astrophysics and cosmology—would be overthrown. It would be far bigger news than the discovery of the Higgs boson—indeed, that discovery would be rendered invalid, because it relies crucially at each and every step on the conservation of energy–momentum.

Could such a world-changing claim be true? I’m tempted to invoke Monty Python; but for the benefit of readers on this side of the Atlantic, I think that it is arguably an understatement to say that it is “rather unlikely”.

The magnitude of the claim makes it at once easier and more difficult to refute, both theoretically and experimentally.

On the theoretical side, a simple observation is that all of physics is based on the conservation of energy–momentum. Any calculation using the current laws of physics that violates this law must contain an error. It’s like someone starting with two slices of bread and a piece of pork, and then ending up with a chicken sandwich. You don’t need to follow each step of the process to know that something is not kosher.

Unless they argued that they had invented a way to transmute pork into chicken. Similarly, EmDrive proponents must explain how they are replacing the laws of physics with new laws that violate the conservation of energy–momentum.

Unfortunately, both Shawyer and the new Yang Juan et al. paper claim to use nothing more than the standard laws of electrodynamics—so we already know that each of their theoretical results must contain an error.

Finding the exact error, however, can be as difficult as finding a needle in a haystack. In the case of Shawyer’s original theoretical calculation, his argument was relatively clear and clean, and his error was consequently relatively simple to find: he neglected some of the forces on the device. When they are included, the claimed thrust disappears.
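(A compact way to put the point, in my own notation rather than Shawyer’s: in steady state, the time-averaged electromagnetic force on the cavity is the integral of the Maxwell stress tensor over a closed surface surrounding the device, ∮ T · dA, and since no fields escape a closed cavity, that integral vanishes. The radiation pressure pushing on the small end plate is exactly balanced by the pressure on the large end plate and on the tapered side walls.)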

The Yang Juan et al. paper, however, is more akin to that haystack. They quote many valid equations of electrodynamics, but then stitch them together with numerous assumptions, and then use numerical simulation to compute a result. Without having a spare year to dig through their calculations and simulations, it’s impossible to know where they made their mistake. (Now you know why the Patent Office refuses to accept any more applications for perpetual motion machines.) I recommend that they submit their paper, and simulation code, to a reputable physics journal like the Physical Review, who might be able to find a graduate student with nothing better to do than debunk their submission.

A possible source of their error is their Fig. 1. In diagram (a) they show an open system, where microwaves are thrust into outer space. Such a system would indeed show a tiny amount of thrust: the microwave photons are the propellant. But they reject diagram (a) because the microwaves leak out (obviously), which prevents a standing wave (Shawyer’s claimed mechanism for getting amplification of the tiny thrust) from being maintained. They then replace this with diagram (b), which has placed on the exhaust a “matched load used to absorb the heat transferred from reflected microwaves”. This statement makes no sense at all: reflected microwaves would not transfer heat—only momentum, namely, the force that would prevent the system from getting any net thrust. If something more sophisticated is meant, then it is not explained, and certainly not modeled in their equations. It is possible that neglect of the momentum transfer to this “matched load” is the missing force in their calculations.
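(For a sense of the scale involved, a back-of-the-envelope number of my own rather than one from their paper: a pure photon exhaust produces a thrust F = P/c, so even a full kilowatt of microwaves beamed out the back yields only about 1000 W / (3 × 10⁸ m/s) ≈ 3 µN, roughly the weight of a few grains of salt. That is the tiny thrust that Shawyer’s standing-wave mechanism is claimed to amplify.)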

The experimental side of things is even more complicated. To provide a simple proof that a closed system violates the conservation of energy–momentum, one really needs to demonstrate the effect in space: if a completely closed EmDrive were to start accelerating in a particular direction, then that would be astonishing evidence that the laws of physics were done for.

It is almost impossible to measure and account for every force in an Earth-bound experimental arrangement: not only must the device be sitting on something (to stop it falling down to the center of the Earth), but it will (usually) be surrounded by air. The Chinese experimenters may claim to have measured every single force and torque on the system, but the claim would carry far more weight if someone like NASA or an aerospace company were to confirm experimentally that the laws of physics had really been overturned.

If the history to date of perpetual motion machines is any guide, one must assume that such an outcome would, again, be “rather unlikely”.

Was finding the Higgs worth the cost? That’s today’s question from my wife and (non-physicist) friends alike. Or: “What did we really achieve?” Or: “What of benefit to mankind came out of it?”

Of course, I’m biased.

My response: “Was going to the Moon worth the cost?” (And, OK, for those who don’t believe we sent men there, we definitely at the least sent craft there, which is a good enough common denominator for me.)

It’s a judgment call. There’s no correct answer.

Personally, I’m glad that as a toddler I got to sit in front of a TV and watch the Moon landing. Likewise, I’m glad that my two boys got to sit with me yesterday afternoon in front of the 2012 equivalent (a laptop) and watch the CERN announcements live.

For those uninterested in the bigger issue of the achievements of mankind, the “economic spin-off” argument is usually where this ends up. Moon landing: Integrated circuits? Sort of essential today. Velcro? My wife appreciated that one more than the integrated circuits (while, er, browsing her laptop).

Particle physics: The World Wide Web? Kind of a useful invention from CERN, the same geekopolis that last night brought you the Higgs. Big data? Gigabytes, terabytes, petabytes … these have been the bread and butter of particle physicists for decades.

What about sit-coms? When I was a kid, Larry Hagman was the astronaut in I Dream of Jeannie. Today, I’m still stupefied that the prime-time sit-com of our day is about particle physicists. (And still get that “what-ya-talking-about-Willis” look from my wife when I laugh at the physics jokes.)

I guess that’s the cultural influence you get when a formerly purely technical field approaches its zenith of worldwide funding and attention.

Will the golden days of particle physics go down in history and folklore like the golden days of the space race?

I think they will look pretty similar.

As I sit here still listening to the second presentation at CERN announcing the undeniable discovery of the Higgs boson, I have an incredible sense of sadness.

Not only has the Higgs boson been found — the missing piece of the Standard Model of particle physics that was around long before I even had an inkling of being a particle physicist — but it has been found to be the plain, vanilla Higgs. No need for exotic explanations or “new physics”. It’s just the Higgs that was always suspected, hiding in a mass range that could never quite be ruled out.

And that means that it’s essentially “game over” for particle physics — both theoretical and experimental — for the rest of our lifetimes.

Twenty years ago, driving around the Olympic Peninsula near Seattle, I bet my then-Ph.D. supervisor that the Higgs didn’t even exist. I’ve now lost that bet. (I think it was for a tub of clam chowder.) But I don’t think either of us expected, back then, that the end would be so clean, so harsh. I think most physicists thought that either the Higgs would be ruled out — meaning new physics was required to explain the Universe — or that it would prove to be somewhat exotic.

But the reports from CERN this evening shut that gate. Good news for Peter Higgs: he finally gets that well-deserved trip to Sweden.

But it leaves a somewhat eerie hole in what was, until this evening, the most interesting search on the planet.

I am already sad for the thousands of physicists who will shortly find the funding taps turned off — some forever. I have no idea where they will go, or what they will do.

Good luck. 😦

I recently thanked Steve Toub again for the clear explanation he gave me, some time back, of what I call the “parallel cache pattern”. He says that something similar is in his whitepaper, but I wanted to post a brief description here, if only so that I can return to it easily whenever I want to remind myself of it!

The basic need: create a dictionary cache of objects, each of which is expensive to create, in a parallel environment.

You might think that this is built into the Task Parallel Library of .NET 4+. For example:


ConcurrentDictionary<Foo, Bar> cache = new ConcurrentDictionary<Foo, Bar>();
Bar GetBar(Foo foo) {return cache.GetOrAdd(foo, f => CreateBar(f));}
Bar CreateBar(Foo foo) {return [expensive creation here];}

This is fine if multiple threads tend to initially hit the cache with different keys, because each thread will go off and create a Bar for a different key Foo, and eventually they’ll all be added to the cache (the blocking only occurs for these additions). In cases where two or more threads happen to have created the Bar for the same Foo key foo, all but the first will effectively be discarded (the ConcurrentDictionary doesn’t need a second opinion!).

There is a problem, however, if multiple threads all tend to want the Bar corresponding to the same Foo key foo at the same time. (I frequently find this to be the case, but maybe my use cases are abnormal.) Then all of the threads are told by the ConcurrentDictionary that it doesn’t yet exist, and they all go off and create it. Since we’re assuming that these Bar objects are expensive to create, this takes a long time (in machine cycles) to occur. What effectively happens is that every core gets tied up with a thread that is creating the same Bar object. When they all finish, all but one of them finds, to their disappointment (if threads can be disappointed), that their effort is all for naught: someone else has already given the ConcurrentDictionary the Bar object for that Foo key foo.

The net result is that the code appears to be running nicely in parallel (all the CPUs are lit), but it’s no faster than the non-parallel code. All that you’ve achieved is to burn your other CPUs for no net effect.
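To make the failure mode concrete, here is a minimal sketch of that stampede (the toy int/string types, timings and counts are my own, not anything from Toub’s whitepaper): sixteen parallel iterations all ask the naive cache for the same key, and the expensive factory typically runs once per core rather than once.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class StampedeDemo
{
    // Toy stand-ins for Foo and Bar: an int key and a string value.
    static readonly ConcurrentDictionary<int, string> cache = new ConcurrentDictionary<int, string>();
    static int creations = 0;

    static string CreateBar(int key)
    {
        Interlocked.Increment(ref creations);
        Thread.Sleep(200);                      // stand-in for the expensive creation
        return "bar-" + key;
    }

    static string GetBar(int key) {return cache.GetOrAdd(key, k => CreateBar(k));}

    static void Main()
    {
        Parallel.For(0, 16, i => GetBar(42));   // every iteration wants the same key
        Console.WriteLine("CreateBar ran {0} time(s)", creations);   // typically once per core, not once
    }
}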

What you really want to occur is for only one thread to go off and create that Bar, with all the other threads blocking and yielding to other iterations in your Parallel.For or Parallel.ForEach statement.

Now, if your use case is such that each and every such iteration requires that one Bar object, then they will all eventually block, until that first thread is finished creating it. You haven’t achieved anything over a serial implementation, but at least your CPUs are free to do other things (in another parallel branch of your program, or for something else on your system), and you haven’t been fooled by lit CPUs into thinking that you’ve gained anything from the parallelism. And once that Bar has been created, all those blocked iterations will be resurrected, and can run off in parallel doing whatever they need to do with it. (Unless they all need a second one, in which case the process repeats.)

In my typical use cases, however, eventually some iterations are activated that require different Bar objects. As long as the list contains at least as many iterations that aren’t blocked waiting for that first Bar object as you have CPUs, they will all eventually get to work creating different Bar objects. Your CPUs will be lit up, productively.

So what is Toub’s trick? Simply this observation: if the ConcurrentDictionary stores not Bar objects, but rather Lazy<Bar> objects, then the initializer of the Lazy<Bar> will automatically achieve what we want.

Basically, it works because the Lazy<Bar> constructor of the first thread returns very quickly: the Lazy<Bar> object is added to the collection almost instantaneously. So rather than all the other threads finding that the cupboard is bare, their wishes are instead fulfilled. They are all given the Lazy<Bar> object. (There is a vanishingly small probability that two or more will ask for the object in the short time it takes for the Lazy<Bar> constructor to complete.)

So how have we cheated the expensive creation process? Basically, when all of those other threads “open their presents” (the Lazy<Bar> object), they effectively find a note telling them to wait until it’s ready, because (by default) Lazy<T> is thread-safe. So they all block, yielding to other parallel iterations, until the first thread has created the Bar object.

Magic! And it works really well.

So here’s the Toub generalization of the above example:

ConcurrentDictionary<Foo, Lazy<Bar>> cache = new ConcurrentDictionary<Foo, Lazy<Bar>>();
Bar GetBar(Foo foo) {return cache.GetOrAdd(foo, f => new Lazy<Bar>(() => CreateBar(f))).Value;}
Bar CreateBar(Foo foo) {return [expensive creation here];}
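And a quick way to see the pattern doing its job, again with my own toy types and timings rather than anything from the whitepaper, is to run the same single-key stampede against the Lazy-wrapped cache and count the creations:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class LazyCacheDemo
{
    static readonly ConcurrentDictionary<int, Lazy<string>> cache = new ConcurrentDictionary<int, Lazy<string>>();
    static int creations = 0;

    static string CreateBar(int key)
    {
        Interlocked.Increment(ref creations);
        Thread.Sleep(200);                      // stand-in for the expensive creation
        return "bar-" + key;
    }

    static string GetBar(int key)
    {
        // The Lazy wrapper is cheap to construct; the expensive work happens on the first .Value,
        // and Lazy<T> is thread-safe by default, so the losing threads block instead of duplicating it.
        return cache.GetOrAdd(key, k => new Lazy<string>(() => CreateBar(k))).Value;
    }

    static void Main()
    {
        Parallel.For(0, 16, i => GetBar(42));   // the same single-key stampede as before
        Console.WriteLine("CreateBar ran {0} time(s)", creations);   // should print 1
    }
}

(The parameterless thread-safety default here is LazyThreadSafetyMode.ExecutionAndPublication, which is exactly the “one thread runs the factory, everyone else waits” behaviour described above.)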

Thanks Steve!

Prime Minister Gillard?

February 20, 2010

It may seem fanciful to believe that the Labor Caucus could “do a Hawkie” on Kevin Rudd during his first term in office, but it is less difficult to believe that Julia Gillard’s low public profile in recent weeks may be a brilliant Labor tactic to insulate her (no pun intended) from the messes that the Rudd Government has found itself in. That Labor may be keeping her in reserve as “Plan B” should not be such a stretch of the imagination.

Naïve extrapolation of current polling trends would suggest that Rudd is still a shoo-in to win this year’s election; but no one (other than a hard-line climate alarmist, perhaps) believes that naïve extrapolations are useful for anything significant in the real world. Labor optimists point out that the last one-term federal government was nearly eighty years ago; but the parallels are chilling: Scullin and Rudd won power from the only two Australian Prime Ministers to ever lose their seats. It has been reasonably argued that the surreal “out of sight, out of mind” conditions of the ensuing Parliaments created environments much more conducive to Australians rapidly forgetting the Prime Minister they voted out.

Labor desperately needs to rid itself of its ministerial dead wood — Garrett, Wong and Conroy — before the calling of the federal election; but for Kevin Rudd to do so this late in the game would destroy his credibility. Dumping Rudd in favour of Gillard, on the other hand, not only switches out a leader that Australians are increasingly unimpressed with, but moreover carries with it the implied prerogative of the new Prime Minister to determine her own Cabinet.

Whether such benefits would outweigh the embarrassment of dumping their own Prime Minister is a question that Labor’s power-brokers are undoubtedly assessing on a daily basis. But Julia Gillard has performed remarkably well while those around her have stumbled. Going slightly too far in industrial relations will not harm her; all she needs to do is fine-tune the system at the edges to restore clarity, certainty and fairness, and Australian employers and employees will quickly adapt. Tony Abbott, on the other hand, has surprised many by his treading into the dangerous territory of the ghost of John Howard — but at least he has shown that he has the guts that Australians expect of their leaders.

A Gillard versus Abbott election — with fresh faces on both sides — would be closer to the political battle that Australians have come to expect.