Time to overcome Java misconceptions

Program, profile, repeat

Myths and legends It doesn't really matter which version of the Java platform you use, does it? Well, I know for sure that thread context switching is expensive. Isn't it? OK, but there's no doubt that 32-bit Java Virtual Machines (JVMs) are faster than 64-bit JVMs. Right?

The truth is, most of the things we think we know about Java performance are actually pretty hard to quantify. That's the conclusion Google engineers Jeremy Manson and Paul Tyma have come to through what they humbly describe as "an unending search for the truth".

"We had been writing a lot of code - server code, fast code, Java code, C++ code - and we realized one day that there are a lot of myths out there about Java performance, beliefs that we just knew weren't true, but had never been tested," Manson said at this Spring's Software Development West.

"We came to see that these unexamined beliefs actually make everybody's life more difficult," Tyma added. "They make you write lousy, broken code, because you're working around something that might not even be true. And then your code becomes unreadable."

Not precisely myth-busters, Manson and Tyma took on the roles of assumption challengers and shared their insights with SD West attendees during a joint presentation that touched on some timeless themes.

"This isn't about optimizing your code," Manson said. "It's about challenging those assumptions you make while you're writing it."

And what are those assumptions?

Number one: versions don't matter.

"A lot of people think that they can keep on using 1.4 forever, and it won't affect performance," Tyma said. "Well, it turns out they've been improving Java steadily over the past 13 years, and 1.6 is a lot faster than 1.5, and 1.5 is a lot faster than 1.4. And remember, you can run 1.4 byte code on 1.6 VMs."

Number two: 64-bit code is slower than 32-bit code.

"I've heard this from a lot of people," Manson said. "And the reason those people seem absolutely convinced of this is that the pointers are twice as large, and all of a sudden you're taking up between one and two megs of memory... and now that you're using more memory, your code isn't going to fit into the L1 and L2 cache, which will slow down your code."

But this scenario doesn't always play out that way on x86, he said, because 64-bit x86 code has twice as many general-purpose registers to work with, so there's less register spilling, and you can have longer pipelines in your processor. So code that leans heavily on registers and integer arithmetic can be a lot faster in 64-bit.
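
Here's a rough sketch of the kind of code where that can show up (again, our illustration rather than anything Manson and Tyma ran): an inner loop juggling several live long values. Run the same class under a 32-bit and a 64-bit JVM on the same machine and compare the timings; the class name and loop counts are arbitrary.

    // Hypothetical sketch: an integer/long-heavy loop with several live values,
    // the kind of code where x86-64's extra general-purpose registers can help.
    public class RegisterPressure {
        public static void main(String[] args) {
            long a = 1, b = 2, c = 3, d = 4, e = 5, f = 6; // many live values
            long start = System.nanoTime();
            for (int i = 0; i < 200000000; i++) {
                a = a * 31 + i;
                b ^= a;
                c += b >>> 3;
                d = d * 7 + c;
                e ^= d << 1;
                f += e;
            }
            long elapsed = System.nanoTime() - start;
            // Print the results so the JIT can't dead-code-eliminate the loop.
            System.out.println(a + b + c + d + e + f);
            System.out.println("elapsed: " + (elapsed / 1000000) + " ms");
        }
    }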

Number three: thread context switching is expensive.

With the NPTL threading library on Linux 2.6, which is where Tyma and Manson ran their tests, context switching was not expensive at all. Even with a thousand threads competing for a Core Duo's CPU cycles, the context-switching overhead wasn't noticeable.
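
Their exact test harness isn't published here, but the shape of such an experiment is easy to sketch (hypothetical code, arbitrary names and numbers): split a fixed amount of trivial CPU-bound work across 2, 100 or 1,000 threads and compare the total wall-clock time. If context switching were genuinely expensive, the 1,000-thread run would be dramatically slower.

    // Hypothetical sketch: same total work, variable thread count.
    public class SwitchCost {
        static final long TOTAL_ITERATIONS = 1000000000L;

        public static void main(String[] args) throws InterruptedException {
            int threads = Integer.parseInt(args[0]);       // e.g. 2, 100, 1000
            final long perThread = TOTAL_ITERATIONS / threads;
            Thread[] workers = new Thread[threads];
            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                workers[t] = new Thread(new Runnable() {
                    public void run() {
                        long x = 0;
                        for (long i = 0; i < perThread; i++) {
                            x += i;                         // trivial CPU-bound work
                        }
                        if (x == 42) System.out.println(x); // defeat dead-code elimination
                    }
                });
                workers[t].start();
            }
            for (int t = 0; t < threads; t++) {
                workers[t].join();
            }
            System.out.println(threads + " threads: "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }
    }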

Number four: locking is expensive.

If the locks are uncontended, no. Uncontended synchronization is cheap, and can even be free. Add a second thread fighting over the same lock, though, and the picture changes: contended synchronization costs significantly more. But keep increasing the number of threads and the cost doesn't change much after that, Tyma noted. "So yes, contended synchronization does cost more, but [that cost] doesn't tend to scale," he said.
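
A minimal way to see both effects (again a hypothetical sketch, not the speakers' code): time the same total number of synchronized increments done first by a single thread, then split across several threads all hammering one lock.

    // Hypothetical sketch: uncontended vs contended synchronized increments.
    public class LockCost {
        private long count = 0;

        synchronized void increment() {
            count++;
        }

        static long time(final LockCost counter, int threads, final long perThread)
                throws InterruptedException {
            Thread[] workers = new Thread[threads];
            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                workers[t] = new Thread(new Runnable() {
                    public void run() {
                        for (long i = 0; i < perThread; i++) {
                            counter.increment();            // all threads share one lock
                        }
                    }
                });
                workers[t].start();
            }
            for (int t = 0; t < threads; t++) {
                workers[t].join();
            }
            return (System.nanoTime() - start) / 1000000;   // elapsed ms
        }

        public static void main(String[] args) throws InterruptedException {
            long total = 40000000L;
            System.out.println("1 thread (uncontended): "
                    + time(new LockCost(), 1, total) + " ms");
            System.out.println("4 threads (contended):  "
                    + time(new LockCost(), 4, total / 4) + " ms");
        }
    }

Expect the contended run to cost noticeably more per increment - and, per Tyma's point, not to get dramatically worse as you raise the thread count further.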

"The moral here," said Manson, "is don't avoid synchronization. When doing tricky things to avoid synchronization, you end up writing really bad code."

So what's Tyma and Manson's bottom-line advice to developers seeking to write faster Java code? "If you want your code to be faster, don't waste your time trying to take advantage of any of these kinds of performance myths," Manson said. "Just write the best code you can. Profile it to find the bottlenecks. Remove the bottlenecks. Rinse and repeat."
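
That advice lends itself to a small sketch of its own (hypothetical names throughout): wrap the suspect code path in a timer, change one thing, and measure again - or, better, let a real profiler find the hot spots for you, for example the hprof agent bundled with JDK 5 and 6 (java -agentlib:hprof=cpu=samples YourApp).

    // Hypothetical sketch of the "measure first" loop.
    public class MeasureFirst {
        public static void main(String[] args) {
            // Warm up so the JIT has compiled the code being measured.
            for (int i = 0; i < 5; i++) {
                doWork();
            }
            long start = System.nanoTime();
            doWork();
            System.out.println("doWork took "
                    + (System.nanoTime() - start) / 1000000 + " ms");
        }

        static void doWork() {
            // Stand-in for the code path you suspect is slow.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 200000; i++) {
                sb.append(i);
            }
            if (sb.length() == 0) System.out.println("unreachable");
        }
    }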

You can read Manson's summarized takeaways here. ®
