The sun has been cranky lately

Mar 08, 2011
 

UPDATE: Below is a video of the recent and impressive solar activity. It’s worth watching all the way to the end.

A very large coronal mass ejection (CME) was caught today on cameras from the Solar and Heliospheric Observatory (SOHO). The sun’s relative size can be seen by the white outline on the occulting disk in the center of the picture.

March 8, 2011, CME

This eruption follows a warning issued yesterday by South Africa that a large solar flare had triggered elevated radiation levels and could impact the power grid.

Army officer denied evidence of Obama’s eligibility

Sep 02, 2010
 

WorldNetDaily follows the court martial case of Lt. Col. Terrence Lakin. Lakin was denied “disclosure of Barack Obama’s documentation proving his eligibility to be commander in chief.”

WorldNetDaily reports on the decision by acting judge, Army Col. Denise R. Lind, here. “Lind ruled that it was ‘not relevant’ for the military to be considering such claims, that the laws allegedly violated by Lakin were legitimate on their face and that the chain of command led up to the Pentagon, and that should have been sufficient for Lakin,” reports WorldNetDaily. Most disconcerting is that Lind ruled much as a number of federal judges have ruled in civil lawsuits seeking documentation of Obama’s eligibility.

Aug 20, 2010
 

Note: This report was originally published at Bright Side of News* on April 8, 2010. After their server crashed, BSN* was unable to recover the article for several weeks. We are reposting the report here to serve as a mirror of the original article. There are likely to be minor editing differences from the BSN* article.

Note 2: Only a month or two after it was published, this detailed report was wiped out by a BrightSideOfNews* hard drive crash. That exhaustive report, praised by many throughout the industry as the finest of its kind yet produced, examined the emerging and inevitable ARM versus x86 clash.

It took a little while and cost BSN* a lot of money to recover the data on the hard drive, but that report is now back up and can be read here.

I’m currently working on a followup to that bit of analysis that will include even more hardware than the initial report. I’m still waiting on a vendor or two, so I can’t promise an ETA yet, but one thing I can state is that the new report will be very interesting.

The computing landscape is changing rapidly and the war between x86 and ARM microprocessors is now underway. The competitors have dramatically different strengths and weaknesses, making for a particularly exciting confrontation.

Most importantly, the results of this war will have profound effects well beyond the CPU market, where several companies will possibly see their fortunes upended. One thing is absolutely certain: computing will never be the same again.

Introduction

In this report we will discuss the emerging competition between ARM and x86 microprocessors. Led by the Intel Atom, x86 chips are quickly migrating downwards into embedded, low-power environments, while ARM CPUs are beginning to flood upwards into the more sophisticated and demanding market spaces currently owned by x86 processors. The central focus of this report is an extensive compute performance comparison of the ARM Cortex-A8 against the new Intel Atom N450, the new VIA Nano L3050 and, for historical perspective, an older AMD Mobile Athlon based upon the Barton core. The Apple iPad A4 system-on-chip (SoC) is reportedly equipped with a 1GHz ARM Cortex-A8.

The Coming War: ARM versus x86

Over the last few years a war has been brewing. Two armies have been massing troops in their respective strongholds. Inside desktops, notebooks, servers and now even reaching into mainframes and supercomputers, the x86 family of microprocessors has mercilessly driven all competitors to extinction.

The “x86” moniker refers to the descendants of the 16-bit Intel 8086. Its 8-bit little brother, the Intel 8088, was the chip that powered the first IBM PC back in August 1981. Shockingly primitive by today’s standards, the 8088 spoke a computer dialect that is still understood by the most modern, powerful and successful CPUs from Intel, AMD and VIA.

The roll call of those vanquished by the x86 family includes microprocessors from IBM, DEC, Motorola, HP, Sun, Silicon Graphics, Commodore and even rivals from within Intel itself. Resistance has been futile. Eventually, even persistent holdout Apple succumbed to the relentless performance advances of the x86 juggernaut, dumping IBM’s Power architecture for the safety and reliability of the x86 roadmap. Almost like clockwork, x86 CPUs double in capability every 18 months while prices continue to slowly decline.


x86 microprocessors, including AMD x86-64 and Intel EM64T, have taken over supercomputing. [Image taken from: http://en.wikipedia.org/wiki/File:Processor_families_in_TOP500_supercomputers.svg]

Yet, almost silently, a stealthy opponent has built up forces within the modest confines of PDAs, calculators, routers, media players, printers, GPS units and a plethora of other embedded devices, but most notably mobile phones. Based in Cambridge, England, ARM Holdings dominates 32-bit microprocessor sales despite its very low profile. While AMD, a microprocessor vendor that commands about one-fifth of the x86 market, celebrated the sale of its 500-millionth CPU last July in its 40th year of operation, nearly 3 billion ARM chips shipped in 2009 alone.

The history of ARM microprocessors is almost as long as that for x86 CPUs. Sometimes called the “British Apple,” Acorn Computers began in 1978 and created a number of PCs that were very successful in the United Kingdom including the Acorn Electron, the Acorn Archimedes and the computer that dominated the British educational market for many years, the BBC Micro.

As the Commodore-produced, 2MHz MOS Technology 6502 microprocessor that powered the BBC Micro grew long in the tooth, Acorn realized it needed a new chip architecture to compete in business markets against the IBM PC. Inspired by the Berkeley RISC project, which demonstrated that a lean, competitive, 32-bit processor design could be produced by a handful of engineers, Acorn decided to design its own RISC CPU, sharing some of the most desirable attributes of the simple MOS Technology 6502.

Officially begun in October, 1983, the Acorn RISC Machine project resulted in first silicon on April 26, 1985. Known as the ARM1, the chip worked on this first attempt. The first production product, the ARM2, shipped only a year later.

In 1990, Acorn spun off its CPU design team into a joint venture with Apple and VLSI under a new company named Advanced RISC Machines Ltd, which is now an alternative expansion of the original “ARM” acronym. While Acorn Computers effectively folded over ten years ago, its progeny, ARM Holdings, is stronger than ever and dominates the market for mobile phone microprocessors.

Unlike x86 chipmakers Intel, AMD and VIA, ARM Ltd does not sell CPUs, but rather licenses its processor designs to other companies. These companies include NVIDIA, IBM, Texas Instruments, Intel, Nintendo, Samsung, Freescale, Qualcomm and VIA Technologies. Late last year, AMD spin-off GlobalFoundries announced a partnership with ARM to produce 28-nanometer versions of ARM-based system-on-chip designs.

The ARM Cortex-A8 versus x86

Like the Intel Atom, the ARM Cortex-A8 is a superscalar, in-order design. In other words, the Cortex-A8 is able to execute multiple instructions – up to two per clock tick, as with the Atom – but it can only execute instructions in the order they arrive, unlike the VIA Nano and all current AMD and Intel chips besides the Atom. The Nano, for instance, can shuffle instructions around and execute them out-of-order to improve processing efficiency by about 20-30 percent beyond superscalar, in-order chips.

The immediate predecessor of the Cortex-A8 is the ARM11, which found a home in the original Apple iPhone and countless other smartphones. The ARM11 is a simple, scalar, in-order microprocessor, so the best it can ever do is execute one instruction per clock cycle. Just as the Cortex-A8 is roughly equivalent to the Intel Atom, the ARM11 is somewhat similar to the VIA C7.

In-order chips suffer a performance hit because processing can come to a screeching halt when an instruction is encountered that takes a long time to complete. On the other hand, out-of-order chips can shuffle instructions around so that forward progress can usually be made while a lengthy instruction is simultaneously processed.

The Intel Atom manages to partially overcome this problem by implementing HyperThreading, Intel’s brand name for its version of simultaneous multithreading (SMT). As with a few other Intel CPUs (and the three IBM PowerPC-based cores in the Xbox 360’s Xenon), the operating system (OS) views the Atom as if it has more processing cores than it actually does. In the case of the single-core Atom N450, the OS sees two “virtual” cores. The operating system accordingly distributes a thread (an independently running task or program) to each core at once. Consequently, the Atom often churns through two unrelated instruction streams simultaneously, so even if one gets blocked by a slow, “high latency” instruction, the other thread can usually still be processed.
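
For readers who want to see this from the software side, here is a minimal C sketch (not part of miniBench or any other test in this report) that asks Linux how many logical cores it sees and runs one busy thread per logical core; on a HyperThreaded, single-core Atom N450 the sysconf call reports two:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Each thread runs an independent instruction stream, mimicking how the OS
       hands one runnable task to each logical ("virtual") core. */
    static void *worker(void *arg) {
        (void)arg;
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            x += i;
        return NULL;
    }

    int main(void) {
        long n = sysconf(_SC_NPROCESSORS_ONLN);  /* logical cores visible to the OS */
        if (n < 1) n = 1;
        if (n > 64) n = 64;
        printf("OS reports %ld logical core(s)\n", n);

        pthread_t tid[64];
        for (long i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (long i = 0; i < n; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

It compiles with gcc -O2 -pthread.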

While HyperThreading doesn’t help much on single threaded tasks – and a vast amount of modern computing remains single-threaded – HyperThreading helps a great deal with slow input/output (I/O) intensive instruction streams since I/O operations can take an eternity from the CPU’s vantage point and can block even an out-of-order core. For instance, the Atom boots Windows 7 relatively quickly compared with even superscalar, out-of-order, single-core chips like the VIA Nano because the Atom can continue processing a second thread and does not have to frequently stop and wait on the vast number of I/O operations encountered during boot-up.

Intel chose to equip the Atom with HyperThreading instead of making the chip out-of-order because HyperThreading is simpler and consumes less power. Intel’s Austin design team created the Atom especially for low-power environments.

However, the benefits of HyperThreading diminish when multiple cores are available. The newer ARM Cortex-A9 MPCore is designed to be deployed in configurations of two or more cores, so SMT is not as important under multi-core conditions. For instance, the new NVIDIA Tegra 2 boasts two ARM Cortex-A9 MPCore processors. Moreover, the A9 is superscalar and out-of-order with speculative execution, putting it on equal footing with newer x86 chips, at least superficially.

Keep in mind that modern x86 microprocessors tend to be very rich in execution units and, after decades of development, are extremely refined in terms of low instruction latencies and feature sets. Perhaps most importantly, the supporting x86 “ecosystems” are unmatched. “Ecosystem” is the current buzzword that refers to the surrounding chipset, memory, I/O, interconnect and peripheral infrastructure.

Moreover, ARM chips are RISC cores which have reduced instruction sets. In fact, RISC is an acronym for “Reduced Instruction Set Computer” and ARM CPUs typify this genre in many ways.

In general, RISC chips are leaner and usually support fewer instructions than CISC or “Complex Instruction Set Computer” microprocessors. While today’s x86 CPUs wield a decidedly CISC-style instruction set, the underlying hardware has absorbed most of the advantages of RISC while implementing many complex instructions in microcode. For instance, the VIA C3 bolted a CISC x86 frontend over a very MIPS-like RISC core.

An issue to watch out for when comparing ARM CPUs against x86 microprocessors is the size of binary files. In the past, RISC machines have produced larger executables because more instructions are often necessary than with CISC-derived systems. If binary sizes differ significantly, this places greater pressure on cache sizes, RAM size and memory bandwidth. With today’s terabyte-scale mass storage devices, increased binary bloat is not significant since the vast majority of drive space is consumed by video and other multimedia data.

Binary size comparison   ARM       x86
STREAM                   112.3%    100.0%
miniBench                115.1%    100.0%
CoreMark                 107.3%    100.0%

The table above shows that ARM Cortex binaries are indeed larger than x86 binaries, but the difference is only about 10-15 percent. If this sampling is representative for both platforms, binary size differences will rarely matter. ARM L1i and L2 caches should be at least as large as those found on x86 microprocessors, but that is not currently the case, as will be discussed shortly.

ARM representatives responded with the following:

The binary size of the ARM benchmarks is significantly lowered with the Thumb-2 hybrid instruction set. Expected results are 20-30% lower code size at equivalent or better performance. The 10.0x version of Ubuntu Linux has been optimized for Thumb-2. (The version as tested was Ubuntu 9.04.)

Of course, the real story in the battle between ARM and x86 is how they measure up against each other in the performance arena. In this report, we’ll take a close look at competitive performance across a broad range of tests and also take a peek at power usage.

Benchmarking considerations

Normally it is a primary prerequisite to ensure that all systems under test have been configured identically prior to benchmarking. Unfortunately, this is impossible to achieve in this report given the highly integrated nature and grossly dissimilar “ecosystems” of ARM versus x86 microprocessors. For instance, the Freescale i.MX515 system that we used in our tests only supports DDR2-200 32-bit memory, much slower than the VIA Nano L3050 system’s DDR2-800 64-bit memory. Worse, the i.MX515’s integrated video solution, maxing out at 1024×768 at 16-bit color depth, is far more limited than the graphics solutions on any of the x86 systems.

Given this rigidly set, unlevel playing field, we deployed a battery of benchmarks that run primarily within the CPU’s caches. In other words, we made an attempt to only measure CPU-bound performance.

We verified the CPU sensitivity of each test by increasing the clock speed of the VIA Nano from 800MHz to 1800MHz. Tests should scale closely to the clock speed ratio of 225 percent.

Benchmark scaling
Hardinfo 226%
Peacekeeper 209%
Google V8 272%
SunSpider 228%
miniBench 220%
CoreMark 225%
stream add 108%

As shown in the table above, all of the benchmarks scaled appropriately with the exception of Google V8 and Stream Add, a memory bandwidth test that is constrained by memory performance and is included here as a counterexample. Benchmarks that scale superlinearly (that is, faster than the underlying clock speed increase), like Google V8, usually are not good benchmarks. Indeed, Google V8 also demonstrated very large run-to-run variations on several tests like EarleyBoyer and RegExp. Nevertheless, we have included full Google V8 results since it remains a popular JavaScript benchmark.
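
To make the sanity check concrete, the arithmetic amounts to nothing more than the following sketch, shown here with made-up scores:

    #include <stdio.h>

    /* A CPU-bound test taken from 800MHz to 1800MHz should speed up by
       roughly the clock ratio (1800/800 = 2.25, i.e. 225 percent). */
    int main(void) {
        double score_800  = 100.0;   /* hypothetical score at 800 MHz  */
        double score_1800 = 272.0;   /* hypothetical score at 1800 MHz */

        double clock_ratio = 1800.0 / 800.0;           /* 2.25 */
        double scaling     = score_1800 / score_800;   /* observed speedup */

        printf("expected %.0f%%, observed %.0f%%\n",
               clock_ratio * 100.0, scaling * 100.0);
        if (scaling > clock_ratio * 1.05)
            printf("superlinear scaling: treat this benchmark with suspicion\n");
        return 0;
    }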

Speaking of run-to-run variation, we ran each test at least three times and calculated the coefficient of variation (CV) to ensure result validity.
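
For reference, the coefficient of variation is simply the standard deviation divided by the mean; a minimal sketch with hypothetical scores looks like this (link with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Coefficient of variation: standard deviation divided by the mean.
       A small CV across repeated runs indicates a stable, repeatable score. */
    static double coeff_of_variation(const double *x, int n) {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++) mean += x[i];
        mean /= n;
        for (int i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
        var /= n;                        /* population variance */
        return sqrt(var) / mean;
    }

    int main(void) {
        double runs[3] = {101.2, 99.8, 100.5};   /* hypothetical scores from three runs */
        printf("CV = %.2f%%\n", coeff_of_variation(runs, 3) * 100.0);
        return 0;
    }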

For this report, we placed four CPUs under test: the 800MHz Freescale i.MX515 which is based upon the ARM Cortex-A8, the new VIA Nano L3050 downclocked to 800MHz, the new Intel Pineview-based Atom N450 downclocked to 1GHz and, for historical perspective, an 800MHz Mobile Athlon (Barton core).

Unfortunately, it was impossible to downclock the 1.67GHz Atom N450 below 1GHz, but, as you will see, the results we obtained are still very interesting. The Atom N450 introduces an on-die GPU which significantly reduces overall platform power consumption compared with the older Silverthorne-based Atom platforms.

I purchased a Gateway LT2104u netbook from Best Buy for this report in order to test the Intel Atom N450. The Gateway is a very well executed netbook design with a solid feel, attractive appearance, excellent battery life and good feature set.

The VIA Nano L3050 is the new, second generation, “CNB” Nano that boosts performance by 20-30 percent beyond the original “CNA” Nano, while also reducing power demands by similar amounts. The CNB-based Nano is still built on the same 65nm Fujitsu process leveraged by the original CNA-based VIA Nano. Despite these improvements, the CNB Nano die size is almost identical to its predecessor’s at around 62-64 square millimeters.

The table below summarizes relevant system details.

                   Freescale i.MX515       Mobile Athlon   VIA Nano L3050   Intel Atom N450
                   (ARM Cortex-A8)         (Barton)
L1i                32 kB                   64 kB           64 kB            32 kB
L1d                32 kB                   64 kB           64 kB            24 kB
L2                 256 kB                  512 kB          1,024 kB         512 kB
frequency          800 MHz                 800 MHz         800 MHz          1,000 MHz
memory speed       DDR2-200 MHz (32-bit)   DDR-800 MHz     DDR2-800 MHz     DDR2-667 MHz
operating system   Ubuntu 9.04             Ubuntu 9.04     Ubuntu 9.04      Jolicloud (Ubuntu 9.04)
gcc                4.3.3                   4.3.3           4.3.3            4.3.3
Firefox            3.5.7                   3.5.7           3.5.7            3.5.7

All systems ran Ubuntu Linux Version 9.04 with the exception of the Atom netbook where we had to install Jolicloud Linux because of video driver issues. However, Jolicloud is based upon Ubuntu 9.04, so programs installed from the Ubuntu repositories were identical.

We chose Ubuntu 9.04 because the ARM-based Pegatron nettop we used in this report came with Ubuntu 9.04 preinstalled. An attempt to upgrade that box to the latest version of Ubuntu failed due to insufficient disk space. The Pegatron device was equipped with a 4GB flash drive.

We underclocked the 1.8GHz VIA Nano L3050 to 800MHz by using the CPU multiplier setting in the Centaur reference system’s BIOS. We verified the proper clock speed by reading MSR 0x198. For the Atom N450 Gateway netbook, we underclocked the Atom to 1GHz using the Gnome CPU Frequency Monitor taskbar applet. This handy applet does not yet support the VIA Nano.
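
For readers unfamiliar with the technique, the raw value of an MSR can be read under Linux through the msr driver; the sketch below (an illustration, assuming the msr kernel module is loaded and the program runs as root) reads MSR 0x198 on CPU 0. Decoding the current multiplier from that value is model-specific and is not shown here.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Read the raw value of MSR 0x198 on CPU 0 via the Linux msr driver.
       Requires "modprobe msr" and root privileges. */
    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t val;
        if (pread(fd, &val, sizeof(val), 0x198) != (ssize_t)sizeof(val)) {
            perror("pread");
            close(fd);
            return 1;
        }
        printf("MSR 0x198 = 0x%016llx\n", (unsigned long long)val);
        close(fd);
        return 0;
    }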


We used the Gnome CPU Frequency Monitor applet to set the Atom’s clock speed to 1GHz.

For JavaScript tests, all systems ran Firefox version 3.5.7. It is very important to use the same browser version for JavaScript tests because performance can vary tremendously from browser to browser or even version to version of the same browser.

We thank C.J. Holthaus and Glenn Henry from Centaur Technology for the VIA Nano L3050 reference board, and Katie Traut and Phillipe Robin from ARM for the tiny, Ubuntu based Freescale i.MX515-based Pegatron prototype system.

The Pegatron “nettop” is only slightly larger than a CD case yet it boasts a full complement of features including 512MB of DDR2-200MHz memory (32-bit interface), a VGA connector, wireless “N” networking, Bluetooth 2.1 + EDR, a flash memory card reader, and audio, headphone, Ethernet and USB ports. Total system power usage rarely rises much above 6 Watts.

Unless specified otherwise, all benchmark results are reported so that larger numbers correspond to better performance. Many tests have been “normalized” against the ARM Cortex-A8 so that results are reported as the performance ratio relative to the Cortex-A8. For instance, if the Atom is twice as fast as the Cortex-A8 on a certain test, it will score 2.00.

A gander at memory subsystem performance

As mentioned earlier, the memory subsystems vary significantly among these dissimilarly configured systems. The ARM Cortex-A8 struggles with its very weak DDR2-200MHz, 32-bit memory.

Nevertheless, memory bandwidth results are important because they underscore a handicap that ARM must eventually address. ARM systems have typically been optimized for extremely low-power environments while x86 systems have been aggressively optimized for performance. The Freescale i.MX515 sacrifices memory speed in exchange for low power usage, and this absolutely destroys performance on many types of tasks, as exemplified by our STREAM results.

As can be seen in the graph above, the ARM Cortex-A8 as part of the Freescale i.MX515 struggles against even the ancient AMD Athlon and is creamed by the VIA Nano and the Intel Atom. While part of the problem is its pokey memory, another component is the ARM chip’s meager 32-bit memory interface, half the width used for single-channel memory access by x86 chips. If the Cortex-A8 were equipped to access DDR2-800 memory through a 64-bit interface, it might very well keep up with its x86 rivals in terms of memory bandwidth.

For this report, ARM representatives explained the design decisions behind the Freescale i.MX515 used in our Pegatron prototype:

The ARM ecosystem is centered on a “right-sized” computing philosophy. ARM Partners design their SoCs to a particular set of applications, enabling the best tradeoff for power, cost and performance for a given application.   The Freescale i.MX51 was designed for a particular application class, with the memory subsystem designed for the needs of these applications. It is understandable that the performance of this memory subsystem will be different from platforms targeted at general purpose computing applications.

Incidentally, the VIA Nano can also be configured to support 32-bit memory access. This is desirable in severely space constrained environments where trace and pin counts adversely impact package and PCB implementation size.

Integer Performance

Although it might not always appear to be the case, all computing is the processing of numbers. From the words of a love letter, to the glistening dew drops on a rose, to Johnny Cash’s rumbling, anguished, repentant voice, to Gordon Freeman’s apocalyptic universe, to the ruby slippers on Dorothy’s feet, all are simply numbers to a computer.

For most chores, the only numbers that matter are integers. Integers are the natural counting numbers like 1, 2, 3 and their negative counterparts plus zero. With the exclusion of 3D gaming and some types of video and still image rendering, encoding and manipulation, the vast bulk of day-to-day computing is integer-based. The integer test results we look at here can give us insight into typical system performance across chores like word processing and web browsing.

The Embedded Microprocessor Benchmark Consortium (EEMBC) recently released a benchmark that is freely available to anyone. Dubbed “CoreMark,” this test provides a quick way to compare CPU performance across entirely different processor architectures.

We compiled CoreMark on each platform using GCC version 4.3.3 and the following flags:

-O3 -DMULTITHREAD=4 -DUSE_FORK=1 -DPERFORMANCE_RUN=1  -lrt

We chose to generate four threads to ensure scaling across a variety of systems featuring multiple cores and/or HyperThreading like the Intel Atom.

As you can see from the graph above, the ARM Cortex-A8 is very competitive on EEMBC CoreMark, running almost as fast as the Athlon and Nano. The Atom pulled ahead thanks to HyperThreading combined with its 25 percent clock speed advantage over the other chips. Unfortunately, there aren’t many more overall wins for the Atom ahead; please note, however, that most of the remaining tests are single-threaded.

“miniBench” is a diverse benchmark that I’ve been working on for several years. It’s part of my OpenSourceMark benchmarking project. miniBench contains a wide variety of popular tests and runs quickly from the command-line. I also have a GUI-based version that I wanted to use for this report but could not do so because the Qt tool chain would not install completely on the ARM system. Instead, I used the excellent and relatively lightweight Code::Blocks IDE to create and manage the necessary C++ project files for a command-line binary.

You can download the x86 Code::Blocks project here. An x86 Linux binary compiled with static libraries is here. A similar ARM Cortex-A8 Linux binary is here. Both the x86 Linux project and the ARM Cortex-A8 project will eventually be uploaded to the OpenSourceMark SourceForge page, along with GUI adaptations of these benchmarks.

The ARM Cortex-A8 struggles on three of the five tests in this first miniBench chart. Heap Sort is the worst result for the A8 and this is almost certainly because the test appears to be significantly impacted by memory bandwidth. The i.MX515 system is saddled with very poor bandwidth as already demonstrated in this report. Integer Matrix Multiplication is another memory bandwidth sensitive test where the ARM chip comes up short.

However, the ARM Cortex-A8 is extremely impressive on the Integer Arithmetic test, blowing away the Athlon and doubling the Atom’s performance. The Integer Arithmetic test does exactly what you’d expect it to do: it performs a large number of very simple integer arithmetic calculations.
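
To give a flavor of what such a test looks like, here is an illustrative loop in the same spirit; it is not the actual miniBench kernel:

    #include <stdio.h>

    /* Illustrative only -- not the actual miniBench kernel. A tight loop of
       simple adds, subtracts and multiplies of the kind such a test hammers on;
       the volatile sink keeps the compiler from discarding the work. */
    int main(void) {
        volatile long sink = 0;
        long a = 3, b = 7, c = 11;
        for (long i = 0; i < 100000000L; i++) {
            a = a + b;
            b = c - i;
            c = a * 3 + b;
        }
        sink = a + b + c;
        return (int)(sink & 1);
    }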

Also notice that the 800MHz ARM Cortex-A8 beats the 1GHz Intel Atom N450 on the ubiquitous Dhrystone benchmark despite the fact that the ARM chip spots the Atom a 25 percent clock speed advantage.  ARM advertises that we should be able to get 1,600 Dhrystone MIPS from an 800MHz Cortex-A8.  On our tests, the 800MHz ARM Cortex-A8 achieved 1,680 Dhrystone MIPS.

It’s clear that the ARM Cortex-A8 is aggressively optimized for Dhrystone performance, a conclusion borne out by how prominently ARM touts the chip’s Dhrystone throughput.

On the second set of miniBench integer tests, the ARM Cortex-A8 holds its own against the brawnier x86 CPUs. The ARM Cortex-A8 even beat the VIA Nano L3050 on the Sieve test.  More remarkably, the Cortex-A8 is very close to parity with the Atom across all of these tests, save for one, if the Atom’s 25 percent clock speed advantage is considered.

Notice, though, that the ARM chip could not run the String Concatenation test. This is an important indication of the relatively immature state of ARM’s Linux/GNU software support. Ubuntu as a whole was often flaky. Doubtless, this will improve with time.

The VIA Nano L3050 obliterates all of the competition on the hashing tests because the Nano features hardware support for these important security functions.

However, the 800MHz ARM Cortex-A8 is amazingly good at hashing and thoroughly beats the 1GHz Atom on both tests and is only slightly slower than the Athlon.

The VIA Nano L3050 enjoys its biggest triumph on the miniBench cryptography tests because the Nano is equipped with robust hardware support for AES ECB encryption and decryption.

Again, the ARM Cortex-A8 remains very close to the Intel Atom if the Atom’s 25 percent clock speed advantage is considered.

HardInfo is one of the few CPU benchmarks available from within Ubuntu’s repositories.

The ARM Cortex-A8 doesn’t perform quite as well on HardInfo as it did on miniBench, possibly because I used very aggressive optimization flags for both platforms when compiling miniBench. Nevertheless, the ARM Cortex-A8 stays within spitting distance of the x86 CPUs except on the FPU Raytracing test which is not an integer test but rather a floating-point test.

Floating-point performance is the ARM Cortex-A8’s Achilles’ heel, as we will see in the next section.

Floating-point performance

Gaming, scientific computing, certain spreadsheets like financial simulations and some image and video manipulation tasks involve fractional and irrational numbers. Called “floating-point” because the decimal or radix point can float around among the significant digits of a number, floating-point performance has become increasingly important in modern computing.

However, good floating-point performance is relatively hard to engineer and requires a substantial number of additional transistors.  Of course, this drives up power usage. Typically, floating-point intensive operations consume more power than pure integer tasks. In fact, miniBench’s LinPack test was the worst case power consumer on the VIA Nano.  Centaur discovered this while I worked there as head of benchmarking.  However, this does not include “thermal virus” programs like the absolute worst case program developed by Glenn Henry, Centaur’s president.

Integrated floating-point (FP) hardware is a fairly new addition to ARM processors and even though the Freescale i.MX515 ARM Cortex-A8 features two dedicated floating-point units, there are still severe limitations. The faster of the two FP units is the “Neon” SIMD engine, but it only supports 32-bit single-precision (SP) numbers. Single-precision numbers are too imprecise for many types of calculations.

Hardware support for 64-bit, double-precision, floating-point calculations is provided by the “Vector Floating-Point” (VFP) unit, a pretty weak coprocessor. And despite being called a “vector” unit, the VFP can only really operate on scalar data (one at a time), although it does support SIMD instructions which helps improve code density.

Oddly enough, during our performance optimization experiments, Neon generated the same level of double-precision performance as the VFP, while doubling the VFP’s single-precision performance.  When we asked ARM about this, company representatives replied, “NEON improves FP performance significantly. The compiler should be directed to use NEON over the VFP.”

We therefore compiled miniBench to leverage Neon for this report. Note that while the Neon compiler flag was used for the ARM chip, none of the tests are explicitly SIMD optimized – the x86 version of miniBench used in this report does not include hand-coded SSE or SSE2 routines and the ARM Cortex-A8 version of miniBench does not include similar Neon code.
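
For readers compiling their own ARM binaries, GCC enables Neon code generation with flags along these lines (shown only as an example; the exact flags used to build the report’s binaries may differ):

-mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp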

In the miniBench MFLOPS tests, the ARM Cortex-A8 looks pretty bad except on division.

While the VIA Nano has the best DP (double-precision) performance, note how well the Intel Atom  N450 handles SP calculations.

It is also worthwhile to recognize the very good floating-point division performance of the ARM Cortex-A8’s Neon.  Unlike all of the x86 chips that I have ever tested, the Cortex-A8 delivers identical throughput for both floating-point division and multiplication.  Division is much slower on x86 processors than multiplication.  Consequently, the Cortex-A8 keeps up very well with the x86 CPUs in this report on DP division, more than doubling the Atom’s performance when the Atom’s clock speed advantage is considered. In single-precision division, the ARM Cortex-A8 beats ALL of the x86 microprocessors it’s pitted against here.
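
For anyone who wants to probe this on their own hardware, a crude sketch along the following lines contrasts double-precision multiply and divide throughput (link with -lrt on older systems, as with CoreMark above):

    #include <stdio.h>
    #include <time.h>

    /* Crude contrast of double-precision multiply versus divide throughput.
       Real measurement needs far more care (warm-up, optimization barriers,
       timer resolution), so treat the output as a rough indication only. */
    static double elapsed(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        const long N = 50000000L;
        volatile double x = 1.0;
        struct timespec t0, t1, t2;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++) x = x * 1.0000000001;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        for (long i = 0; i < N; i++) x = x / 1.0000000001;
        clock_gettime(CLOCK_MONOTONIC, &t2);

        printf("mul: %.2fs  div: %.2fs\n", elapsed(t0, t1), elapsed(t1, t2));
        return 0;
    }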

The ARM Cortex-A8 continues to languish on the remaining miniBench floating-point tests with two notable exceptions. The Cortex-A8 is fairly strong on FFT calculations, an extraordinarily important algorithm for many, many tasks. The ARM chip is also competitive with the Atom on the Double Arithmetic test.

Observe how the old Barton-core Mobile Athlon demolishes all of the other chips on Trig. AMD has historically provided industry leading performance on transcendental calculations, while the same area has always been a big weakness for VIA’s CPUs.  ARM really needs to bolster their chips’ performance on transcendental operations like the trigonometry functions exercised in this test.

The takeaway from this section is that the ARM Cortex-A8 does not deliver acceptable floating-point performance for netbooks, notebooks or desktops compared with x86 CPUs. This is an area ARM must address if the company plans to compete toe-to-toe with x86 microprocessors.

JavaScript performance

JavaScript performance has become very important as cloud-based computing has finally begun to take hold with the appearance of solutions like Google Apps, Zoho Office, Adobe’s Acrobat.com, Aviary and many more applications. The Google Android operating system largely foregoes native applications and leverages Web-based JavaScript programs. Jolicloud Linux takes a similar but less aggressive tack allowing native and cloud-based applications to seamlessly co-exist.

There are several widely used JavaScript tests that run across all of the CPUs examined in this report. However, it is very important to run these tests on the same browser across all platforms. The specific browser version matters as well because JavaScript performance varies wildly from browser to browser and from version to version as web browser developers push each other in a mad race to provide the fastest JavaScript engines.

Thankfully, Firefox 3.5.x is available for each system included in this report and we used it for these tests.

FutureMark, the maker of PCMark and 3DMark, has introduced its own JavaScript benchmark called Peacekeeper. FutureMark Peacekeeper is hands down the most elaborate JavaScript benchmark currently available, although it is difficult to assess its validity. PeaceKeeper is the only JavaScript test in our roundup that had complex graphical components.

The Freescale i.MX515 ARM system fared poorly against its x86 rivals across all Peacekeeper tests. This might be partially attributable to the slow main memory subsystem that saddled the Cortex-A8. The i.MX515 Cortex-A8 only has 256kB of L2 cache compared to 512kB for the Athlon and the Atom and 1,024kB for the Nano, so it is much easier for a benchmark to spill out of the Cortex-A8’s L2 cache and into its extremely slow main memory.

ARM representatives agreed that the Cortex-A8’s poor showing on FutureMark Peacekeeper is most likely due to its L2 handicap, perhaps making Peacekeeper, in the context of this report, more of a comparison of memory subsystems than of processors.

Note also that the ARM system failed to complete the Peacekeeper complex graphics test.

The VIA Nano L3050 was the clear winner of FutureMark’s PeaceKeeper, besting all of its rivals on every test. Even though the Intel Atom N450 was far behind the two other x86 chips, its overall score was nearly twice that of the ARM system. Again, keep in mind that the Atom also ran with a 25 percent clock speed advantage over the other chips in this comparison. Also be aware that JavaScript is not threaded, so the Atom’s HyperThreading engine won’t help it much on JavaScript tests.

With Google in the lead of cloud-based computing efforts, it should not be surprising that the search engine giant also provides its own JavaScript benchmark. Unfortunately, the Google V8 benchmark does not behave like a very good benchmark at this point, demonstrating large run-to-run variation and superlinear scaling. Nevertheless, Google V8 is a popular JavaScript benchmark, so we included it here.

The Google V8 benchmark closely reproduced FutureMark Peacekeeper’s results. VIA’s Nano L3050 won every test by significant margins again. The Atom trailed the other x86 processors badly, but still nearly doubled the ARM Cortex-A8’s showing.

Our final JavaScript benchmark is SunSpider, perhaps the most popular JavaScript test in use today.

Again, the ARM Cortex-A8 does not look good, faring only slightly better than on the other two JavaScript benchmarks.

The VIA Nano L3050 barely pulls out an overall win, its score hurt by very poor performance on bit level operations.  The ARM Cortex-A8 beats the Nano on two of these tests.

Despite its age, the AMD Mobile Athlon based on the Barton core has delivered competitive performance across nearly all tests.

I must state at this point that the JavaScript results do seem to reflect the relative, subjective, overall feel of the four systems. Despite its strong showing on many integer tests, the Freescale i.MX515-based Pegatron system feels much more sluggish than all three of the x86 systems; the Pegatron’s extremely slow memory subsystem doubtlessly contributes to this issue. The Atom N450 is also clearly more lethargic than either the AMD Mobile Athlon or the VIA Nano L3050 systems. The AMD and VIA systems are essentially indistinguishable during normal usage.

2D graphics performance

Take the following chart with a grain of salt because the video subsystems across the four systems are very dissimilar. The VIA, Intel and Freescale systems all used integrated graphics while the AMD system was equipped with a discrete NVIDIA NX6200 AGP card.

Even though the three x86 systems ran at 24-bit color depth, they were all two to three times faster than the ARM system that ran at only 16-bit color depth. We tested all systems at 1024×768 (XGA) resolution except the Atom, which we tested at the native panel resolution of 1024×600.

Power consumption

While the x86 microprocessors in this comparison enjoy a clear overall performance advantage, ARM CPUs are renowned for their power usage thriftiness. It is very difficult to compare power usage among the four CPUs under test for this report. The AMD and VIA systems are inappropriate for power comparisons because they are based on desktop hardware.

The chart below contrasts power consumption between the Intel Atom N450 and the ARM Cortex-A8 while running miniBench. The power curves were generated from system power usage adjusted downwards so that idle system power was discarded. For the Atom, idle power was 13.7W with the Gateway netbook’s integrated panel disabled while the idle power for the Pegatron system was only 5.4W.

Be aware that the Pegatron prototype does not implement many power management features.  ARM representatives note:

The Pegatron development board was designed as a software development tool and does not have a commercial production software build so it does not have many of the power management features found in ARM-based mobile devices. Production systems would expect to have aggressive power management implemented, lowering the ARM power consumption.

Given this information, the results we show here likely represent an energy consumption condition considerably worse than would be encountered with a similarly configured, commercial, ARM Cortex-A8-based system.

Subtracting idle power usage should isolate the curves to the power necessary for running miniBench. Note that the Atom reached minimum power usage shortly after startup and never reached that level again. Idle power beyond that point is about 1 Watt higher. Even taking that into consideration, the Atom consumes at least three times the power of the ARM Cortex A8 on the same tests.

It’s particularly interesting to see how power usage compares on the AES tests where both CPUs deliver comparable performance. The first major hump on the Atom curve shows the power consumed on the AES tests. Compared with the ARM Cortex-A8, the Intel Atom N450 required about four times more power while delivering only about 30 percent additional performance – and this is with a 25 percent clock speed advantage.

The sharp peak in Atom power usage occurred on the miniBench floating-point memory bandwidth tests.

The Atom completes miniBench in about one-half the time needed by the ARM Cortex-A8 due to the ARM processor’s very poor floating-point performance. The first major dips in both curves (at 1000s and 2000s) indicate where the two systems complete the benchmark.

Even though floating-point hardware can draw a lot of power, FP units usually deliver significant energy savings because floating-point operations take much less time to complete with accelerated hardware support. Energy consumed for a task is: E = P * t, where “E” is for “Energy,” “P” is for “Power” and “t” is for “time.” Good floating point hardware might drive up power demands, but the time to complete FP operations is reduced enough to dramatically reduce the total energy needed for those operations.
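
A quick worked example with hypothetical numbers makes the point:

    #include <stdio.h>

    /* Energy = Power x time. A hardware FP unit can draw more power yet still
       save energy if it finishes the work sooner (hypothetical numbers). */
    int main(void) {
        double p_soft = 2.0, t_soft = 300.0;   /* software FP: watts, seconds */
        double p_hard = 5.0, t_hard =  60.0;   /* hardware FP: watts, seconds */

        printf("software FP: %.0f J\n", p_soft * t_soft);   /* 600 J */
        printf("hardware FP: %.0f J\n", p_hard * t_hard);   /* 300 J */
        return 0;
    }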

Despite the fact that the ARM Cortex-A8 blows away the Intel Atom in power thriftiness, don’t belittle the Atom. It is a resounding success in terms of reducing the power demands of x86 microprocessors. The Intel Atom is currently the only realistic x86 system-on-chip (SoC) design ready to migrate downwards into smartphones.

Doubtlessly inspired by the VIA C7 — which explains why Intel set up shop in Austin, the same town where VIA’s Centaur design team is headquartered  (in fact, a few ex-“Centaurians” worked on the Atom)  – the Intel Atom delivers acceptable performance while sipping power at levels far lower than usually seen in the x86 world. Right now, there is no competing, low-power x86 CPU – let alone SoC – that can match the Atom in terms of performance per Watt, especially on multithreaded applications.

Conclusion

The ARM Cortex-A8 achieves surprisingly competitive performance across many integer-based benchmarks while consuming power at levels far below the most energy miserly x86 CPU, the Intel Atom. In fact, the ARM Cortex-A8 matched or even beat the Intel Atom N450 across a significant number of our integer-based tests, especially when compensating for the Atom’s 25 percent clock speed advantage.

However, the ARM Cortex-A8 sample that we tested in the form of the Freescale i.MX515 lived in an ecosystem that was not competitive with the x86 rivals in this comparison. The video subsystem is very limited.  Memory support is a very slow 32-bit, DDR2-200MHz.

Languishing across all of the JavaScript benchmarks, the ARM Cortex-A8 was only one-third to one-half as fast as the x86 competition. However, this might partially be a result of the very slow memory subsystem that burdened the ARM core.

More troubling is the unacceptably poor double-precision floating-point throughput of the ARM Cortex-A8. While floating-point performance isn’t important to all tasks and is certainly not as important as integer performance, it cannot be ignored if ARM wants its products to successfully migrate upwards into traditional x86-dominated market spaces.

However, new ARM-based products like the NVIDIA Tegra 2 address many of the performance deficiencies of the Freescale i.MX515. Incorporating two ARM Cortex-A9 cores (more specifically, two ARM Cortex-A9 MPCore processors), a vastly more powerful GPU and support for DDR2-667 (although still constrained to 32-bit access), the Tegra 2 will doubtlessly prove to be highly performance competitive with the Intel Atom, at least on integer-based tests. Regarding the Cortex-A8’s biggest weakness, ARM representatives told us its successor, the Cortex-A9, “has substantially improved floating-point performance.” NVIDIA’s CUDA will eventually also help boost floating-point processing speed on certain chores.

Unmatched software support has always been the “ace in the hole” for the x86 contingent. However, with the success of Linux and the maturity of its underlying and critical GNU development toolset, Linux/GNU support could be the great equalizer that allows ARM to finally overcome the x86 stranglehold in netbooks and even notebooks and desktops. Maturing Linux support might also assist ARM chips to make further incursions into gaming devices.

I didn’t expect it, but the emerging war between ARM and x86 microprocessors is turning out to be much more competitive and interesting than I ever imagined.

In addition to the main ARM versus x86 focus of this report, there is also a subplot pitting the new Intel Atom N450 against the new VIA Nano L3050. The Intel Atom N450 is a remarkable product in that it is the first x86 SoC (system-on-chip) that is suitable for smartphones and other ultra-low power environments. As such, the Atom promises to dramatically improve the sophistication and performance levels of those market spaces.

While the various Atom models currently dominate the booming netbook market, it is evident from our JavaScript tests that the VIA Nano L3050 is much more desirable if JavaScript performance is important at all. Across our JavaScript benchmark results, the 800MHz VIA Nano L3050 is about 50 percent faster than the 1GHz Intel Atom N450.

However, VIA still lags Intel in terms of suitability for low power consumption environments, largely because Intel leverages its outstanding 45nm fabrication technologies with the Atom, while VIA still produces the Nano L3050 in the relatively elderly 65nm Fujitsu process node. The Atom is also strong on multithreaded tasks as demonstrated by its CoreMark victory. HyperThreading will also benefit the Atom in I/O intensive environments where the single-core Nano will be hard-pressed to keep up.

Lastly, the AMD Mobile Athlon in this comparison gives us important insight into how the new chips from Intel, VIA and ARM stack up historically. Overall, across all of our performance tests, the ancient Barton core-based Athlon came in a very close second behind the VIA Nano L3050. This suggests AMD could easily produce a competitive low power CPU if the chipmaker did nothing else but shrink one of its older core designs while adding a few power saving tweaks.

In summary, ARM is positioned very well to engage in battles with the Intel Atom as that x86 chip advances into smartphones. The ARM Cortex-A8 appears to use much less power than the Atom, while often delivering comparable integer performance. Nevertheless, the Atom is significantly faster overall when considering holistic system performance, but that performance will be accompanied with a battery life penalty and significantly more heat production. Heat is a serious problem within the tight confines of mobile phones.

New chips based upon ARM Cortex-A9 derivatives, like the NVIDIA Tegra 2, address many of the performance weaknesses we encountered with the Freescale i.MX515. If ARM is to achieve sustained victories in the netbook space – let alone in the more performance demanding notebook and desktop spaces – ARM must substantially improve floating-point throughput.

While the dedicated functional block approach used by ARM and its legions of licensees to provide image manipulation, video decoding/encoding, security and Java acceleration is still valid, it is not a substitute for double-precision floating-point performance.

ARM representatives told us for this report that the Cortex-A9 “has substantially improved floating-point performance.”  It will take a big jump forward to catch their x86 rivals, but if ARM pulls it off, Intel, AMD and VIA are going to have a big, bloody war on their hands.  It is conceivable the x86 empire might finally see the boundaries of its swelling, vast territories begin to retract in the near future under an army ant-like assault of tiny, fast, cheap, multi-core ARM microprocessors coming at them from dozens of different companies.

ARM’s success might also have a negative impact on Microsoft, since Linux will almost certainly play a major role in ARM’s ability to storm the netbook, “nettop,” notebook and even desktop spaces.

Whatever the outcome, it’s time to pay attention to ARM. Our results clearly demonstrate how it was possible for an ARM chip to steal the Apple iPad away from Intel’s Atom. The Apple iPad might represent merely the first of many ARM victories in its escalating war against the x86 world.

We thank Katie Traut and Phillipe Robin from ARM for the impressively tiny but full featured Freescale i.MX515-powered Pegatron prototype Ubuntu system. We also thank C.J. Holthaus and Glenn Henry from Centaur Technology for the VIA Nano L3050 reference board.

Last summer after eight years there, Van Smith left his job at Centaur Technology to form the company Cossatot Analytics Laboratories. Van was head of benchmarking for Centaur and represented VIA Technologies within the BAPCo benchmark consortium. Van has written a number of computer benchmarks including OpenSourceMark and miniBench and he has influenced or directly contributed to many others. For instance, Van wrote the cryptography tests in SiSoftware Sandra.

Nearly ten years ago, Van departed Tom’s Hardware Guide as Senior Editor to form his own website, Van’s Hardware Journal (VHJ). Van was recently interviewed and quoted in a CNN article based upon his investigative journalism published at VHJ. Van also served as Senior Analyst for InQuest Market Research.



Kagan appointment: How separate are church and state in the United States?

Jul 21, 2010
 

Should we view the following astonishing statistics as separation of “church and state” in the United States, or rather “government without representation”?

The current makeup of the United States Supreme Court includes 6 Catholics and 2 Jews. If Kagan’s nomination goes through, there will be 6 Catholics and 3 Jews on the Supreme Court.  The make-up of the court would be 66.6% Catholic justices and 33.3% Jewish justices.

As of 2008 the U.S. population professed to be 76% Christian with 25.1% of the population specifying Catholicism. This means that slightly over 50% of the American population professes to be Protestant. The Jewish faith makes up 1.2% of the 2008 population and 15% of the population stated that it had no religious affiliation. Here is a graph with a more detailed break out.

This is a list of the current Supreme Court justices with their religious affiliations.

John Roberts – Catholic

Stephen G. Breyer – Jewish

Ruth Bader Ginsburg – Jewish

Anthony M. Kennedy –  Catholic

Antonin Scalia – Catholic

Sonia Maria Sotomayor – Catholic

Clarence Thomas – Catholic

Samuel Alito – Catholic

John Paul Stevens  – Protestant (retired June 29, 2010)

Elena Kagan – Jewish (Her nomination passed the Senate Judiciary Committee on July 20, 2010.)

If you’d like a more detailed look at the history of the Supreme Court and religious affiliation in our country, scroll through this interesting page.

At this point, I’m beginning to feel a little, no a lot, like Marvin, the robot in Hitchhiker’s Guide to the Galaxy….

Jun 19, 2010
 

A most beloved aunt passed away today. The daughter of Italian immigrants, she worked as a bookkeeper for three generations of owners at the local car dealership. She has two daughters, three grandchildren and six great-grandchildren. She was preceded in death by her husband, Eugene Anderson.

This prayer summarizes her beauty.

Dear Lord, Let my life be a reflection of Your mercy and goodness. Allow a lovely and gentle spirit to shine forth from me. Make my countenance properly express your holiness. Keep me from attempting to glorify myself and help me to instead glorify You in all that I do, wear, and proclaim. Take my life, Lord, do with it what You will, and make me content and joyful always and in everything, Amen

from Raising Maidens of Virtue, Stacy MacDonald

She made those she touched feel special. May she rejoice and laugh in the presence of God.

I’ll miss you, Aunt Anne. Love, Kathy

Jun 10, 2010
 

Dear Kathy,

Thanks for putting up with my shenanigans for fifteen years and keeping your faith in me.  And thank you for being such a wonderful mother to our five beautiful children.

Happy 15th Anniversary, Sweetheart!  I hope I’ve brought you at least a small fraction of the happiness that you have given me.

I love you and I need you,

Van

Jan 22, 2010
 

Over the last few weeks, I made several posts to the message board of the TV program Conspiracy Theory with Jesse Ventura.  I discuss the Georgia Guidestones here.  I post that we have nothing out of the ordinary to fear from nature on 12 / 21 / 2012 here.  Responding to many questions on the issue, I examine HAARP’s potential use as a tectonic weapon and the possibility that the Alaskan facility triggered Haiti’s recent earthquake here.

Changing Seas

Oct 27, 2009
 

After nearly eight years, I resigned from my position at Centaur Technology to start a new company, Cossatot Analytics Laboratories (abbreviated CAna Labs). At Centaur, I was head of benchmarking.

I can say with a level head that Centaur is one of the best places to work in the world. I’ll write more about both my experiences at Centaur and CAna Labs soon.
“What about Van’s Hardware?” you might ask. That’s a good question. The site will likely remain on life support for the immediate future.
For an urgent fix of computer hardware information, I can recommend my favorites. Of course, HardOCP, AnandTech and The TechReport seem to be getting better and better and that’s why they continue to flourish.
I love Mike Magee; long live Mike! He’s now writing and editing for TGDaily, a very nice website that was spawned from my old haunt, THG. Likewise, I follow Loyd Case who seems to be just about everywhere nowadays.
My friend Rick C. Hodgin used to be managing editor for Wolfgang Gruener at TGDaily. Rick tempted me a time or two to write for him. Although Rick always spoke very glowingly of Wolfgang, he no longer works for him. Rick is still posting on and off at Geek.com.
For the longest time, our very own Joel Hruska worked for ArsTechnica, another top notch tech site. Now, HotHardware is lucky to have Joel’s services.
Speaking of Tom’s Hardware, I now visit there frequently after all of these years; the content there is quite good again.
I often don’t agree with his takes on NVIDIA, but I’ll never stop following my old friend Charlie.
And speaking of old friends, John Oram is now writing for Theo Valich’s Bright Side of News. I don’t know Theo, but John speaks very, very highly of him.
If you want to dive deep into computer tech, visit Lost Circuits, perhaps the most technically rigorous hardware site of them all.
Sadly, one of my favorite sites is no more. Ace’s Hardware has been defunct for several years now. Fortunately, Johan De Gelas continues to write detailed analysis for Anand. It’s true that a surrogate Ace’s forums sorta still lives on, but I’m not a big fan of message boards.
I haven’t posted on technology related message boards in many, many years. When VHJ was popular, whenever I posted under my real name I always had to be prepared to devote a lot of time for responses. To avoid this, for a few months soon after I began working for Centaur, I posted anonymously under whatever name popped into my head at the time (but I never attempted to hide my IP address out of respect to the forum owners). But even posting anonymously is a headache because it still invites responses, and many message board devotees can be determined, provocative and harsh. Some people simply like to argue. I do not like to argue, and I don’t like to spend my time reading people trade insults, which is all too common on many message boards.
Dave Graham did a fantastic job running our own VHJ message boards. We had interesting discussions there at times, but the message boards were still often a headache despite the fact that they achieved some level of success, largely due to Dave’s work.
Lastly, I like to take a peek at what Andrew Orlowski writes at The Register. I had the pleasure of meeting Andrew many years ago at a conference. He’s a very nice guy and his articles are insightful, intelligent, bold, well written and worth reading.

3,000 Low Temp Records Set This July in U.S.

Jul 26, 2009
 

AccuWeather blogger Jesse Ferrell reports that July was unusually cool over large sections of the United States. In fact, he claims that 3,000 low temperature records will be broken over this region during this month of July. He writes:

First, some stats. 1,044 daily record low temperatures have been broken this month nationwide according to NCDC — count record “low highs” and the number increases to 2,925, surely to pass 3,000 before the end of the month.

Apparent attack on Dutch royal family

Apr 30, 2009
 

A 38-year-old Dutch man drove his black Suzuki Swift into the Dutch royal parade, narrowly missing an open-topped bus carrying the 71-year-old Queen Beatrix and her adult children. The man struck several onlookers, killing five and injuring twelve.

Upon questioning, the driver stated that his attack was directed at the royal family. The man is in critical condition after his car collided with a large, stone obelisk.

The Dutch royal family has close ties to the secretive Bilderberg Group, an organization seen by many as a major command component of globalism. Prince Bernhard was one of its founders, and the Bilderbergers maintain their headquarters in the South Holland town of Leiden.

Mexican Flu observations

Apr 30, 2009
 

The U.S. Government, the corporate media and the WHO — three globalist organs — continue to under-report the number of Mexican Influenza cases and underplay its lethality.

At the same time, the WHO and the U.S. Government are deploying higher and higher levels of pandemic controls.

Monday’s conveniently timed buzzing of “Ground Zero” New York by one of the President’s 747s flanked by two fighters was almost certainly a psyop intended to blunt public attention away from the flu, where the first line measure of shutting down the U.S.-Mexican border has still not been taken.

It appears that the Mexican Flu is virulent and spreading rapidly. The number of existing cases appears to be significantly higher than what is being reported. When the true numbers are finally released, they will probably be accompanied with unprecedented measures from our federal government.

While no cases of the flu have been detected in pigs yet, Egypt has already announced that it will slaughter its entire hog population.

Cooler Master HAF 932 for $124.99

 Uncategorized  Comments Off on Cooler Master HAF 932 for $124.99
Apr 302009
 

Newegg is running a sale on the Cooler Master HAF full-tower computer case. I have one of these at home, and it is an outstanding case that I highly recommend. Kathy paid Fry’s $160 for my HAF. Newegg is selling them for $124.99 with promo code EMCLRPL24.

With the HAF and Sniper cases, Cooler Master is probably producing the best mid-priced enthusiast cases right now. The Antec Twelve Hundred has its high points, but it is not tool-free, has limited front panel connectors and the case fans do not have motherboard connectors.

My Cooler Master HAF recommendation comes with two caveats: 1. It’s a huge case. 2. It doesn’t have filters.

Although the Cooler Master HAF features three huge 230mm fans, only one is lit by LEDs. These fans do not have external speed controls, but they are very quiet. Cooler Master should also implement hot-swappable SATA drive bays like those on Zalman’s case.

Apr 282009
 

While the music industry decides what its next medium for mass marketing shall be, the music file industry has become a runaway train. I have been writing about and testing PC-related products for almost six years now for http://www.madshrimsps.be/ among others, and recently I re-discovered my once favorite hobby. No, it’s not surfing 25-ft swells in late September off Marconi Beach State Park on Cape Cod, Massachusetts; it’s High End Audio. When I suffered a spinal injury some years ago racing MTB bikes, I was forced to liquidate all my High End Audio gear long before my humility would allow me to collect Social Security Disability (although this is an earned benefit everyone pays into). As I sold all that stuff, the Futterman NYAL OTL prototype amps, panel speakers, a Sony x777 CD player (their $3k audiophile model) and thousands in cables, I realized that I had begun several years before as a pure music lover, and that by that time I had actually lost that passion. I started out, after hearing a High End system, by running out and buying every CD I could find; by the time I sold everything, I had just two CDs I considered audiophile-quality recordings. I wasn’t listening to music, I was listening to the nuances in hardware.

I had spent well over $10k on my own system, which consisted of dealer demo “bargains,” and then spent years sitting in front of over one hundred different types of loudspeakers. I was isolated and frustrated in my perfectionism. High End Audio costs money, and reproducing a true 20Hz ~ 20kHz costs big money. When you want Joni Mitchell in the room, it costs about as much in High End gear as it would to have her perform live at your home. It’s a world unto its own and an absolute pleasure for the electronics hobbyist with an open bank book. When it comes to “Affordable High End,” what we have is another misnomer, like “PC-Audio.”

PC-Audio has always been driven by multi-channel affairs designed for gaming, hence surround sound. Recently this has begun to change, as ideas from High End Audio slowly work their way into PC-Audio. When I was into High End, separates were the rule: isolate and specialize each function, purifying it. Instead of a CD player you have a transport and a separate DAC, and then a distinct power unit for that device. Now the digital cable between transport and DAC becomes an industry unto itself. From the DAC’s analog outputs we now need silver-wire, Kapton-insulated interconnects. Then comes the pre-amp, and if there’s a phono stage, it is separate, with its own separate power supply. Then we have the interconnects from pre-amp to stereo amp or monoblocks. Either way, once again the laws of physics meet engineering as an art form. For example, read this little gem about a pair of “vacuum” sealed interconnects which cost $14,900 per 1m pair, the Tara Labs Zero interconnects. I understand the science, I understand the potential for why they “sound” better, and I also understand that’s what I spent on my entire system in 1992. In that Tara Labs article, the reviewer’s speakers are my 10.0 drool factor: the mbl 101E Radialstrahler loudspeaker, costing a mere $44,900, or about the same as a P-Class Mercedes (P = Poor). And the mbl 101E are still considered a value given what some other High End models cost.

So, is there a silver-solder trickle-down effect in High End Audio? Erm, maybe. The current trend is Chinese-assembled integrated tube amplifiers, especially single-ended pentode designs, many incorporating USB-fed (or TOSLINK) DACs. This new breed of audiophile-quality, plug-and-play integrated amps with on-board D/A conversion is made for use with the modern PC or laptop, a first. This transforms your PC’s HDD into an instant music file server and provides access to thousands of free streaming audio websites the world over. My favorite has been Deezer.com, which offers one of the “cleanest” signals, especially in their high-quality stream Chanson Francaise. There you’ll find many live studio recordings, the kind which allow a system capable of 3D imaging to come into its own. There are many strong acoustic numbers and clear vocals, as well as rock and some funky folk. In a rarity for the annals of High End, I recently auditioned a single-ended pentode integrated amp with an on-board Burr-Brown USB DAC from Tecon. Their Model 55, reviewed here, costs a mere $389 shipped in the USA, and all that’s required is a PC and passive speakers. I mated the Tecon to the venerable Lovecraft Designs Abby, also reviewed by me, although at $2k this defeats “affordable,” at least relative to what “we” might consider. In the High End world, some wouldn’t make this purchase out of some discombobulated sense of pride. An ideal match for the Tecon Model 55 would be the affordable Tekton Model 6.5, which costs just $350; Tekton makes models as low as $200 and as high as $3,300. Basically, $600 will get you one of the best-sounding systems you’ve ever owned if you’re an owner of mass-market mid-fi. Very respectable and accurate, with an ability for three-dimensional imaging that will certainly launch your High End Audio love affair.

Undoubtedly the CD is in gradual decline, but it will not necessarily go the way of vinyl; vinyl still has in its favor a true analog signature and will be favored by a large majority of the purist audiophile crowd so long as they continue to press it. Except for the Compact Discs sitting on shelves, the future is the music file, be it lossless, MP3, MP4, etc.

There is one product out there which has recently captivated me: a unique active speaker system from a company known as Avi HiFi. Originating out of the UK, like so many great loudspeakers, it is the Avi ADM9.1 active loudspeaker system. In my many years of “High End Audio,” this system has completely redefined what I’ve come to expect from High End Audio as well as what is considered a value. How are they different? The typical powered speaker system utilizes a single amplifier in a main enclosure, from which zip-cord feeds the right or left passive unit. Already we have problems with such a design, since the additional electronics in one enclosure give it slightly different sonic characteristics and other potential offsets. Avi, in the purist tradition, has placed a bi-amplified 250W mid/bass driver and 75W tweeter in each enclosure. Therefore each speaker is a true active unit under its own power; ergo, each has its own power cord. The main unit contains two TOSLINK inputs which feed a Wolfson 8741 DAC. Two analog inputs allow you to bypass the Wolfson DAC (although I can’t see any reason why you would), which is one of the most neutral-sounding I’ve had the pleasure of hearing. It simply invites streaming music files and gives back all it takes. Another RCA out supplies the right speaker, and if it’s more bass you desire, there is a dedicated subwoofer output. Avi makes a subwoofer which was designed with the ADM9.1 speakers in mind. The following was taken from Avi’s webpage:

Therefor we’ve produced a special dedicated, ultra high powered 10″ model for the ADM9’s. The voice coil is 3″ diameter, maximum excursion is 2″ and it’s in a sealed box and driven by a linear, analogue bipolar amplifier that can produce up to 30 Amps and 200 Watts. The filter can be set to 20, 30, 40, 60, 80 or 100 Hz and gain adjusted to suit room acoustics. In practice it extends low frequency excursion to below 30Hz…

I do not feel the need for a subwoofer since I don’t feel as if I am missing much. The ADM9.1’s are rated down to 60Hz, and I am certain this is the most honest 60Hz I’ve ever heard. Given the number of speakers I have owned and heard rated down to “50Hz ~ 40Hz” based on budget constraints, this is a region I am most familiar with. The sound from the ADM9.1’s is like nothing I’ve ever heard from an active speaker system, indeed from any combination of separates up to $10k and beyond. They are as fast as planar ribbons (there will be consequences for that description), image as if they were 360-degree radiators, and pack bass worthy of full-range speakers. If any of these attributes were lacking, they would still be a bargain at approximately $2k USD. The fact that they present a realism which flows into and fills the room at live-music levels means even the most critical listener can’t deny they perform far beyond their stature. Listening to Joni Mitchell’s Blue, the ADM9.1’s were able to maintain some of the highest-pitched vocals while still allowing you to hear her diaphragm expand and the moisture on her lips, and to know exactly where she sat when letting loose on the microphone; they bested my ProAc, Cary, Audible Illusions combo from years back. If I can summarize these speakers in one word, it would be “Effortless.” They just don’t seem as if they have to work as hard as many to give what most cannot.

If I had $10,000 to spend on anything I wanted in High End Audio, I would still buy the ADM9.1 and spend the rest on a Tapestry. They are the best value in audio I’ve seen in many years, especially when you consider you don’t have to leave your seat to enjoy streaming audio from your PC. I’ve neglected other reviews since the ADM9.1’s arrived, and if I could afford them I’d own them by now. The highest compliment I can give them is that I don’t want to give them back 🙂

Britain to screen all passengers arriving from Mexico for flu

 Uncategorized  Comments Off on Britain to screen all passengers arriving from Mexico for flu
Apr 272009
 

Britain has begun screening all airline passengers arriving from Mexico for Mexican Influenza.

Although the United States announced a public health emergency yesterday with domestic Mexican Flu cases now reaching around 20, Janet Napolitano, head of Homeland Security, dismissed similar measures for American airports. Napolitano also failed to take any action to tighten US-Mexico border security, exposing Americans to further infections from our southern neighbor where panic is causing many people to flee infected areas.

Fort Detrick disease samples may be missing

 Uncategorized  Comments Off on Fort Detrick disease samples may be missing
Apr 262009
 

A criminal investigation is underway at the U.S. Army infectious disease research center in Fort Detrick, Maryland where infectious disease samples have apparently vanished.

Fort Detrick was the source of the anthrax bioweapon material used during the anthrax attacks soon after September 11th, 2001. Bush and his staff were already on ciprofloxacin, an antibiotic that is commonly used to treat anthrax infections, at the time of the attacks, which eventually killed five people and sickened seventeen.

Mexican Flu Emergency Declared in U.S.

 Uncategorized  Comments Off on Mexican Flu Emergency Declared in U.S.
Apr 262009
 

U.S. federal representatives declared a public health emergency today as more Mexican Flu victims have been identified within the country. The official count of domestic Mexican Flu cases has risen to 20 with one requiring hospitalization. Among the emergency measures taken by the Government are plans to “release a quarter of its 50-million-unit strategic reserve of antiviral medications” to areas sustaining flu cases.

Notably absent were any plans to lock down the United States’ border with Mexico, an obvious measure that would impede the disease’s migration northwards. This stands in contrast with actions already taken by countries like Japan, which are discouraging, if not outright restricting, travel to and from Mexico.

As we reported yesterday, evidence suggests that the disease is already widespread in the U.S., but with flu awareness and fear exploding over the last few days, case numbers are likely to escalate rapidly as people with flu symptoms flock for medical treatment.

So the soaring numbers of U.S. Mexican Flu cases over the next week will produce the illusion that the disease is spreading rapidly, when in actuality we are only gaining insight into the true, existing prevalence of the disease.

The Mexican Flu has several characteristics that suggest it might be a man-made bioweapon. Perhaps most concerning of all is that the disease has pig, bird and human flu components, suggesting that it not only targets human hosts, but pigs and perhaps birds as well. From the CDC’s recent press briefing:

We know so far that the viruses contain genetic pieces from four different virus sources. This is unusual. The first is our North American swine influenza viruses. North American avian influenza viruses, human influenza viruses and swine influenza viruses found in Asia and Europe.

That particular genetic combination of swine influenza virus segments has not been recognized before in the U.S. or elsewhere. Of course, we are doing more testing now and looking more aggressively for unusual influenza strains. So we haven’t seen this strain before but we haven’t been looking as intensively as we are these days.

The viruses are resistant to amantadine and rimantadine anti-viral drugs but they are sensitive or susceptible to oseltamivir and zanamivir, the newer anti-viral drugs for flu. And at this time we don’t know exactly how people got the virus. None of the patients have had direct contact with pigs.

You can get swine influenza without direct contact but it’s a bit more unusual. And we believe at this point that human-to-human spread is occurring. That’s unusual.

Russia has already halted pork imports from Mexico and several U.S. states. In my post yesterday, I mentioned the potential impact of the flu to hog farmers. If birds become infected as well, the U.S. Government might also slaughter chickens and turkeys throughout the country.

Mexican Flu Pandemic Threatens U.S.

 Uncategorized  Comments Off on Mexican Flu Pandemic Threatens U.S.
Apr 252009
 

A new, hybrid flu strain containing pig, avian and human flu components has struck Mexico, killing up to 68 and sickening at least 1,000 others. Given Mexico’s impoverished third-world status and with the country currently coming apart at the seams, the real number of infections is probably at least a hundred times higher.

The infection count north of the border continues to rise with two confirmed cases in Kansas to add to the nine already identified in Texas and California. A private Christian school in Queens, New York is suspected to have been hit by at least eight additional cases.

The two Texas flu victims originate from San Antonio, but we are aware of several severe flu cases in Austin from early this month that have the hallmarks of the new, deadly strain. In all likelihood, the Mexican Flu has probably already established a firm foothold in America with hundreds of unreported or misidentified cases.

The Mexican Flu has unusual characteristics that suggest that it might be man-made. The timing of the Mexican Flu outbreak also follows closely behind the recent distribution of a Baxter flu vaccine contaminated with live avian flu virus.

With America already under siege from a carefully constructed economic attack, a manufactured flu pandemic would serve as the second prong of an offensive made to weaken and then pacify our country. I anticipated this tactic in my predictions for 2009.

If indeed the Mexican Flu is the second stage of the globalists’ war against the U.S., the fear and chaos the corporate media nurtures will probably be leveraged to further the NAIS initiative and serve as an excuse to slaughter hog populations throughout the country. This will decimate many small farmers. Along with new, so-called “Food Safety” measures, these steps will allow the federal government, now largely a proxy for globalist interests, to seize control of the nation’s food supply.

Additionally, the crisis will allow the President’s emergency powers to be grossly extended, setting the stage for random and warrantless “health safety” searches of Americans and their homes. Travel regulation, gun seizures, forced relocation, forced inoculations and troop deployment for police actions throughout the country are also likely to follow.

In fact, Mexico has already enacted many of these measures. From the CNN article linked above:

Mexican President Felipe Calderon on Saturday issued an executive decree detailing emergency powers of the Ministry of Health, according to the president’s office.

The order gives the ministry with the authority to isolate sick patients, inspect travelers’ luggage and their vehicles and conduct house inspections, the statement said.

The government also has the authority to prevent public gatherings, shut down public venues and regulate air, sea and overland travel.

The main intention of this second prong of attack would be to deploy the control mechanisms that the globalists need to pacify uprisings in this country that might otherwise awaken and unite our very powerful nation against them.

If you are aware of extant Mexican Flu cases in the U.S., it is important that you quickly disseminate that information in order to defuse the groundswell of panic that the corporate media and the globalist controlled WHO appear to be nurturing. Moreover, take precautions and limit exposure to yourself and your family. The recent, suspect flu cases in Austin were severe, although everyone recovered.

Mar 122009
 

New York Times investigative reporter Seymour Hersh claims that former U.S. Vice President Dick Cheney ran an “executive assassination ring.” During a talk Tuesday at the University of Minnesota, Hersh described the Joint Special Operations Command, a group of assassins with no Congressional oversight that reported directly to Cheney. Apparently without the permission of anyone except the Bush Administration, the shadowy group secretly entered countries and killed people on lists presumably vetted, if not authored, by Cheney.

Hersh also maintains that the CIA was “very deeply involved” in illegal domestic investigations of people the sometimes sinister spy organization considered “enemies of the state” after 9/11.

Flu Vaccine contaminated with live avian flu virus

 Uncategorized  Comments Off on Flu Vaccine contaminated with live avian flu virus
Mar 052009
 

Was this part of a planned pandemic? Baxter, an American flu vaccine provider, shipped vaccine material tainted with the live H5N1 human variation of the avian flu virus. The “accidental” concoction is almost a perfect, deadly bioweapon.

Problems posting comments to this blog

 Uncategorized  Comments Off on Problems posting comments to this blog
Feb 252009
 

We are aware that there are persistent, ongoing problems posting comments to this blog. Google’s service is unreliable and often times out. We apologize for the trouble. I am extremely busy at work, so I will not likely be able to find a remedy anytime soon. If you need to send feedback that you can be certain I will get and, if you want, have posted to this blog, you can send it to my personal email address.

UPDATE: Problems continue to persist into 2010, but Internet Explorer and Opera browsers appear to work reliably. Comments can’t be published with Firefox and, ironically, Google’s own Chrome browser because the comment input section is not rendered properly.

Feb 242009
 

I’m having one of those weird nights again. I have been working a lot lately because we are testing a new part, but I was able to get home early tonight, a little after 9PM. I went to bed before 11, and woke up believing that it was morning, but only two hours had passed. Anyhow, I decided that I’ll blog a little until I get sleepy.

I bought an Asus G1S notebook about a year ago from Best Buy. I have been very happy with it, except that the notebook will not boot to the desktop on cold days.

All overclockers know that the key to achieving high frequencies is keeping the part cool. Consequently, producing aftermarket computer component thermal solutions has become a sizable industry unto itself.

Less well known is that many semiconductor devices have a distinct cold limit as well. In fact, some devices will not function unless they are hot. Semiconductor device manufacturers have to ensure that these “cold failures” do not occur within normal operating temperatures.

My Asus G1S will not boot to the desktop if the room temperature is lower than 70 degrees Fahrenheit. The failure occurs when the GPU is initialized for desktop compositing engines like Vista’s Aero or Compiz in Linux. Particularly for Aero, as soon as video initialization is attempted, the notebook either freezes or spontaneously reboots (Compiz might limp along for a few seconds before locking up).

If the room temperature is only a degree or two cooler than 70, the notebook will eventually warm up enough to reach the desktop, but if the ambient temperature is much lower than 65 then the notebook will enter a perpetual reboot cycle, if it doesn’t lock up first.

This is especially annoying when resuming from S3 or waking from hibernation, since rebooting totally defeats the timesaving aspects of these measures and could potentially corrupt files.

So apparently NVIDIA had, at least for a little while, a hole in their screening process that allowed for cold failure test escapes. Since the failures begin to manifest just below room temperature, it looks like NVIDIA did not test under cold conditions using a refrigerated thermal head, which is an odd screening omission.

I don’t want to imply that this is a widespread, serious issue, because there do not appear to be many cases of this failure in the wild. I’m simply recounting my personal and apparently rare experience with the NVIDIA GeForce 8600 GT in my Asus G1S.

Although I first witnessed the failure soon after I purchased the notebook, I procrastinated until my warranty almost expired before returning the G1S for repair. Best Buy provides warranty service for Asus, so after I backed up all of my data and reinstalled Vista from scratch, Kathy took my notebook to them about two weeks ago. The “Geek Squad” sent me an email last week reporting that they are currently awaiting parts. The only viable repairs that come to mind are either mainboard replacement or a new notebook. We’ll see. I miss the G1S since I used the notebook daily.

In the meantime, Kathy bought me an early birthday present — my birthday is not until March 22, but who am I to complain? — components for an AMD Phenom II system as an upgrade for my two year old Dell Core2Duo E6600 desktop.

Yeah, it’s not like my old reviewing days when companies would literally send me more free computer hardware than I knew what to do with. But Kathy bought a great collection of components including a Cooler Master HAF, a really fantastic case for a geek like me. The Phenom II also has exceeded my expectations, providing stout performance while consuming little power.

It’s smoothly running 64-bit Ubuntu 8.10. I’ve installed VirtualBox and will run XP from it and Vista from an e-SATA drive that I will take back and forth to work where I have an identical test system.
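
For anyone who wants to script a similar guest the way I plan to, here is a minimal sketch of how an XP virtual machine could be created with VirtualBox’s VBoxManage command-line tool, driven from Python. This is only an illustration, not my actual setup: the VM name, memory and disk sizes, and the install ISO path are hypothetical placeholders.

# Minimal sketch: creating an XP guest via VirtualBox's VBoxManage CLI.
# Assumptions: VirtualBox is installed and VBoxManage is on the PATH;
# the VM name, sizes and ISO path below are hypothetical placeholders.
import subprocess

def vbox(*args):
    # Run a VBoxManage subcommand and stop if it fails.
    subprocess.run(["VBoxManage", *args], check=True)

vm = "xp-guest"  # hypothetical VM name
vbox("createvm", "--name", vm, "--ostype", "WindowsXP", "--register")
vbox("modifyvm", vm, "--memory", "1024", "--vram", "64")
vbox("createhd", "--filename", vm + ".vdi", "--size", "20480")  # 20 GB virtual disk
vbox("storagectl", vm, "--name", "IDE", "--add", "ide")
vbox("storageattach", vm, "--storagectl", "IDE", "--port", "0", "--device", "0",
     "--type", "hdd", "--medium", vm + ".vdi")
vbox("storageattach", vm, "--storagectl", "IDE", "--port", "1", "--device", "0",
     "--type", "dvddrive", "--medium", "winxp.iso")  # hypothetical install ISO
vbox("startvm", vm)

The VirtualBox GUI performs the same steps interactively; scripting them just makes it easy to rebuild the guest on an identical test system.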

Speaking of NVIDIA, the economic apocalypse overtaking the world now is going to rapidly bring down broad swaths of familiar companies. The computing industry is already hurting. I made a prediction earlier this year that a major player will be on the brink of collapse by 2010. It won’t be NVIDIA.

Three computer hardware vendors that will still have a pulse come next year are Intel, NVIDIA and VIA. These are times of great hardship for AMD, I am afraid. But that is a story for another sleepless night.

I can’t believe the good people of Tulsa are tolerating this

 Uncategorized  Comments Off on I can’t believe the good people of Tulsa are tolerating this
Feb 202009
 

The TSA has implemented full-body scans in the Tulsa airport producing nude images of passengers, and the good people of Oklahoma are accepting the outrageous indignity passively. I expected riots. Tulsa, what has happened to you? Where is your heart?

Feb 152009
 

Kathy witnessed the widely reported “fireball” over Texas around 11AM this morning. She was northbound on I-35 near Georgetown and described a persistent cloud created by the hyperbolic-shaped, metallic object (this appearance was probably the result of a shockwave created by a meteor bouncing off the atmosphere). The cloud lingered for about an hour. She drove directly underneath the cloud, which appeared to be centered over Waco.