Ridiculous speed comparisons

August 27th, 2008 (10:14 pm)

I've been using my new Web tablet for programming—mostly in C++ lately. Tonight I checked in some work on my laptop, then checked it out to the tablet and did a clean build. The tablet took 12 minutes 52 seconds, compared to 3.2 seconds on the laptop.
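
(For anyone who wants to try the same comparison, it's just a clean build timed with the shell's time builtin; something like the sketch below, assuming a make-based tree where clean is the usual target name.)

    make clean            # start from nothing; 'clean' is the conventional target name
    time make -j 3        # on the laptop: two cores, three jobs
    time make             # on the tablet: single core, so no -j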

I'm kind of boggled about this, actually. I mean, obviously the tablet is going to be slow; it's a 400MHz ARM, compared to a 1.6GHz Core Duo (and I was running with make -j 3, to max out both cores). But clock speed and core count alone (a factor of 8) can't come close to explaining a difference this large (a factor of about 240). I suppose the laptop's RAM is faster, too, but that just reduces the memory bottleneck; it doesn't account for a gap like this.
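
(Putting numbers on both factors, taking the times above at face value:)

    12 min 52 s = 772 s;  772 s / 3.2 s ≈ 240
    (1600 MHz / 400 MHz) × 2 cores = 8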

I'm not sure which way the flash-versus-disk comparison should go; flash is generally supposed to be faster, but the SecureDigital interface on the handheld is slower than the SATA interface in the laptop. (It's a Class 6 card, which is supposed to max out at about 20MB/s, as opposed to SATA's 150MB/s.) Also, the laptop has 2GB of RAM, which makes for a pretty effective disk cache—I should try it out sometime after blowing through enough data to empty the cache. (And not run make depend first, since that reads all the source files.)
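
(If the laptop is running Linux, there's a more direct way to empty the cache than churning through unrelated data: the drop_caches knob, which has been in the kernel since 2.6.16. It needs root, and it only drops clean pages, hence the sync first. A sketch:)

    sync                                            # flush dirty pages so they can be dropped
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'  # 3 = page cache plus dentries and inodes
    make clean
    time make -j 3                                  # cold-cache build; don't run 'make depend' first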

Geek, geek.

Comments

Posted by: robertdfeinman (robertdfeinman)
Posted at: August 28th, 2008 01:32 pm (UTC)

You didn't mention how much RAM each machine has and what else may have been occupying some of it at the same time.

My only recent experience has been with Photoshop. If I run certain programs first and then run Photoshop (even after I quit the first ones), it runs much slower than if I reboot and run Photoshop by itself.

Photoshop does a lot of fancy swapping using its own memory management scheme, so I'm guessing that non-contiguous RAM makes a difference. I'm willing to bet that those who write compilers don't give much thought to the speed at which the compilers run, as opposed to the speed at which the compiled code runs. Different CPU design, OS version, etc. could all have an impact.

Posted by: Who, me? (metageek)
Posted at: August 28th, 2008 01:54 pm (UTC)
More details

> You didn't mention how much RAM each machine has

The laptop has 2GB (plus 1GB of swap, I think); the tablet has 128MB (plus 128MB of swap).

> and what else may have been occupying some of it at the same time.

Mmm, I didn't look at memory consumption, but it's a pretty safe bet the laptop wasn't swapping. The tablet might have been, but it would most likely have swapped out other processes (e.g., the Web browser), since I wasn't interacting with the tablet while the compiler ran.
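
(vmstat in a second terminal while the build runs would turn that guess into data; the si/so columns show swap-in/swap-out traffic, and anything consistently nonzero means real swapping. This assumes the usual procps tools are installed on both machines:)

    vmstat 1      # one report per second; watch the si/so (swap in/out) columns
    free -m       # quick snapshot of RAM, buffers/cache, and swap, in megabytes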

> Photoshop does a lot of fancy swapping using its own memory management scheme, so I'm guessing that non-contiguous RAM makes a difference.

No, it doesn't; that's what the "RA" in RAM means: random access. Photoshop has its own overlay system because that lets it use an algorithm that knows which parts of memory it's likely to need soon, and not swap them out; the kernel can't know that kind of thing. Plus, an overlay system can manage an amount of data too big to fit into an address space; on a 32-bit machine, the theoretical limit is 4GB, but the more common limit (depending on the OS) is 2GB or 3GB. A serious Photoshop user could easily hit that limit.

> I'm willing to bet that those who write compilers don't give much thought to the speed at which the compilers run, as opposed to the speed at which the compiled code runs.

Oh, they do—although they probably do put it at a lower priority. A lot of compiler research is in speeding up the optimizer (which is the most CPU-intensive section).
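
(Easy to check on any machine: compile the same file with and without optimization and compare the times. foo.cpp here is just a stand-in for some reasonably meaty source file.)

    time g++ -O0 -c foo.cpp    # parsing and code generation, optimizer mostly off
    time g++ -O2 -c foo.cpp    # same file with the optimizer on; usually noticeably slower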

Posted by: Justin du Coeur (jducoeur)
Posted at: August 28th, 2008 06:13 pm (UTC)
Re: More details

> Mmm, I didn't look at memory consumption, but it's a pretty safe bet the laptop wasn't swapping. The tablet might have been, but it would most likely have swapped out other processes (e.g., the Web browser), since I wasn't interacting with the tablet while the compiler ran.

Sure, but a typical build involves multiple processes all by itself (compiler/linker/make/etc.), and 128MB is not much memory these days. So my *guess* would be that it got memory-constrained enough that the processes were swapping against each other as the build went from phase to phase, with several swaps per compiled source file. That could easily hose the system dramatically, especially compared to the laptop, whose only I/O was probably the data files themselves...
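
That guess is checkable, by the way: running the build under GNU time (the /usr/bin/time binary, not the shell builtin, and it may not be installed on the tablet) prints resource counters at the end, and the major-page-fault count is a decent proxy for this kind of thrashing. A sketch:

    /usr/bin/time -v make      # verbose resource report goes to stderr when make finishes
    # a large "Major (requiring I/O) page faults" count means pages were repeatedly
    # being pulled back in from disk or swap during the build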
