AMD beats Intel on price versus performance every single time.

UPDATE: I wrote a newer “AMD beats Intel” article with much better information and more relevant processors.

This was written April 1, 2012 and is not an April Fool’s joke. If you’re reading this years later for some reason, check to see if my reasoning still applies.

I walked into a CompUSA store to purchase a new machine with lots of cores for faster compilation of the Tritech Service System, among other daily tasks that require Linux and for which I didn’t have a decent home machine. Ever since I got Netflix, my Toshiba Satellite P775-S7215 (arguably the best laptop I’ve ever used, and certainly the most I’ve ever paid for one) has been stuck in Windows 7 so that I can watch things while I work. It’s also nice to have the Windows GUI running for Internet use and document reading while plunking around in Linux on the compiling machine, which I have named “Beast” because…well, it’s a beast…but I digress. So there I was in CompUSA, tossing items into the shopping cart, until I got to the CPUs, which sit behind a counter so an employee has to fetch them for you.

I asked what they had, and then said I was debating AMD vs. Intel. The employee behind the counter made the blanket statement, “Intel is always going to beat AMD.” I knew better, so I headed over to my favorite place to compare raw CPU performance and started asking him for CPU prices and names. Once price was taken into account, AMD beat Intel every time, contrary to what he had told me, and he looked as if he had lost a piece of his religion when I pointed it out. There’s a serious problem in the computer hobbyist world where blanket statements are made and repeated ad infinitum about a variety of things, and this AMD vs. Intel performance debate is the worst of them all.

Before I explain why I say Intel loses to AMD in every price-to-performance comparison, I’d like to mention an earlier hardware experience which illustrates that skepticism and Google-fu are extremely powerful tools. The WD20EARS 2TB 5900RPM SATA hard drives no longer have the excessive head unloading issue that was a severe and very common cause of failure within the first year of use in those particular Western Digital drives (and, I believe, some other early WD Green drives as well). I know this because, while staring at two of these drives I wanted, I looked the problem up, read that it was no longer present in the newer series of WD20EARS drives, bought them, and used smartmontools in Linux to CHECK THE HEAD UNLOAD COUNT under a variety of usage scenarios. The count didn’t exceed 100 unloads within a week, and that put the issue to rest for me. (The approximate unload count needed for a drive to start failing is 300,000, and at 100 per week it would take 3,000 weeks to get there.) I got two 2TB hard drives for $80 before the Thailand flooding happened, and I don’t have to worry about a manufacturer-caused premature failure occurring in them.
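
For the curious, here’s roughly how I checked, as a minimal sketch: it assumes smartmontools is installed, that the drive is at /dev/sda (adjust the path to suit), and that SMART attribute 193, Load_Cycle_Count, is the head unload counter, which it is on these drives.

    # Rough sketch: read the head unload count via smartctl and relate it to
    # the ~300,000-unload failure threshold mentioned above.
    import subprocess

    FAILURE_THRESHOLD = 300000  # approximate unload count where these drives start dying

    smart = subprocess.run(["smartctl", "-A", "/dev/sda"],
                           capture_output=True, text=True).stdout

    for line in smart.splitlines():
        if "Load_Cycle_Count" in line:
            unloads = int(line.split()[-1])  # RAW_VALUE is the last column
            print("Head unloads so far:", unloads)
            # At the ~100 unloads/week I observed, reaching the threshold
            # would take roughly 300,000 / 100 = 3,000 weeks.
            print("Weeks to threshold at 100/week:", FAILURE_THRESHOLD // 100)
            break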

On to the meat of this discussion. My methodology is extremely simple: go to a website such as Newegg, pull up CPUs that are the same price (or very nearly so), and compare them at cpubenchmark.net. If you’d like to give each chip a price-to-performance score so you can compare across prices, divide the CPU benchmark score by the price, then multiply by 100 (otherwise you’ll get LOTS of decimal places).

Let’s see how this works out in real-world terms. As of April 1, 2012, the price of an AMD Phenom II X6 1045T processor at Newegg is $149.99, while the best Core i3 available at Newegg (the dual-core Core i3-2130) is also $149.99. There are two other Core i3 CPUs at that price, but they are slower or are first-generation i3s, and any savvy buyer will get the best bang for the buck, so those are ignored. Why not an i5 or i7? It’s not an apples-to-apples comparison to put a Phenom II X6 against an i5 or i7, not because of some notion of “CPU generation,” but because you can’t even get a Core i5 desktop CPU at Newegg for less than $179.99, so there’s simply no i5 or better in the Phenom II X6 price range. Keep in mind that I’m also justifying a personal purchase that fit a personal budget (mine was a 1035T for $130), and I put the price difference toward getting 16GB of RAM instead. If you have a higher budget, you’d need to compare against a better AMD CPU, which we’ll do in a minute. So if we perform the price-to-performance calculation described above, what do we come up with for these CPUs? We’ll also include the cheapest available i5, which on a price-to-performance scale is also beaten by the selected Phenom II X6.

AMD Phenom II X6 1045T Thuban 2.7GHz: 3355

Intel Core i3-2130 Sandy Bridge 3.4GHz: 2942

Intel Core i5-2300 Sandy Bridge 2.8GHz: 3130

So in terms of price-to-performance (which most of us refer to as “bang for the buck”), the AMD Phenom II X6 stomps both the i3 and i5 chips closest to its price. (Interestingly enough, we also see that the i5 is a much better value than the i3, even though both are newer Sandy Bridge chips.) Let’s look at the new AMD FX chips that some of my friends have been raving about (and building gaming machines with) to see how they compare against the best possible Intel offering at the same price…

AMD FX-8120 Zambezi 3.1GHz ($189.99): 3743

Intel Core i5-2400 Sandy Bridge 3.1GHz ($189.99): 3222

The AMD FX chip pummels the Core i5 at the same price point, and even my Phenom II X6 fails to be “worth it” compared to the FX-8120. If I weren’t on a budget, I would have gone for the FX-8120 instead. Note that even though the i5-2400 is the best Intel chip in this comparison so far, it still scores 133 points lower than the Phenom II X6; higher numbers mean more value for the price. Let’s do a few more comparisons against CPUs that I might be interested in if I were building a high-performance box with a higher budget, such as the awesome i7-2600K, just to see where the numbers fall.

Intel Core i7-2600K Sandy Bridge 3.4GHz ($324.99): 2799

Intel Core i7-3960X Extreme Edition Sandy Bridge-E 3.3GHz ($1049.99): 1342

AMD FX-8150 Zambezi 3.6GHz ($249.99): 3307

I’ve gathered all of these numbers into a chart to summarize the point of this article. I think the chart speaks for itself. I also invite you to do your own math and draw your own conclusions. Feel free to leave a comment as well!
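
If you’d like to script that math rather than punch it into a calculator, here’s a minimal sketch. The benchmark scores and prices below are placeholders, so substitute the current cpubenchmark.net numbers and Newegg prices for whatever chips you’re comparing.

    # Price-to-performance ("bang for the buck") score: benchmark / price * 100.
    def value_score(benchmark_score, price):
        return benchmark_score / price * 100

    # Placeholder values for illustration only -- look up real benchmark scores
    # and current street prices before drawing any conclusions.
    cpus = {
        "Example CPU A": (5000, 149.99),
        "Example CPU B": (4400, 149.99),
        "Example CPU C": (6000, 189.99),
    }

    for name, (score, price) in cpus.items():
        print(f"{name}: {value_score(score, price):.0f}")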

15 thoughts on “AMD beats Intel on price versus performance every single time.”

  1. Unfortunately, Passmark doesn’t represent real-world anything. In many disciplines, AMD will be less appealing from a price/performance standpoint. In some, AMD will be better, yes. The point is, your methodology is flawed because you have assumed Passmark is a perfect representation of performance, and it is not.

    1. First of all, your argument is very weak, as it doesn’t have any supporting evidence. Passmark is indeed a synthetic benchmark. However, the utility of a processor varies depending on the type of work being performed, and processor purchases should reflect that workload. If you’re shuffling massive data structures around in memory and doing very few heavy calculations, of course a huge set of caches will make a bigger difference than faster SSE instruction execution, for example. Likewise, if thousands of threads are being run with very short timeslices, context switching speed is drastically more important than raw execution speed. This is why a Pentium 4 at 2.2 GHz isn’t nearly as fast relative to a Pentium III-M 1.2 GHz as it “should be,” even though the P4 is a newer microarchitecture. In fact, the two chips I just mentioned share a Passmark score of 278, and I literally reinstalled Windows XP on systems based on both of them just two days ago. Despite the P4 using DDR SDRAM, it still “feels” no faster than the laptop with the P-III. While a Passmark score doesn’t guarantee an apples-to-apples comparison for all chips under all types of workload, it is a good number to work with as a general comparison. Your argument against Passmark doesn’t invalidate it. It is up to the reader to decide what specific factors in a processor suit the workload, but that introduces subjectivity; you are welcome to write your own response analyzing potential workloads and the aspects of your favorite processor that make it a better choice for them. Or you can show “real world benchmarks” that support whatever position you are taking (you don’t explicitly state one). Otherwise, using Passmark as the basis for the assertion that Intel doesn’t offer as much performance per dollar as AMD is sufficient and perfectly valid.

  2. Actually, *your* original argument lacks evidence, as you show absolutely no applications that mirror Passmark’s performance.

    Any real benchmark suite shows that every app performs differently, based on whether it’s multi-threaded or not, whether it depends on cache or on clocks, or whether it favors a specific architecture. You want supporting evidence? Look at any website’s CPU review. You want specifics? Here’s one: http://www.tomshardware.com/reviews/gaming-fx-pentium-apu-benchmark,3120-10.html

    It’s silly to suggest an argument is weak without demonstrating otherwise. I simply pointed out the flaw in your axiom; you never even responded to that. You simply listed more invalid synthetic data.

    1. Evidently you failed to read the part where I discuss that CPU suitability is always tied to the workload being handled by that CPU. Synthetic benchmarks measure the CPU’s raw performance, whereas benchmarks such as the gaming benchmark you provided are actually less suitable for general-purpose CPU evaluation. Why? Because gaming relies on factors that cannot possibly be normalized across processors, AND gaming is only one type of workload. A gaming benchmark can fluctuate wildly depending on the chipset in use, the exact memory class, timings, and channel configuration, the PCI/PCIe devices in the system, the type and quantity of network traffic that might be arriving at any given time, the bandwidth and mechanical latencies of the storage subsystem, the OS and software updates that have been applied, the versions of the hardware drivers in use, and the GPU configuration and timing. If you play games all the time, perhaps a gaming benchmark means something to you…but it means nothing whatsoever to me. I have personally seen a Sony VAIO laptop next to an HP Pavilion laptop, with identical specifications, running identical hard drive testing software, and observed the Sony streaming data off the hard drive at about 35MB/sec while the HP with the same capacity drive streamed it at 85MB/sec, even though the CPU and storage interface type were identical. There are variables at play in computing benchmarks that aren’t tied directly to the CPU in use, or the hard drive in use, or whatever other part you would like to benchmark. Passmark’s CPU tests, from my understanding, run benchmark code in a tight loop on the processor cores so that the results aren’t affected by the memory bandwidth available, running largely from CPU caches and minimizing the performance effects of the other hardware being used.

      I have no need to demonstrate to anyone that Passmark is valid; you challenged its validity and have not presented an argument that supports that assertion. In fact, you use logically fallacious statements such as “any real benchmark suite does this and that,” implying that benchmarks which do not fit your specified criteria are somehow “not real.” That’s actually something I can agree with if you’re benchmarking for the workload you plan to perform. Your link is for gaming performance only, and thus can suffer from the factors I already mentioned (in fact, I would argue that gaming benchmarks should be aimed at a specific game running on a specific complete system to be valid, as the CPU is only one factor in gaming performance and it’s totally invalid to assign all responsibility for lower gaming performance to the processor in use).

      The bottom line is that even if Passmark isn’t a “real world” benchmark at all, the uses of a system are so varied that it’s about the only useful GENERAL PURPOSE measurement you can find. Why? Consider the possible workloads: gaming vs. database server vs. storage server vs. domain controller vs. software compilation vs. Blu-ray watching vs. Internet browsing vs. video editing vs. image editing…how do you propose to compare ALL of these workloads when evaluating which CPU to purchase? MySQL isn’t a game; neither is Adobe Premiere. Gaming is a useless benchmark if you’re not gaming.

      In summary: Passmark tests raw CPU performance, and if you’re buying a CPU for a wide variety of applications (general purpose, not for very specific workloads) then Passmark is a useful metric. If you’re buying for something specific, then (A) Passmark may not be suitable for your intended workload, but also (B) the entire system must be considered when benchmarking for a particular workload, so the CPU isn’t fully responsible in the first place. I’ll leave you with a quote from the Wikipedia article for Benchmark (computing) and the readers can decide for themselves:

      “Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this by specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. Whilst application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device.”

  3. You assumed it was a viable metric before I challenged its validity, and you still haven’t defended it except to say you don’t ‘like’ the example I supplied.

    OK, here’s another non-gaming example:
    http://www.tomshardware.co.uk/fx-overclock-crossfire-ssd,review-32342-8.html

    If you like I can supply hundreds. If you prefer, you can supply a particular AMD CPU model you think is priced similarly to a particular Intel model, and I can show you how the Passmark example falls apart in actual use from application to application.

    1. Unfortunately, that example might be valid today, but it isn’t necessarily going to be valid tomorrow. You may not be aware of this, but the AMD FX chips present their paired “cores” to the operating system as actual full cores rather than SMT cores. This means that Windows 7 doesn’t schedule threads properly on the FX chip: it assumes the FX is a conventional 8-core CPU and places threads accordingly. Because the scheduler isn’t FX-aware yet, threads don’t land on the modules appropriately, particularly since the Turbo Core functionality is disabled if more than two “modules” are active at once. Windows 7 will need to be updated to take full advantage of the FX chip. I agree that FX chips perform poorly on stock Windows 7 installs due to the exotic core pairing and the OS not knowing how to use it properly, but that also means that most real-world benchmarks will make the FX seem less powerful than it actually is.

      Here’s the thing, though: I take the time to understand CPUs before I run off and buy them, and I didn’t purchase the FX chip either; I went with a Phenom II X6 1035T. Just in case you think I’m biased, consider that I’m writing this on my Intel Core i7-2630QM powered laptop, which I love and recommend to everyone who can afford it; I even posted a video to YouTube of it compiling software AMAZINGLY fast. I have no problem with Intel chips, and I know AMD doesn’t really make a chip that matches the desktop i7. My entire point is that you always get more raw CPU power per dollar with AMD than you do with Intel. I do appreciate you posting that link to a real-world benchmark. It illustrates quite well why my points about real-world tasks and the many factors that can affect their performance are important. It doesn’t really make Passmark look bad, though; Passmark isn’t AAC encoding, nor does it suffer from Windows 7’s FX scheduling “bug” (I’m aware that it’s not a true bug, but rather a symptom of new designs being run on old OS code).

  4. I see what you’re saying, but I’m afraid we’ll have to agree to disagree as to the validity of using a synthetic such as Passmark to ‘prove’ that a CPU is a price/performance leader or not.

    IMHO, price/performance is ENTIRELY dependent on the real-world application you plan to use it for. If you’re putting together a gaming PC, Passmark doesn’t help you. If you’re putting together a PC for use with any application that runs best on an i3 or i5 vs. a similarly priced Phenom II or FX (and there’s a significant percentage of those kinds of applications; I suspect a majority, based on the tests I’ve seen and performed), Passmark doesn’t represent best value.

    1. That’s fine; like I said, people need to research the processor’s suitability for the workload being performed before purchasing for that workload.

  5. You forgot to include the cost of operating the processors. AMD loses by a landslide there every time. In the long run Intel wins again.

    1. The Phenom II X6 1045T has a 95W TDP. The Intel Core i5-2300 has a 95W TDP. The TDP is likewise the same for the i5-2400 and the AMD FX-8120. The i3 chips have a lower TDP, but they also aren’t in the same general performance class as the other chips discussed. I don’t intend this as an offensive remark, but your comment shows some serious ignorance. Also consider that the scope of the article did not extend to total cost of ownership, and that other components in the system influence power consumption as well: a high-end video card’s GPU will burn off as many watts as a CPU, or more.
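
      If you really want to put numbers on the “cost to operate” question, it’s easy to estimate from TDP. Here’s a quick sketch; keep in mind that TDP is a worst-case proxy for power draw rather than a measured figure, and the hours per day and $/kWh values are assumptions you should replace with your own.

        # Rough yearly electricity cost estimated from TDP alone (a ceiling, not a
        # measured draw). Hours/day and $/kWh below are assumed values.
        def yearly_cost(tdp_watts, hours_per_day=8, dollars_per_kwh=0.12):
            kwh_per_year = tdp_watts / 1000 * hours_per_day * 365
            return kwh_per_year * dollars_per_kwh

        # The Phenom II X6 1045T and the Core i5-2300 are both 95W TDP parts,
        # so this estimate comes out identical for the two.
        print(f"95W part: ${yearly_cost(95):.2f} per year")
        print(f"65W part: ${yearly_cost(65):.2f} per year")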

  6. The problem with using Passmark as a point of reference for this is that it uses all cores, and it uses them all well. Most software does not. This is the one thing AMD excels at right now, because although AMD’s technology is inferior, they are simply using more of their slow cores in an attempt to make up for their shortcomings. Also, there are many more things to consider than just what Passmark tests (total integer throughput). Passmark doesn’t tell you that Phenom II has horrible floating-point performance compared to Bulldozer, Sandy Bridge, and even Nehalem.

    Basically, anyone who does a lot of AES encryption would not be able to use Phenom II because it would be a minuscule fraction of the newer architectures’ performance (the new chips have AES-NI and similar instructions that speed up AES throughput incredibly).

    There is even more that Passmark does not tell you. For example, almost all programs use either 1, 2, or 4 threads. Quad-threaded software is on the rise. Only specific software uses more than 4 threads, such as encoding and media editing; basically, it’s mostly professional media software and other enterprise/corporate/business software. Oh, and folding. The average user does not do these things. Hell, even most power users don’t do much of this. A lot of people do, but most do not. Your post here implies that AMD has more value in most work, because you never explicitly state otherwise in your review.

    With most software only using up to four threads, the i3s and i5s hammer AMD. Single- and dual-threaded software on a $130 i3 will run about 30-50% faster than on a $250 or so FX-8150. Of course, quad-threaded software will run faster on the $110 FX-4100 than on the $130 i3s, but the two are so close that it isn’t enough of a difference. However, the i3’s ~50% lead over the 4100 in single- and dual-threaded software negates this minor highly/moderately threaded performance advantage. The i5s beat everything AMD has by huge margins too, for anything that doesn’t use more than 4 threads.

    Considering that a lot of popular software is still single-threaded (Firefox comes to mind as a prime example) or at least has one or two heavy threads and maybe another one or two light threads, the i3s can still beat out the FX-4100 fairly often. For example, despite World of Warcraft supposedly supporting four threads, the i3s will beat the FXs in it due to the game’s unbalanced usage of those threads. The same is true for many other games and other software as well.

    You also greatly exaggerate the variability of gaming performance. Just to make games a good point of reference for software that uses the same number of threads as any particular game, here are a few very simple ways to eliminate variables. One, use a moderate resolution such as 720p, but a high-end graphics card (a single-GPU card, not a multi-GPU setup). Two, use an SSD for the OS and the game being benchmarked. Three, optimize your network settings (easy to do with freeware; it only takes a few minutes) to avoid the very minor bottleneck that may be imposed by a slow internet connection. Four, use 1866MHz CL9 memory or better. Considering that you can get 8GB of such memory for $50 nowadays, there is little reason to skimp on this if you have an AMD CPU or do a lot of work with programs that are known to be very memory dependent (such as archiving and rendering).

    Five, having other PCI/PCIe cards does not matter. Having other PCI cards mattered if you had a PCI video card instead of a PCIe or AGP video card, and even then, it only mattered if the two (or more) PCI devices were on the same channel (PCI shares bandwidth between all devices on the bus, so some motherboards had multiple buses to alleviate the problem of multiple devices sharing an already low-bandwidth, high-latency bus). PCIe does not share a single bus between all devices; it has many lanes instead. Some devices, such as graphics cards and some RAID controllers or PCIe SSDs, use multiple lanes per card; others, like USB or low-end SATA controllers, only use one per card. Because each lane is a separate bus and not shared between multiple devices, latency is fairly constant. Having multiple PCIe cards only matters when some of the PCIe lanes normally used by the graphics card are routed to another PCIe card (maybe you have a new PCIe SSD or something like that; the point is that it’s highly unlikely) that needs more than the four lanes provided by the chipset. Even then, the performance drop is negligible (your card basically goes from x16 to x8, even though x4 is enough for the vast majority of the card’s performance). Basically, no big deal there, and no wide performance variation either. Also, if you have an AGP or PCIe graphics card like most people, then PCI cards should have zero effect on gaming performance and the performance of pretty much anything else, unless those PCI cards are eating up CPU time like candy.

    Basically, gaming performance mainly depends on only two things: the CPU+RAM and graphics subsystems. RAM capacity is far more important than RAM performance, especially with Intel CPUs, whose memory controllers extract more bandwidth per unit of memory frequency and have lower latency than AMD’s. Once you have 8GB, there will be no gains from more RAM. Once you have more graphics performance than the CPU can handle, there will be no more gains from improving the graphics subsystem. Once your CPU is enough for your graphics card, there will be no more gains from a faster CPU. To isolate CPU performance well, you basically use a huge single-card graphics system at a very non-demanding setting such as 720p, maybe not even with maxed-out settings, and get 8GB of RAM. To get a full picture with AMD CPUs, use 1866MHz CL9 memory to make sure that memory isn’t bottlenecking the CPU due to AMD’s poor memory controller design. If a game that depends at all on internet performance is to be benchmarked, then using freeware network traffic optimization will keep the network from becoming a bottleneck. The benchmark will then be purely the CPU’s performance in that game. Usually, it will be representative of performance even outside of gaming with programs that have similar thread counts and similarly balanced thread utilization.

    Also worth considering is that increased core counts bring diminishing returns in highly threaded performance. An excellent way to show this is to look at graphics cards, where the phenomenon shows up in much greater detail. The Radeon 7950, despite having almost 50% more cores and more than 50% more memory bandwidth, is about as fast as the Radeon 7870. Why? It’s simple: regardless of core count, increasing clock frequency increases performance almost linearly. The 7950’s extra cores only equate to roughly 65-75% scaling efficiency, in contrast to the near-100% scaling efficiency of the 7870’s higher clock speed.

    Basically, at the same clock frequency, the 7950 should be about 65-75% faster than the Radeon 7870. If you reduce its memory bandwidth to more completely isolate the raw shader core count advantage, that may go down a little more, to more like 60-70% or 55-65%. Another example from the GCN Radeon cards is the negligible difference between the 7950 and the 7970. The only difference between the two is 256 shader cores and clock frequencies. Overclock the 7950 to the 7970’s frequencies and there’s a less than 5% difference between the two, and that’s not even noticeable. Basically, the 7970 is only for people who want to pay another 20% more cash for less than 5% more performance. Now, because this scaling drops off with increased core counts, it’s much more linear at lower core counts. However, it does drop. The FX-8150 is not exactly twice as fast as the 4100, but it is close to being twice as fast because their core counts are too low for the scaling to fall far. However, this will get progressively worse for AMD if they don’t improve their per-core performance.

    Memory performance, internet/network traffic, the number of PCI/PCIe cards: all of this has little to no effect on gaming performance. Memory frequency has a minor effect on the AMD CPUs because of their bandwidth starvation, but even then, it’s a fairly minor difference for any CPU.

    I’m not saying that your info is useless or anything like that (far from it), but that it only applies to people who do work that takes advantage of as many cores as you can throw at it. The problem with that is that many people who do that sort of work won’t settle for AMD performance; they want six-core i7 performance or even better, because the more they can do in a given time frame, the more money they make.

    1. This is the kind of reply that I love. I can’t really even begin to respond to it other than to applaud you and thank you for the time you spent on it. The only relevant response I have time to write is that I run a single heavy workload: program compilation with GCC. When I wrote this article, that’s the workload I had in mind. GCC can use all cores with a parallel make, and it definitely uses them hard, but floating-point performance isn’t important to compilation, so that shortcoming isn’t relevant to what I do. The Phenom II series is my compilation workhorse of choice because I can build a system around it cheaply, yet it stomps out compiled code at an amazing rate. That’s why Passmark is relevant for me: I do in fact use all cores, and use them to the fullest, but I can’t afford to build i7 EE systems just to build programs faster. The FX architectural decision to make what I call “false cores” that share some components has a negative performance impact on compilation, since there is less cache per “core” pipeline than a true core would possess. All of these things combined are my justification for opting for a $130 Phenom II X6 1035T, plus it was the only Phenom II X6 available at retail locally. The machine compiles far faster than my AMD Phenom II X4 965 server, and if I run distcc over my gigabit LAN the speed is obscene. I don’t really even NEED more than this X6!

      However, I’m well aware that my comparison isn’t right for all workloads. We both agree that Passmark tests raw core performance fairly well, and in my workload, that’s very important. You did a fantastic job of explaining why people should buy a processor for the programs they intend to run, and ultimately that’s what matters.

      I would love to see something like Linux kernel compilation benchmarks in CPU comparison reviews (a rough sketch of what I mean follows below); that would be more relevant for me than gaming. The only place I seem to see things like that is Phoronix. For the record, after I got my fabulous Toshiba Satellite P775-S7215 at Best Buy, with its Core i7-2630QM, I almost immediately posted a video of it compiling something huge (I think it was glibc), and it’s obnoxiously fast as well. One more item of note that doesn’t fit anywhere else: I also use very slow laptops, such as one based on a Transmeta Crusoe TM5800 and a Sylvania G netbook with a 1.2GHz VIA C7-M. I like to run Linux on old machines like these, and I’m often surprised at how usable well-written lightweight software is on a clunky Crusoe. They are prime compile targets for my Linux distribution.
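
      Something as simple as the following would do it for my purposes. This is just a sketch: it assumes an already-configured kernel tree (or any other large project) in the current directory and GNU make on the PATH, and it simply times a parallel build across every core the machine has.

        # Time a parallel build using every core the machine has.
        import os, subprocess, time

        jobs = os.cpu_count() or 1
        subprocess.run(["make", "clean"], check=True)

        start = time.time()
        subprocess.run(["make", "-j" + str(jobs)], check=True)
        print("Built with -j%d in %.1f seconds" % (jobs, time.time() - start))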

      1. Glad that you enjoyed it. I’ve also got some slower machines that I play with Linux and other OSs on. For example, there is an OS called ReactOS that is more or less compatible with Windows despite being open source. It isn’t Linux, but was coded specifically so people could have a free, open-source OS that is mostly compatible with Windows. It is being developed slowly due to a current lack of developers (there was a fight between previous devs over some of the code; some of them seem to have thought it was too similar to Windows code and violated M$ copyrights or something like that), but it’s still moving. It is expected to leave the alpha stage and become beta later in 2012, and might leave beta in early 2013.

        There are many other OSs kind of like it that are very different from Windows and Linux because they are neither. For example, there is also Haiku (ReactOS uses some code from the Haiku project, which is also moving a little slowly, but I think it’s moving faster than ReactOS) and a plethora of Linux distributions that I look at. Mostly, I’ve been trying out a bunch of web browsers on my Windows laptop lately. It’s a decent Gateway machine with a 15-inch screen, a Turion 64 X2 TL-60 dual core (the mobile variant of the Athlon 64 X2) at 2GHz (stock), Radeon X1270 integrated graphics (not as good as even the integrated graphics on the 790 motherboards, let alone Intel’s integrated HD graphics and then the Llano graphics), only 2GB of DDR2 533MHz, and a 5400RPM 2.5″ 250GB hard drive. I decided that it was too slow to run a virtual machine on when it was running Windows Vista, so I fixed that up a little: I got Windows Server 2008 R2 x64 off of M$’s website (I might not pay for some things, but I do so in a legal way, because I refuse to go as far as stealing or cracking software just to have a better machine) and optimized the machine until even it could comfortably run a virtual machine with 1GB of RAM dedicated to it. It was a fun little project of mine.

        My slow laptop is an old Dell Inspiron 1100 with a 2.4GHz P4 processor, 256MB of RAM, the worst graphics imaginable, and a tiny 40GB hard drive that keeps my boot time measured in minutes instead of seconds (my Server 2008 R2 laptop boots very fast considering that it has a 5400RPM drive; the Dell does not). I also have the desktop that I built with the Phenom II X6 1090T BE (yeah, I’m occasionally a hypocrite), overclocked to ~4.016GHz (251MHz BCLK * 16 CPU multiplier), as my performance machine, because I like to multitask and do some well-threaded work such as archiving the many OS archives that I download.

        As for the web browsers, I’ve tried (besides the obvious FF, Chrome, Opera, and IE) Lunascape Orion (extremely memory efficient, with multiple rendering engines; it’s compatible with Lunascape addons, FF addons, and either IE or Chrome addons, or both. I don’t remember exactly how it handles them because I almost always use it in Gecko mode for FF addon support, and it stays below 300MB of RAM usage even with very large tab counts. Previous versions did not do this, so it has improved heavily since last year), Comodo Dragon (based on Chromium and written by the fairly well-known security company Comodo; it’s supposedly more secure than Chrome, and I think it might be, because it gives me more security options than Chrome does and it doesn’t seem to track my web usage like Chrome did, both of which are claimed features too), Pale Moon (based on FF, but performance-optimized rather than security-optimized like Dragon; it can improve browsing performance by over 30% in many cases. You might not notice the difference on faster machines, but it’s very obvious on slower ones. It also has a slightly different UI from FF, and Dragon has a slightly different UI from Chrome and Chromium; wouldn’t it suck if they were all the same?), SRWare Iron (a little better than Chrome and also based on Chromium like Dragon, but I liked Dragon better because it has more security features, such as using Comodo’s DNS servers, which might improve speed and security; I’ll need to test their effect on performance some time), and several others.

        For RAM-tight machines, Lunascape is great. It also has very wide compatibility with websites, even those that will only run in IE (at least, it works for me). There are also browsers such as Minibrowser (based on the Trident engine present in IE, so I’m a little wary of it, but it’s installed and occasionally used on my Dell nonetheless) that use even less RAM than Lunascape when you aren’t using many tabs. Well, I hope you liked this post too.

  7. Anyone who gets all whiny and posts “your methodology is flawed because you have assumed passmark is a perfect representation of performance” has never truly seen the performance of an AMD CPU relative to its price. AMD makes some of the best, most reliable chips out there. No performance benchmark is perfect or really puts the CPU under real-world conditions. Only the real-world user can decide if the AMD is better than the Intel. I’ve been building for a long time and I’ve used both AMD and Intel; sometimes AMD is crap, whereas other times it’s Intel. Thank God for the competition. Computer geeks will always have the clique that says “Intel is much better, because at 1000 dollars more I can play Warcraft faster than any other AMD loser online.”
