Computer Specs Discussion/Argument Thread

Agent23

Not one step back!
> Architecture matters more. It has always mattered more. The i7-980 was a 6c/12t CPU on 32 nm with a TDP of 130 W; the contemporary FX part on 32 nm was the FX-9590 at a 220 W TDP. The first was faster.
I am pretty sure that Intel had switched completely to 22 nm by the time the FX-8xxx and later series were coming out; also, the FX-8xxx and FX-9xxx had the edge where raw GHz were concerned.
Also, Intel managed to do Hyper-Threading before AMD, but AMD still had better single-core performance speed-wise.
 

Vyor

My influence grows!

Nope, the next series Intel launched was the 2000 series (Sandy Bridge), which was also on 32 nm.

And Intel's first Hyper-Threading-equipped CPU was one of the Xeon chips released in 2002 (which I can't actually find a datasheet for), and the desktop variant was the Pentium 4 3.06 GHz: Intel® Pentium® 4 Processor supporting HT Technology 3.06 GHz, 512K Cache, 533 MHz FSB - Product Specifications | Intel

Which was on 130 nm and competed against AMD's Athlon XP series, specifically the Barton core design that released a few months later, also on 130 nm: Ace's Hardware

As the benchmarks show, the Athlon XP chips and the Pentium 4 were about on par with one another, with the P4 taking the lead in most of the gaming tests and the Athlon XP taking slight leads in a few of the application tests.

Now, Hyper-Threading was later dropped with the Core 2 architecture, but it was added back with Nehalem on 45 nm: Intel® Xeon® Processor X7560 (24M Cache, 2.26 GHz, 6.40 GT/s Intel® QPI) - Product Specifications | Intel

The proffered link leads to the highest-end Intel server chip at the time: 8 cores/16 threads, 130 W, 2.26 GHz base clock... and even at such low clock speeds, it was still faster per core than its competing AMD chip, the Magny Cours-based Opteron 6176 SE: a 12-core/12-thread chip with a 137 W TDP and a 2.3 GHz base clock.
[attached benchmark chart: 22159.png]


Even multithreaded, a 6c/12t Xeon beat the 12c/12t chip, and hyperthreads only add around 20% in this benchmark.
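That comparison can be put into a toy model. The per-core speed ratio below is an assumption for illustration; the ~20% SMT uplift is the figure cited for the benchmark:

```python
# Toy model of the SMT comparison above: total throughput = cores x per-core
# speed, with Hyper-Threading adding ~20% (the figure cited for the benchmark).
# The per-core speed ratio is an assumption for illustration.

SMT_UPLIFT = 1.20  # ~20% from the second hardware thread per core

def chip_throughput(cores: int, per_core: float, smt: bool) -> float:
    return cores * per_core * (SMT_UPLIFT if smt else 1.0)

xeon_6c12t = chip_throughput(6, 1.7, smt=True)       # assumed 1.7x per-core lead
opteron_12c12t = chip_throughput(12, 1.0, smt=False)

print(xeon_6c12t > opteron_12c12t)  # fewer, faster cores can still win
```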

This performance disparity only grew when AMD launched the FX chips, since those went up against the next-gen Intel parts, which clocked higher and had higher IPC. The only things FX had over Magny Cours were clock speed and a newer instruction set; it even dropped IPC a bit!

So no, architecture drives 90% of power draw and performance. We can even see that today with the Radeon VII against the 6900 XT: same TDP, same node, almost 100% faster.

Then there's Zen 2 vs Zen 3, where Zen 3 clocked higher and added 20% IPC while pulling the same power, also on the same node!
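The clock-versus-IPC arithmetic running through this post can be sketched as a toy model. The relative-IPC numbers are invented for illustration, not measured:

```python
# Toy model of the clock-vs-IPC argument above.
# The relative-IPC numbers are invented for illustration, not measured.

def single_core_perf(freq_ghz: float, relative_ipc: float) -> float:
    """Single-core throughput ~ clock frequency x instructions per clock."""
    return freq_ghz * relative_ipc

# Assumed normalized IPC: the FX core well below the contemporary Intel core.
fx_9590 = single_core_perf(5.0, 0.6)   # 5 GHz, assumed IPC 0.6
intel_4g = single_core_perf(4.0, 1.0)  # 4 GHz, baseline IPC 1.0

print(fx_9590 < intel_4g)  # a higher clock does not save the lower-IPC core
```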
 

Agent23

Not one step back!
As an owner of an FX I remember quite well that it won the speed war for raw GHz per core.
At the time Hyper-Threading was a big thing, since it doubled the threads per physical core.

As to the benchmark you cite, I don't recall exactly what it targets, but I am pretty sure that single-core performance does not depend much on core count, and that at one point we hit a major bottleneck where every extra hertz of performance pushed the TDP up.

Then more and more cores began to be added.
 

Vyor

My influence grows!

MHz and GHz started running into diminishing returns when Dennard scaling died in the early 2000s. And single-core performance is a function of clock speed... and IPC, both of which are controlled by architecture. So while yes, the FX-9590 was the first ever 5 GHz CPU... it was still slower per core than Intel's 4 GHz parts because it had lower IPC.
 

Agent23

Not one step back!
Nope, wrong, a bunch of RISC CPUs hit 5 GHz years earlier.
Google IBM POWER.

As to instruction-level parallelism, well, it really depends on the workload, and the compiler.

Nice-in-theory approaches like that work nicely on paper, but a lot of designs that banked heavily on them, such as VLIW/EPIC, died.
 

Vyor

My influence grows!

IPC doesn't even have an L in it, unlike your perpetual one.

Instructions Per Clock

How much a CPU can do per clock cycle.

Now, the better term is performance per clock, or output per clock, but whatever. The point is that while the ability to extract ILP does help increase IPC, there's more to it than that: lower-latency caches, faster memory accesses, lower-latency execution engines, keeping more instructions in flight, etc., all increase IPC without increasing ILP.
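The cache-latency point can be shown with the textbook CPI model. All three inputs below are illustrative assumptions, not measurements:

```python
# Textbook CPI model: effective IPC = 1 / (base CPI + miss_rate * miss_penalty).
# Halving memory latency raises IPC with no change in extractable ILP.
# All three inputs are illustrative assumptions, not measurements.

def effective_ipc(base_cpi: float, miss_rate: float, miss_penalty: float) -> float:
    return 1.0 / (base_cpi + miss_rate * miss_penalty)

slow_cache = effective_ipc(0.5, 0.02, 200)  # 2% misses, 200-cycle penalty
fast_cache = effective_ipc(0.5, 0.02, 100)  # same code, halved memory latency

print(fast_cache > slow_cache)
```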

So no, you're wrong. Again.
 

Agent23

Not one step back!
5 GHz in 2008.
And that wasn't even their biggest beast; I think whatever powered their z Series mainframes hit 5 GHz earlier.
> IPC doesn't even have an L in it, unlike your perpetual one.
No A, unlike you, either. :ROFLMAO:
And yeah, the A that ends in tism.
> Instructions Per Clock
>
> How much a CPU can do per clock cycle.
>
> Now, the better term is Performance Per Clock, or Output Per Clock, but whatever. Point is that while the ability to extract ILP does help increase it there's more to it than that. Lower latency caches, faster memory accesses, lower latency execution engines, keeping track of multiple instructions in flight, etc all increase IPC without increasing ILP.
But which can run Crysis faster? :ROFLMAO:
You were the one going on about instructions per cycle.
Newsflash: most instructions are supposed to be atomic. Attempts to add multi-cycle and more complex instructions have usually led to underwhelming results; why do you think most high-end enterprise hardware, as well as the various ARM derivatives, opted for a RISC architecture?
Oh, yeah: compiler support, and underwhelming performance for the more complex instructions in real-world scenarios.
Oh, and the larger the cache, the less effective it is.
Which is not to say that caching cannot be improved, but overall, for general-purpose, multiprocess computing, having more GHz and more cores is king.
> So no, you're wrong. Again.
Lol, tell me, oh great electronics expert, have you heard of this thing called Ohm's law?
 

Vyor

My influence grows!
> But which can run Crysis faster. :ROFLMAO:

The i7-980 can run it faster than the FX-9590. The last Core 2 Quad might be able to as well, but don't quote me on that.

> Newsflash, most instructions are supposed to be atomic, attempts to add multicycle instructions and more complex instructions have usually led to underwhelming results, why do you think most high-end enterprise hardware as well as the various ARM derivatives opted for a RISC architecture?

Question: how many cycles do you think a simple division needs to finish?
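For reference on that question: an integer add retires in one cycle on mainstream x86 cores, while integer division has historically taken tens of cycles. A rough sketch (the cycle counts are ballpark assumptions, not vendor figures):

```python
# Ballpark latencies in cycles, to show that not every instruction is
# single-cycle even on a "fast" core. Figures are rough assumptions.
latency_cycles = {
    "add": 1,    # simple ALU op
    "imul": 3,   # integer multiply
    "idiv": 30,  # integer divide: tens of cycles on many x86 designs
}

print(latency_cycles["idiv"] // latency_cycles["add"])  # ~30x the cost of an add
```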

> Which is not to say that caching can ot be improved, but overall for general purpose, multiprocess computing haveing more GHz and more cores is king.

This is just wrong again. While having more cores and threads can help, yes, and having higher clock speeds is good... neither is the be-all and end-all. As an example, the 13900K has a higher core count than, and equivalent clock speeds to, the Zen 4-based 7950X.

In gaming they tie; in multithreaded applications they tie.

We see something similar in the previous gen too: the 5950X being faster than the 12900K in spite of similar clock speeds and the same core count.

Then we have Zen 2 vs Zen+, which had equal all-core clock speeds on the 8-core models, and... Zen 2 was faster.

> Lol, tell me, oh great electronics expet, have you heard of this thing called Ohm's law?

Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across those two points.

Are you referring to Moore's law, perchance? Which states that the number of transistors on a chip doubles roughly every two years?
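Moore's-law doubling, as stated there, compounds like this:

```python
# Moore's law as stated above: transistor counts double roughly every two years.
def transistors(start_count: int, years: float) -> float:
    return start_count * 2 ** (years / 2)

print(transistors(1_000_000, 10))  # five doublings in ten years -> 32x
```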
 

Agent23

Not one step back!

> Are you referring to moores law per chance? Which states that every 2 years the amount of transistors on a silicon wafer will double?
>Tries to talk about electronics.
>Has no idea what Ohm's law is.
No, Ohm's law.
It's older, and more general, since it deals with things like current, voltage, and resistance.
 

Agent23

Not one step back!
> Ohm's law breaks down when you get into the nanoscopic.
Ohm's law holds down to atomic scale – Physics World

The result is a chain of phosphorus atoms embedded inside a silicon crystal – effectively an atomic wire. The team found that the resistivity of these wires was constant right down to the atomic scale. This means that the resistance of such a wire is proportional to its length and inversely proportional to its area, just as you would expect from Ohm's law.
:sneaky:
 

Vyor

My influence grows!

An article from 2012, amazing. From researchers who don't actually know what they're talking about, even!

"No quantum effects" my fat ass; quantum tunneling is a serious risk!

Thankfully, people much more intelligent than those morons actually wrote a paper two years earlier which explains why Ohm's law does, in fact, break down: Microcircuit Modeling and Simulation Beyond Ohm’s Law

Meanwhile, college circuit-design curricula say the same thing: https://www.allaboutcircuits.com/textbook/direct-current/chpt-2/nonlinear-conduction/
Ohm's Law is not very useful for analyzing the behavior of components like these where resistance varies with voltage and current. Some have even suggested that "Ohm's Law" should be demoted from the status of a "Law" because it is not universal. It might be more accurate to call the equation (R=E/I) a definition of resistance, befitting of a certain class of materials under a narrow range of conditions.

And from 2019 a circuit design company says the following: Failure of Ohm's Law and Circuit Analysis
Even though experiments involving Ohm's law are a fundamental part of any electrical engineer's education, Ohm's law has its limits. At high frequency, in semiconductor devices, and in some circuits with feedback, a circuit will actually exhibit a nonlinear response to a driving voltage. This illustrates a failure of Ohm's law in that it no longer correctly describes the current induced in a circuit. As a result, using a SPICE-based simulation for circuits with nonlinear elements is complicated and requires specialized techniques.
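A concrete example of the nonlinearity those sources describe is a diode, whose current follows the Shockley equation rather than I = V/R. The saturation current and thermal voltage below are typical textbook values, assumed for illustration:

```python
import math

# Shockley diode equation: I = Is * (exp(V / Vt) - 1).
I_S = 1e-12  # saturation current in amps (assumed, typical silicon junction)
V_T = 0.026  # thermal voltage in volts near room temperature

def diode_current(v: float) -> float:
    return I_S * (math.exp(v / V_T) - 1.0)

# V/I changes with the operating point, so no single R satisfies Ohm's law.
r_at_0_6 = 0.6 / diode_current(0.6)
r_at_0_7 = 0.7 / diode_current(0.7)

print(r_at_0_7 < r_at_0_6)  # effective "resistance" collapses as V rises
```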

So I will ask again: what does Ohm's law have to do with CPU or node design?
 

Agent23

Not one step back!
> An article from 2012, amazing. From researchers that don't actually know what they're talking about even!
>
> "no quantum effects" my fat ass, quantum tunneling is a serious risk!
Yeah, ok, sure, actual physicists don't know about physics...
> Thankfully, people much more intelligent than those morons actually made a paper two years before which explains why Ohms Law does, in fact, break down: Microcircuit Modeling and Simulation Beyond Ohm’s Law
>Muh science is settled.
Looks like it isn't, according to the newer research that came after.
So, how many jabs did you get from St. Fauci?


> Meanwhile, college course curriculum for circuit design are saying the same thing: https://www.allaboutcircuits.com/textbook/direct-current/chpt-2/nonlinear-conduction/


> And from 2019 a circuit design company says the following: Failure of Ohm's Law and Circuit Analysis


> So I will ask again: what does ohms law have to do with CPU or node design?
Run energy through a medium and you get resistance, since no system is 100% efficient.
The diameter and length of the medium play a role; thus you get more of a loss, which gets radiated as heat.

Some minor changes in how materials behave as temperature rises, or because the current is not constant, will not make that simple reality go away; even plasma has some degree of resistance.
😏
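That argument is just Pouillet's law plus Joule heating; a quick sketch with an assumed, roughly copper-like resistivity:

```python
# R = rho * length / area (Pouillet's law); heat P = I^2 * R (Joule heating).
# The resistivity is an assumed, roughly copper-like value.
RHO = 1.7e-8  # resistivity in ohm-metres

def resistance(length_m: float, area_m2: float) -> float:
    return RHO * length_m / area_m2

def heat_watts(current_a: float, r_ohms: float) -> float:
    return current_a ** 2 * r_ohms

r = resistance(1.0, 1e-6)   # 1 m wire with a 1 mm^2 cross-section
print(heat_watts(10.0, r))  # longer or thinner wire -> more heat
```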
 

Vyor

My influence grows!
So I will ask again: what does Ohm's law have to do with CPU or node design?
 

ThatZenoGuy

Zealous Evolutionary Nano Organism
@Agent23
Vyor is the dude who doesn't understand that electrical resistance rises with temperature, requiring additional energy to get the same results (which means more heat, and so on).
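The effect being described is usually modeled with a linear temperature coefficient; the alpha below is roughly copper's, assumed for illustration:

```python
# Linear temperature model: R(T) = R0 * (1 + alpha * (T - T0)).
ALPHA = 0.0039  # per degree C, roughly copper (assumed)

def resistance_at(r0_ohms: float, t0_c: float, t_c: float) -> float:
    return r0_ohms * (1.0 + ALPHA * (t_c - t0_c))

print(resistance_at(1.0, 20.0, 90.0))  # a 70 C rise -> about 27% more resistance
```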
 

ThatZenoGuy

Zealous Evolutionary Nano Organism
> Aaaaand doesn't matter for a CPU or GPU with sufficient cooling.
That depends; if there's enough resistance despite low temperatures (for whatever reason), still no workie.
But yes, cooling prevents shit from stopping working, from a temperature standpoint at least.
 
